WorldWideScience

Sample records for robot vision system

  1. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    Full Text Available This paper proposes a fast colour-based object recognition and localization method for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. Object recognition is based only on the edge pixels, to speed up the computation. The edge pixels are detected by intelligently scanning a small subset of the image pixels, distributed over the whole image. A fast method for line and circle-centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points are visible from the robot's camera, the three rotation angles are adjusted to achieve precise localization of the robot and other objects. If no key point is detected, the robot position is estimated from the history of robot movement and the feedback from the motors and sensors. Experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
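
    A minimal sketch of the core idea in this abstract — classifying colours on a sparse pixel grid and keeping only grid points where the colour class changes. It assumes OpenCV/NumPy; the hue ranges, grid step and plain HLS conversion (rather than the authors' modified HSL model) are illustrative assumptions, not the paper's calibrated values.

        import cv2
        import numpy as np

        # Illustrative hue ranges for a colour-coded field (not the paper's values).
        COLOUR_RANGES = {"green": (35, 85), "orange": (5, 25), "yellow": (25, 35)}

        def classify(hue):
            for name, (lo, hi) in COLOUR_RANGES.items():
                if lo <= hue < hi:
                    return name
            return None

        def sparse_edge_pixels(bgr, step=8):
            """Scan every `step`-th pixel and keep grid points where the colour
            class changes along a row -- a cheap, sparse edge detector."""
            hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
            hue = hls[:, :, 0].astype(int)
            edges = []
            for y in range(0, hue.shape[0], step):
                prev = None
                for x in range(0, hue.shape[1], step):
                    cur = classify(hue[y, x])
                    if prev is not None and cur != prev:
                        edges.append((x, y))
                    prev = cur
            return edges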

  2. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    UTILIZING ROBOT OPERATING SYSTEM (ROS) IN ROBOT VISION AND CONTROL, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples. References cited include: Palmer, “Development of a navigation system for semi-autonomous operation of wheelchairs,” in Proc. of the 8th IEEE/ASME Int. Conf. on Mechatronic and Embedded Systems and Applications, Suzhou, China, 2012, pp. 257-262; and [30] G. Grisetti, C. Stachniss, and W. Burgard, “Improving grid-based SLAM...

  3. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors, such as shaft encoders which measure rotary position or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaptation, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential.

  4. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  5. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  6. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  7. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in the indoor environment. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a ‘differentially’ driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of a PD controller for wall following and a PID controller to control the speed of the geared DC motor.

  8. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  9. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
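
    The record's insect vision models are not reproduced here, but a hedged sketch of the kind of optic-flow computation it evaluates: dense Farneback flow on frames downsampled to a coarse, insect-like resolution. The 64x64 size and all parameters are illustrative assumptions.

        import cv2

        def mean_optic_flow(prev_bgr, cur_bgr, size=(64, 64)):
            """Downsample two frames to a coarse resolution and return the
            mean dense optic-flow vector (pixels/frame)."""
            prev = cv2.cvtColor(cv2.resize(prev_bgr, size), cv2.COLOR_BGR2GRAY)
            cur = cv2.cvtColor(cv2.resize(cur_bgr, size), cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(
                prev, cur, None, pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            return flow.mean(axis=(0, 1))  # (mean dx, mean dy)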

  10. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.
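
    A hedged sketch of the final alignment step described above: estimating the precise pose of a square marker from its four image corners with a pinhole camera model. The use of OpenCV's solvePnP, the corner ordering and the marker size are illustrative assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        def marker_pose(corners_px, marker_side_m, K, dist):
            """Pose of a square marker in the camera frame from its four image
            corners (4x2 array, assumed clockwise from top-left)."""
            s = marker_side_m / 2.0
            object_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                                   [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
            ok, rvec, tvec = cv2.solvePnP(object_pts,
                                          corners_px.astype(np.float32), K, dist)
            if not ok:
                raise RuntimeError("pose estimation failed")
            R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation, marker-to-camera
            return R, tvec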

  11. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device that communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  12. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  13. Computer Vision for Artificially Intelligent Robotic Systems

    Science.gov (United States)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced. Installed on an intelligent robotic system, it can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method, which saves considerable calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of this rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target, the characteristics of the target can be decided, and all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, the correlation method is utilized to overcome the limitation of the short ultrasonic burst, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by the μ-law coding method, and this data, together with the delay time and the horizontal and vertical angle information, is sent to the main control unit for further analysis. For the recognition process, a dynamic look-up table method is used: first, several recognition pattern tables are set up, and then a new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit.

  14. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D positions of the teats are computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  15. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the robot vision and operation system for an advanced nuclear robot. The robot vision system consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  16. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  17. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining, via a vision process method, whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing execution of the vision process method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.

  18. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong playing between the robot arm and a human is achieved with a high success rate of 88%.

  19. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes an ongoing research project on a remote rehabilitation assessment system that has a six-degree-of-freedom dual-eye vision robot to capture visual information, and a group of wearable sensors to acquire biomechanical signals. A server computer fixed on the robot provides services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically run an assessment program via the Internet. The preliminary results show that the smart devices, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  20. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
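
    A toy, position-only sketch of the parallel LSA/KF idea just described: the photogrammetric resection output acts as an external measurement that corrects the inertially propagated state. The reduced state vector, the identity observation matrix and the noise handling are simplifying assumptions, not the paper's full mechanization.

        import numpy as np

        def kf_predict(x, P, v, dt, Q):
            """Dead-reckoning prediction from inertially derived velocity v."""
            return x + v * dt, P + Q

        def kf_update(x, P, z, R):
            """Fuse an LSA resection position fix z (covariance R) into the
            predicted position x (covariance P); H = I since z observes x."""
            K = P @ np.linalg.inv(P + R)        # Kalman gain
            x_new = x + K @ (z - x)             # corrected position
            P_new = (np.eye(len(x)) - K) @ P    # corrected covariance
            return x_new, P_new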

  1. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  2. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making robots intelligent. If provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. Use of a special chip for correlation and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)
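
    A hedged software sketch of the correlation-based patch tracking this record describes (the original runs on a special correlation chip): a small template from the previous frame is re-located by normalized cross-correlation inside a search window. Patch and window sizes are illustrative; grayscale uint8 frames are assumed.

        import cv2

        def track_patch(prev_gray, cur_gray, patch_xy, patch=16, search=24):
            """Return the new centre of a small patch by correlating it over
            a search window in the current frame."""
            x, y = patch_xy
            h = patch // 2
            tmpl = prev_gray[y - h:y + h, x - h:x + h]
            x0, y0 = max(x - search, 0), max(y - search, 0)
            win = cur_gray[y0:y + search, x0:x + search]
            res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, best = cv2.minMaxLoc(res)
            return (x0 + best[0] + h, y0 + best[1] + h)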

  3. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  4. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with humans, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it to be aware of frontal users and to develop a positive interaction with them.

  5. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution to extract the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates the object geometric reconstruction for gripping planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.


  6. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control which operates in the 3D workspace and uses real-time image processing to perform feature extraction and return the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for the vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field-programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
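
    The abstract lists velocity profile generation among the hardware functions; the FPGA circuits themselves are not described, so the sketch below shows only the standard trapezoidal profile such a module typically generates, in software and with illustrative parameters.

        import numpy as np

        def trapezoid_profile(distance, v_max, a_max, dt=0.001):
            """Velocity samples for a trapezoidal move: ramp up at a_max,
            cruise at v_max, ramp down; falls back to a triangular profile
            when the distance is too short to reach v_max."""
            t_acc = v_max / a_max
            if a_max * t_acc ** 2 > distance:        # triangular case
                t_acc = np.sqrt(distance / a_max)
                v_max = a_max * t_acc
            t_cruise = max(distance - a_max * t_acc ** 2, 0.0) / v_max
            t_total = 2 * t_acc + t_cruise
            t = np.arange(0.0, t_total, dt)
            v = np.minimum.reduce([a_max * t,                # ramp up
                                   np.full_like(t, v_max),   # cruise
                                   a_max * (t_total - t)])   # ramp down
            return t, v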

  7. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    Industrial robots are part of almost all modern factories. Even though industrial robots nowadays manipulate objects of a huge variety in different environments, exact knowledge about both of them is generally assumed. The aim of this work is to investigate the ability of a robotic system to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide an effective solution to the targeted problem.

  8. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  9. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  10. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras stay at the same vertical level. However, the cameras may be displaced due to the severe conditions in which a robot operates or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The robot's stereo cameras were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
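
    A hedged sketch of the two calibration pipelines being compared, using OpenCV's standard detectors; the pattern dimensions and square size are illustrative assumptions, not the paper's setup.

        import cv2
        import numpy as np

        def calibrate(images_gray, pattern=(7, 6), square=0.025, circles=False):
            """Intrinsic calibration from grayscale views of a planar target,
            using either a checkerboard or a symmetric circle grid."""
            grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
            grid[:, :2] = np.mgrid[0:pattern[0],
                                   0:pattern[1]].T.reshape(-1, 2) * square
            obj_pts, img_pts = [], []
            for img in images_gray:
                if circles:
                    found, pts = cv2.findCirclesGrid(img, pattern)
                else:
                    found, pts = cv2.findChessboardCorners(img, pattern)
                if found:
                    obj_pts.append(grid)
                    img_pts.append(pts)
            rms, K, dist, _, _ = cv2.calibrateCamera(
                obj_pts, img_pts, images_gray[0].shape[::-1], None, None)
            return rms, K, dist  # reprojection error, intrinsics, distortion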

  11. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems that work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robots and environments. The performance of the sensor system is discussed in detail.

  12. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic parts-and-components assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
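
    The paper's GA formulation is not given in the abstract, so here is a toy sketch of GA-based inverse kinematics on a 2-link planar arm: joint angles are evolved to minimize end-effector position error. The link lengths, operators and parameters are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        L1, L2 = 0.30, 0.25   # link lengths in metres (illustrative)

        def forward(thetas):
            t1, t2 = thetas.T
            x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
            y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
            return np.stack([x, y], axis=-1)

        def ga_ik(target, pop=200, gens=150, sigma=0.1):
            """Evolve joint angles that place the end-effector at `target`."""
            P = rng.uniform(-np.pi, np.pi, size=(pop, 2))
            for _ in range(gens):
                err = np.linalg.norm(forward(P) - target, axis=1)
                elite = P[np.argsort(err)[:pop // 4]]            # selection
                parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
                P = parents.mean(axis=1)                         # blend crossover
                P += rng.normal(0.0, sigma, P.shape)             # mutation
            err = np.linalg.norm(forward(P) - target, axis=1)
            return P[np.argmin(err)]

        print(ga_ik(np.array([0.4, 0.2])))  # angles reaching (0.4, 0.2)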

  13. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision ’86, and “Automatic Generation of Recognition Features for Computer Vision,” Mudge, Turney and Volz, published in Robotica (1987). All of the... “...Occluded Parts,” (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. “Vision Algorithms for Hypercube Machines,” (T.N. Mudge

  14. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is applicable.
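
    A hedged sketch of the frame-transformation step that the first (global control point) approach relies on: recovering the rigid transform that maps points measured in the sensor frame onto their known global coordinates. This is the standard Kabsch/Umeyama solution; the paper's exact orientation-camera model is not detailed in the abstract.

        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares R, t with dst ~ R @ src + t, for Nx3 point sets
            src (sensor frame) and dst (global control coordinates)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T            # reflection-safe rotation
            t = dst_c - R @ src_c
            return R, t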

  15. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been developed for target recognition and for harsh environments, such as fog and underwater vision. Also, this technology has been

  16. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been developed for target recognition and for harsh environments, such as fog and underwater vision. Also, this technology has been

  17. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of the vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  18. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  19. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching and playback method is the main teaching technique for industrial robots. However, this technique takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy sets theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed, because fuzzy sets theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and test data have confirmed the usefulness of our design.
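
    A toy sketch of the fuzzy idea the abstract relies on — mapping a normalized image error to a joint-angle increment through triangular membership functions and centroid defuzzification, with no kinematic model. The membership shapes and rule outputs are illustrative assumptions, not the paper's rule base.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with corners a < b < c."""
            return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

        # (a, b, c, output increment in degrees) -- illustrative rule base
        RULES = [(-2.0, -1.0, 0.0, -2.0),   # error negative -> turn one way
                 (-1.0,  0.0, 1.0,  0.0),   # error near zero -> hold
                 ( 0.0,  1.0, 2.0,  2.0)]   # error positive -> turn the other

        def fuzzy_step(err_norm):
            """Defuzzified angle increment for a normalized error in [-1, 1]."""
            w = np.array([tri(err_norm, a, b, c) for a, b, c, _ in RULES])
            out = np.array([o for *_, o in RULES])
            return float((w * out).sum() / max(w.sum(), 1e-9))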

  20. Vision servo of industrial robot: A review

    Science.gov (United States)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied in various areas of production and daily life. With the continuous development of robot applications, the requirements on robots are also getting higher and higher. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are proposed.

  1. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and build a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids) and flying robots (to inspect bridges, for example), in various conditions, indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.

  2. A Model Vision of Sorting System Application Using Robotic Manipulator

    Directory of Open Access Journals (Sweden)

    Maralo Sinaga

    2010-08-01

    Full Text Available Image processing in today’s world grabs massive attention, as it leads to possibilities of broad application in many fields of high technology. The real challenge is how to improve the existing sorting system in the Modular Processing System (MPS) laboratory, which consists of four integrated stations of distribution, testing, processing and handling, with a new image processing feature. The existing sorting method uses a set of inductive, capacitive and optical sensors to differentiate object color. This paper presents a mechatronics color sorting system solution with the application of image processing. Supported by OpenCV, the image processing procedure senses the circular objects in an image captured in real time by a webcam and then extracts color and position information out of it. This information is passed as a sequence of sorting commands to the manipulator (Mitsubishi Movemaster RV-M1) that performs the pick-and-place mechanism. Extensive testing proves that this color-based object sorting system works 100% accurately under ideal conditions in terms of adequate illumination and circular objects’ shape and color. The circular objects tested for sorting are silver, red and black. For non-ideal conditions, such as unspecified colors, the accuracy reduces to 80%.
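
    A hedged sketch of the sensing step described above — detecting circular objects with a Hough transform and classifying each as red, silver or black from the mean color inside the circle. OpenCV is used as in the paper, but every threshold here is an illustrative assumption, not the authors' tuning.

        import cv2
        import numpy as np

        def find_and_classify(bgr):
            """Return [((x, y), label), ...] for circular objects in a frame."""
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                                       minDist=40, param1=100, param2=30,
                                       minRadius=10, maxRadius=60)
            results = []
            if circles is None:
                return results
            for x, y, r in np.round(circles[0]).astype(int):
                mask = np.zeros(gray.shape, np.uint8)
                cv2.circle(mask, (x, y), r, 255, -1)
                b, g, rc, _ = cv2.mean(bgr, mask=mask)
                if rc > 1.5 * max(b, g):
                    label = "red"
                elif (b + g + rc) / 3.0 > 150:
                    label = "silver"
                else:
                    label = "black"
                results.append(((x, y), label))
            return results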

  3. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  4. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.

  5. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

  6. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

  7. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines are used to process the omni-directional image, reducing the computational load and improving system efficiency. The lines are radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
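
    A minimal sketch of the unwrapping transform the abstract proposes, expressed with OpenCV's warpPolar (one standard way to do it; the output size and mirror centre are assumptions, and the paper's radial scan-line implementation may differ).

        import cv2

        def unwrap_omni(bgr, center, max_radius, out_w=720, out_h=200):
            """Unwrap an omni-directional image into a panoramic strip with
            angle along x and radius along y, so field lines appear straighter."""
            # warpPolar maps radius to x and angle to y, so request a
            # (radius=out_h, angle=out_w) image and rotate it afterwards.
            polar = cv2.warpPolar(bgr, (out_h, out_w), center, max_radius,
                                  cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
            return cv2.rotate(polar, cv2.ROTATE_90_CLOCKWISE)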

  8. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision, including low-level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low-level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four-degree-of-freedom robot head, a real-time vision system based on control of visual attention, and a behavior-based active eye vision system. The scope of the book pr

  9. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple 1-D view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images; a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs; an algorithm to recover a generic motion between two 1-D views which does not require a third view; a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  10. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biological research, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
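
    The distance-regulated speed adapting strategy lends itself to a very small sketch: the approach speed shrinks with the measured knife-to-cell distance. The function below is an illustration under assumed thresholds, not the authors' controller.

    ```python
    # Minimal sketch of a distance-regulated speed profile; all limits are
    # hypothetical placeholders, not values from the paper.
    def adapted_speed(distance_um, v_max=50.0, v_min=0.5, slow_zone_um=20.0):
        """Return an approach speed (um/s) from the knife-to-cell distance (um)."""
        if distance_um >= slow_zone_um:
            return v_max                  # far from the cell: full speed
        # Inside the slow zone, scale speed linearly with remaining distance.
        return max(v_min, v_max * distance_um / slow_zone_um)
    ```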

  11. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biological research, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  12. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  13. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, which is trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour.
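
    For orientation, the classic fixed-kernel mean shift loop that the paper improves upon can be written in a few lines of OpenCV; the adaptive shape kernel learned in the low-dimensional shape space is not reproduced here. The frame source and initial box are assumptions.

    ```python
    # Baseline mean shift tracking (hue histogram back-projection); this is the
    # standard OpenCV recipe, not the paper's adaptive-kernel variant.
    import cv2

    def track(frames, x, y, w, h):
        first = next(frames)
        hsv_roi = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        for frame in frames:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            _, window = cv2.meanShift(back_proj, window, term)
            yield window                  # (x, y, w, h) of the tracked target
    ```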

  14. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, which is trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  15. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to handle both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  16. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically in recent years with the development of new range sensors. Tremendous progress has been made, resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data, such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, an approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual-servoing-based alignment strategy for microassembly, and increasing object recognition reliability using related...

  17. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, a circle detection algorithm is used to detect the desired target and the sum-of-absolute-differences (SAD) algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
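
    The final triangulation step can be condensed to a few lines: after rectification, a matched target seen at corresponding left/right image points is lifted to 3D from the two projection matrices. This is a generic sketch of stereo triangulation, with the projection matrices assumed to come from the stereo calibration.

    ```python
    # Minimal sketch: triangulate one matched point pair given the 3x4
    # projection matrices P1 and P2 of the calibrated, rectified stereo rig.
    import numpy as np
    import cv2

    def triangulate(P1, P2, pt_left, pt_right):
        pl = np.asarray(pt_left, dtype=float).reshape(2, 1)
        pr = np.asarray(pt_right, dtype=float).reshape(2, 1)
        X_h = cv2.triangulatePoints(P1, P2, pl, pr)   # homogeneous 4x1
        return (X_h[:3] / X_h[3]).ravel()             # Euclidean (X, Y, Z)
    ```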

  18. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discussed the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, was described in detail. • The technical issues encountered during the research were discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means for in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically by using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels and sparse texture. The method developed in this paper enables credible identification of objects with shadows through invariant images and edge detection. The proposed algorithms are validated through our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects with different shapes and sizes can be picked up successfully.
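
    The contour-based pipeline in the highlights (contour detection, contour filtering, MER extraction, object location) can be outlined as below. This is an assumed reconstruction for illustration, not the authors' code; the thresholds and the OpenCV 4 API are assumptions.

    ```python
    # Rough outline of a contour -> filter -> minimum-enclosing-rectangle (MER)
    # pipeline for locating fragments in a grayscale image (OpenCV 4 API).
    import cv2

    def locate_fragments(gray, min_area=500):
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        poses = []
        for c in contours:
            if cv2.contourArea(c) < min_area:   # drop small/noise contours
                continue
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)  # MER of the contour
            poses.append((cx, cy, angle))       # in-plane location and angle
        return poses
    ```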

  19. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write

  20. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for substantiating the technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). Therefore, the objective of this work is to develop a technique for substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To solve the stated problem, we propose to define the target function of a designed IRC as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes to use, as an indicator of IRC efficiency, the probability of producing no defective products. Based on the functions the VS performs as part of the IRC information system, the accepted indicators of VS quality are as follows: the probability of proper recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions that determine the functional dependence of the probability of producing no defective products on the VS quality indicators and the probability of failures of the IRC technological equipment. The proposed technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development and, in particular, in the development of VS algorithms and software.

  1. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS and laser sensors, suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  2. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  3. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to man-less plants. Today's technologies demand the production of intelligent machines, which enable applications in various domains and services. Robotics is one such area: it encompasses a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent. This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Object using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  4. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, to make decisions and to learn from experience. The advanced inspection system is planned to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  5. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main function unit is composed of lookup tables to exploit the advantages of semiconductor memory: large-scale integration, high speed and low price. More than one unit may be operated in parallel, since the preprocessor is designed around the standard IEEE 796 bus. The operation time of the preprocessor in line segment extraction is typically 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by the model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and determines their locations and orientations

  6. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
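
    The conservative occupancy-grid update can be sketched in log-odds form: each stereo range reading lowers the occupancy of cells along the ray and raises it at the hit cell. The grid geometry and increments below are assumptions, not the authors' parameters.

    ```python
    # Toy log-odds occupancy-grid update for one range reading; robot_rc and
    # hit_rc are (row, col) grid cells, log_odds a 2D numpy array (assumed).
    import numpy as np

    def update_grid(log_odds, robot_rc, hit_rc, l_occ=0.85, l_free=-0.4):
        (r0, c0), (r1, c1) = robot_rc, hit_rc
        n = max(abs(r1 - r0), abs(c1 - c0), 1)
        for i in range(n):                      # cells traversed by the ray
            r = int(round(r0 + (r1 - r0) * i / n))
            c = int(round(c0 + (c1 - c0) * i / n))
            log_odds[r, c] += l_free            # evidence for "free"
        log_odds[r1, c1] += l_occ               # evidence for "occupied"
        return log_odds
    ```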

  7. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of a high-dimensional "observation space" with smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources, as a rule, conform to this model. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of various subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
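
    As a concrete instance of the technique, a standard manifold learning method can be applied to high-dimensional descriptors in a couple of lines; Isomap from scikit-learn is used here purely as an example, and the random data stands in for real image features.

    ```python
    # Illustrative manifold learning with Isomap (scikit-learn); the random
    # matrix is a stand-in for (n_samples, n_features) image descriptors.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))        # hypothetical high-dimensional data
    X_low = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
    print(X_low.shape)                    # (500, 2): intrinsic coordinates
    ```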

  8. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. This method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build the geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of a space obstacle in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research is useful for addressing the current autonomous navigation problem of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.

  9. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the execution speed of the robot. In an actual robot color vision system, it is considered that the system should operate in real time. We guessed this problem might be solved by using the bicolor analysis technique, and we have been testing a system which we hope will give us insight into the properties of bicolor vision. The experiment uses the red channel of a color CCD camera and an image from a monochrome camera to reproduce McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board, and the red image is then added to the red bank. Conversely, pure red, green and blue color components are obtained from the original bicolor images in the novel color system after a scaling factor is applied to each RGB image. Our search for a bicolor robot vision system was entirely successful.
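
    The described channel mixing reduces to a few array operations. The sketch below is an interpretation of the abstract in numpy (the RGB channel order and 8-bit clipping are assumptions).

    ```python
    # Minimal sketch of the bicolor mixing: copy the mono image into all three
    # banks, then add the red-camera image to the red bank (clipped to 8 bits).
    import numpy as np

    def bicolor(mono, red):
        """mono, red: uint8 images of equal size; returns an RGB uint8 image."""
        out = np.stack([mono, mono, mono], axis=-1).astype(np.uint16)
        out[..., 0] += red                 # add the red channel image to R
        return np.clip(out, 0, 255).astype(np.uint8)
    ```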

  10. Construction of the Control System of Cleaning Robots with Vision Guidance

    Directory of Open Access Journals (Sweden)

    Tian-Syung Lan

    2013-01-01

    Full Text Available The study uses Kinect, a modern depth-sensing camera, to detect objects on and above the ground. The collected data are used to construct a ground-level model, which is then used to lead an automatic guided vehicle. The core of the vehicle uses a PIC18F4520 microchip. Bluetooth wireless communication is adopted for remote connection to a computer, which is used to control the vehicle remotely. Operators send movement commands to the automatic guided vehicle through the computer. Once the destination point is identified, the vehicle is led forward. The guiding process maps out a path that directs the vehicle to the destination and avoids any obstacles. The study is based on existing cleaning robots that are available. Aside from fixed-point movement, through data analysis the system is also capable of identifying objects that are not supposed to appear on the ground, such as aluminum cans. By configuring the destination to an aluminum can, the automatic guided vehicle will drive to the can and pick it up. Such action is the realization of the cleaning function.

  11. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  12. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing a potential improvement of their quality.
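
    The two-phase strategy can be sketched schematically: a fast open-loop move toward the planned pose, followed by small vision-guided corrections until the optically tracked residual is within tolerance. All interfaces below (move_to, move_by, measure_pose) are hypothetical stand-ins, since the abstract does not describe the actual APIs.

    ```python
    # Schematic sketch of open-loop positioning followed by vision-based
    # correction; the callables are hypothetical robot/tracker interfaces.
    import numpy as np

    def reduce_fragment(move_to, move_by, measure_pose, target, tol=0.1):
        move_to(target)                        # phase 1: fast open-loop move
        while True:                            # phase 2: close the loop
            error = measure_pose() - target    # residual from optical tracking
            if np.linalg.norm(error) <= tol:
                return                         # within tolerance: done
            move_by(-error)                    # small closed-loop correction
    ```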

  13. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the developed 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier-series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
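
    For reference, the link-to-link homogeneous transform in the Denavit-Hartenberg convention mentioned in the abstract has the standard textbook form below (the paper's specific link parameters are not given here):

    ```latex
    A_i =
    \begin{bmatrix}
    \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
    \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
    0 & \sin\alpha_i & \cos\alpha_i & d_i \\
    0 & 0 & 0 & 1
    \end{bmatrix}
    ```

    Chaining these transforms over the joints yields the forward kinematics; the inverse kinematics inverts that mapping for the parallel mechanism.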

  14. Augmented models for improving vision control of a mobile robot

    DEFF Research Database (Denmark)

    Andersen, Gert Lysgaard; Christensen, Anders C.; Ravn, Ole

    1994-01-01

    This paper describes the modelling phases for the design of a path tracking vision controller for a three-wheeled mobile robot. It is shown that, by including the dynamic characteristics of vision and encoder sensors and implementing the total system in one multivariable control loop, one can obtain good performance even when using standard low-cost equipment and a comparatively low sampling rate. The plant model is a compound of kinematic, dynamic and sensor submodels, all integrated into a discrete state space representation. An intelligent strategy is applied for the vision sensor...

  15. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research effort between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. The project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems. The inner structure of the latter, complete with its functionalities, is also given in detail

  16. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    Directory of Open Access Journals (Sweden)

    Defeng Wu

    2016-08-01

    Full Text Available A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid of the pinhole model and the MLPNN is used to represent the real camera model. A standard ball is used to validate the effectiveness of the presented technique; the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.

  17. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facility Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  18. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facility Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  19. Beyond speculative robot ethics: A vision assessment study on the future of the robotic caretaker

    NARCIS (Netherlands)

    Plas, A.P. van der; Smits, M.; Wehrmann, C.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to

  20. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. This is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require initialization from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot, and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate cm-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
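
    The sensor-fusion step generalizes to the usual EKF predict/update cycle. The skeleton below is a generic sketch, with the motion model f, measurement model h, their Jacobians and the noise matrices Q, R left as placeholders that depend on the actual robot and camera models.

    ```python
    # Generic EKF step: predict the state from odometry/compass input u, then
    # correct with a visual measurement z. All models are caller-supplied.
    import numpy as np

    def ekf_step(x, P, u, z, f, f_jac, h, h_jac, Q, R):
        # Predict: propagate state and covariance through the motion model.
        x_pred = f(x, u)
        F = f_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # Update: fuse the camera observation of the target.
        H = h_jac(x_pred)
        y = z - h(x_pred)                     # innovation
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```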

  1. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology to the manufacturing of miniaturized electronic components. The concepts of flexible manufacturing systems (FMS), work cells, and work stations, and their control hierarchy, are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  2. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR, and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR, and the contributions of a knowledge database, robust control, and the combination of intensity and range imaging to building such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated into a VRR system as a hypothesis-generation, knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  3. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. The method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  4. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. The method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up

  5. Active Vision for Humanoid Robots

    NARCIS (Netherlands)

    Wang, X.

    2015-01-01

    Human perception is an active process. By altering its viewpoint rather than passively observing its surroundings, and by operating on sequences of images rather than on a single frame, the human visual system has the ability to explore the most relevant information based on knowledge, therefore when

  6. Fiscal 1998 achievement report on regional consortium research and development project. Venture business fostering regional consortium--Creation of key industries (Development of Task-Oriented Robot Control System TORCS based on versatile 3-dimensional vision system VVV--Vertical Volumetric Vision); 1998 nendo sanjigen shikaku system VVV wo mochiita task shikogata robot seigyo system TORCS no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research is conducted on the development of a highly autonomous robot control system, TORCS, for the purpose of realizing an automated, unattended manufacturing process. In the development of an interface, an indication function is built which easily adds or removes job attributes relative to given shape data. In the development of the 3-dimensional vision system VVV, a camera set and a new range finder are manufactured for ranging and recognition, the latter being an improvement on the conventional laser-aided range finder TDS. A 3-dimensional image processor is developed which captures images approximately 8 times faster than the conventional type. In the development of trajectory-calculating software programs, a job planner, an operation planner, and a vision planner are prepared. A robot program necessary for robot operation is also prepared. In an evaluation test involving a simulated casting line, the pick-and-place concept is successfully implemented for several kinds of cast articles positioned at random on a moving conveyor. Differences in environmental conditions between manufacturing sites are not pursued in this report, on the grounds that they should be discussed on a case-by-case basis. (NEDO)

  7. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    Science.gov (United States)

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.

  8. Cherry Picking Robot Vision Recognition System Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Zhang Qi Rong

    2016-01-01

    Full Text Available Using OpenCV functions, an image of a cherry in a natural environment is processed through image preprocessing, color recognition, threshold segmentation, morphological filtering, edge detection and the circular Hough transform; the cherry's center and circular contour can then be drawn, serving the purpose of machine picking. The system is simple and effective.
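
    A pipeline in the spirit of the abstract takes only a few OpenCV calls; the HSV red range and Hough parameters below are illustrative assumptions rather than the paper's values.

    ```python
    # Sketch: preprocess, colour-threshold, filter morphologically, then find
    # circular contours with the Hough transform (parameters are assumed).
    import cv2
    import numpy as np

    def find_cherries(bgr):
        blur = cv2.GaussianBlur(bgr, (9, 9), 2)
        hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 60), (10, 255, 255))   # red-ish hues
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=30, param1=100, param2=15,
                                   minRadius=8, maxRadius=60)
        return [] if circles is None else circles[0]  # rows of (x, y, radius)
    ```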

  9. Embedding visual routines in AnaFocus' Eye-RIS Vision Systems for closing the perception to action loop in roving robots

    Science.gov (United States)

    Jiménez-Marrufo, A.; Caballero-García, D. J.

    2011-05-01

    The purpose of the current paper is to describe how different visual routines can be developed and embedded in the AnaFocus Eye-RIS Vision System on Chip (VSoC) to close the perception-to-action loop within the roving robots developed in the framework of the SPARK II European project. The Eye-RIS Vision System on Chip employs a bio-inspired architecture where image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. In the first step, processing is fully parallel owing to dedicated circuit structures which are integrated close to the sensors. In the second step, processing is performed on digitally coded information data by means of digital processors. All these capabilities make the Eye-RIS VSoC very suitable for integration within small robots in general, and within the robots developed by the SPARK II project in particular. These systems provide image-processing capabilities and speed comparable to high-end conventional vision systems without the need for high-density image memory and intensive digital processing. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to be processed for perceptual needs with traditional approaches, more computationally feasible algorithms are required to extract the desired features from the scene in real time, in order to proceed efficiently with the consequent action. In this paper, the development of such algorithms and their implementation taking full advantage of the sensing-processing capabilities of the Eye-RIS VSoC are described.

  10. An Adaptable Robot Vision System Performing Manipulation Actions With Flexible Objects

    DEFF Research Database (Denmark)

    Bodenhagen, Leon; Fugl, Andreas R.; Jordt, Andreas

    2014-01-01

    This paper describes an adaptable system which is able to perform manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. As such objects easily change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path..., operating in real-time. Simulations have been used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for casted silicone test objects of regular shape. Note to Practitioners: The aim of this work... system should be viewed as a library of new technologies that have been proven to work in close-to-industrial conditions. As a rather basic, but necessary part, we provide a technology for determining the shape of the object when passing on, e.g., a conveyor belt prior to being handled. The main...

  11. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system has been successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system ...

  12. Robot Control for Dynamic Environment Using Vision and Autocalibration

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Lildballe, Jacob; Andersen, Nils Axel

    1997-01-01

    To enhance flexibility and extend the area of applications for robotic systems, it is important that the systems are capable of handling uncertainties and responding to (random) human behaviour. A vision system must very often be able to work in a dynamic "noisy" world where the placement of objects... can vary within certain restrictions. Furthermore, it would be useful if the system were able to recover automatically after serious changes have been applied, for instance if the camera has been moved. In this paper an implementation of such a system is described. The system is a robot capable of playing

  13. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  14. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  15. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized by the software. Using a dedicated set of mathematical equations, the movement of the robot is continuously computed and monitored with webcams. The positioning error is finally analyzed.
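
    The record gives no formulas, but the heart of any such two-webcam marker tracker is triangulation: recovering a marker's 3D position from its pixel coordinates in both views. Below is a minimal linear (DLT) triangulation sketch in Python/NumPy, assuming the two camera projection matrices P1 and P2 have already been obtained by calibration; all names are illustrative, not taken from the paper.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point seen by two cameras.

            P1, P2 : 3x4 camera projection matrices (assumed calibrated).
            x1, x2 : (u, v) pixel coordinates of the same marker in each view.
            Returns the 3D point in the world frame.
            """
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            # The solution is the right singular vector with the smallest
            # singular value of A.
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]   # de-homogenize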

  16. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... accurate information concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc...

  17. A cognitive approach to vision for a mobile robot

    Science.gov (United States)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g., if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The inputs from the real camera and from the virtual camera are compared using local Gaussians, creating an error mask that indicates the main differences between them. This mask is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both
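
    As a rough illustration of the real-versus-virtual comparison described above, the sketch below (Python/OpenCV) smooths both views with a Gaussian, thresholds their absolute difference into an error mask, and picks the strongest mismatch as the next fixation candidate. It is a simplified stand-in for the authors' local-Gaussian comparison, with made-up parameter values.

        import cv2
        import numpy as np

        def fixation_error_mask(real_img, virtual_img, ksize=15, thresh=25):
            """Compare real and rendered views; return an error mask and the
            pixel with the largest mismatch (a candidate next fixation).

            Both images are smoothed so only coarse, local differences
            survive; ksize and thresh are illustrative assumptions.
            """
            g_real = cv2.GaussianBlur(
                cv2.cvtColor(real_img, cv2.COLOR_BGR2GRAY), (ksize, ksize), 0)
            g_virt = cv2.GaussianBlur(
                cv2.cvtColor(virtual_img, cv2.COLOR_BGR2GRAY), (ksize, ksize), 0)
            diff = cv2.absdiff(g_real, g_virt)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            y, x = np.unravel_index(np.argmax(diff), diff.shape)
            return mask, (x, y)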

  18. Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Park, S. I.; Kim, K. Y.

    2003-01-01

It is very important to reduce the computational time when estimating the parameters of a vision control algorithm for robot position control in real time. Unfortunately, the batch estimation commonly used requires too much computational time because it is an iterative method, so batch estimation is difficult to use for robot position control in real time. On the other hand, Extended Kalman Filtering (EKF) has many advantages for calculating the parameters of a vision system, in that it is a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for robot vision control in real time. The vision system model used in this study involves six parameters to account for the inner (orientation, focal length, etc.) and outer (the relative location between robot and camera) parameters of the camera. EKF is first applied to estimate these parameters, and then, with these estimated parameters, to estimate the robot's joint angles used for the robot's operation. Finally, the practicality of the vision control scheme based on the EKF has been experimentally verified by performing robot position control.
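
    For readers unfamiliar with the recursive procedure the record contrasts with batch estimation, here is a generic EKF predict/update skeleton in Python/NumPy. It is not the paper's specific formulation; in that setting the state would hold the six camera parameters and h would be the camera projection model.

        import numpy as np

        class EKF:
            """Minimal extended Kalman filter for recursive estimation.

            f, h are the (possibly nonlinear) process and measurement
            models; F_jac, H_jac return their Jacobians at the current
            estimate.
            """
            def __init__(self, x0, P0, Q, R):
                self.x, self.P, self.Q, self.R = x0, P0, Q, R

            def predict(self, f, F_jac):
                F = F_jac(self.x)
                self.x = f(self.x)
                self.P = F @ self.P @ F.T + self.Q

            def update(self, z, h, H_jac):
                H = H_jac(self.x)
                y = z - h(self.x)                       # innovation
                S = H @ self.P @ H.T + self.R           # innovation covariance
                K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P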

  19. Vision Based Tracker for Dart-Catching Robot

    OpenAIRE

    Linderoth, Magnus; Robertsson, Anders; Åström, Karl; Johansson, Rolf

    2009-01-01

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart catching robot. Computer vision is used to detect a flying dart and a filtering algorithm predicts its future trajectory. This will give data to a robot controller allowing it to catch the dart. The performance of the implemented components indicates that the dart catching application can be made to work well. Conclusions are also made about what fea...

  20. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  1. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league. First, according to the rules of RoboCup, we design the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omni-directional vision module, front vision module, image processing and recognition module, target object positioning and real coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The soccer robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system deals with the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize the sensor data fusion method for the control system parameters, self-localization and world modeling. A vision-based self-localization and the conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for different types of objects recognized in the environment. In the control strategies, we present three state modes, which include the Attack Strategy, Defense Strategy and Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  2. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions between the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  3. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions between the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.
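
    To make the batch-versus-recursive contrast concrete, a generic batch Newton-Raphson (Gauss-Newton) least-squares loop is sketched below in Python; every iteration revisits all observations, which is why such methods cost more per new measurement than a recursive filter. This is a textbook sketch, not the authors' N-R implementation.

        import numpy as np

        def gauss_newton(residual, jacobian, theta0, iters=20, tol=1e-8):
            """Batch Newton-Raphson-style least-squares estimation.

            residual(theta) -> stacked residual vector over ALL observations;
            jacobian(theta) -> its Jacobian at theta.
            """
            theta = np.asarray(theta0, dtype=float)
            for _ in range(iters):
                r = residual(theta)
                J = jacobian(theta)
                # Solve J @ step ~= -r in the least-squares sense.
                step, *_ = np.linalg.lstsq(J, -r, rcond=None)
                theta = theta + step
                if np.linalg.norm(step) < tol:
                    break
            return theta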

  4. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  5. EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding

    Directory of Open Access Journals (Sweden)

    Veronica Penza

    2017-05-01

Full Text Available In abdominal surgery, intraoperative bleeding is one of the major complications that affect the outcome of minimally invasive surgical procedures. One of the causes is attributed to accidental damage to arteries or veins, and one of the possible risk factors falls on the surgeon's skills. This paper presents the development and application of an Enhanced Vision System for Robotic Surgery (EnViSoRS), based on a user-defined Safety Volume (SV) tracking to minimize the risk of intraoperative bleeding. It aims at enhancing the surgeon's capabilities by providing Augmented Reality (AR) assistance toward the protection of vessels from injury during the execution of surgical procedures with a robot. The core of the framework consists of: (i) a hybrid tracking algorithm (LT-SAT tracker) that robustly follows a user-defined Safety Area (SA) in the long term; (ii) a dense soft tissue 3D reconstruction algorithm, necessary for the computation of the SV; (iii) AR features for visualization of the SV to be protected and of a graphical gauge indicating the current distance between the instruments and the reconstructed surface. EnViSoRS was integrated with a commercial robotic surgical system (the dVRK system) for testing and validation. The experiments aimed at demonstrating the accuracy, robustness, performance, and usability of EnViSoRS during the execution of a simulated surgical task on a liver phantom. Results show an overall accuracy in accordance with surgical requirements (<5 mm), and high robustness in the computation of the SV in terms of precision and recall of its identification. The optimization strategy implemented to speed up the computational time is also described and evaluated, providing an AR features update rate of up to 4 fps, without impacting the real-time visualization of the stereo endoscopic video. Finally, qualitative results regarding system usability indicate that the proposed system integrates well with the commercial surgical robot and

  6. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  7. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  8. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration, subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
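
    The record does not spell out the incremental step, but a common way to realize it is a damped least-squares Jacobian update with the joint velocities clamped to their limits, as sketched below in Python/NumPy. Treat it as an illustrative variant, not the paper's exact control law.

        import numpy as np

        def incremental_ik_step(q, x_current, x_desired, jacobian, dt,
                                qdot_max, damping=0.01):
            """One incremental IK step toward the predicted target, with
            joint speeds clamped to hardware limits.

            Damped least squares keeps the step well defined near
            singularities; starting from the current configuration keeps
            the solution on one IK branch, avoiding multiple solutions.
            """
            J = jacobian(q)                               # 6 x n task Jacobian
            e = x_desired - x_current                     # task-space error
            dq = J.T @ np.linalg.solve(
                J @ J.T + damping * np.eye(J.shape[0]), e)
            qdot = np.clip(dq / dt, -qdot_max, qdot_max)  # enforce limits
            return q + qdot * dt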

  9. 9th International Conference on Robotics, Vision, Signal Processing & Power Applications

    CERN Document Server

    Iqbal, Shahid; Teoh, Soo; Mustaffa, Mohd

    2017-01-01

The proceeding is a collection of research papers presented at the 9th International Conference on Robotics, Vision, Signal Processing & Power Applications (ROVISP 2016) by researchers, scientists, engineers and academicians, as well as industrial professionals from all around the globe, presenting their research results and development activities as oral or poster presentations. The topics of interest are as follows, but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications • Engineering Education.

  10. 8th International Conference on Robotic, Vision, Signal Processing & Power Applications

    CERN Document Server

    Mustaffa, Mohd

    2014-01-01

The proceeding is a collection of research papers presented at the 8th International Conference on Robotics, Vision, Signal Processing and Power Applications (ROVISP 2013) by researchers, scientists, engineers and academicians, as well as industrial professionals from all around the globe. The topics of interest are as follows, but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications

  11. A novel method of robot location using RFID and stereo vision

    Science.gov (United States)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables the robot to obtain global coordinates with good accuracy while quickly adapting to unfamiliar and new environments. This method uses RFID tags as artificial landmarks; the 3D coordinate of each tag under the global coordinate system is written in its IC memory. The robot can read it through an RFID reader; meanwhile, using stereo vision, the 3D coordinate of the tag under the robot coordinate system is measured. Combined with the robot's attitude coordinate system transformation matrix from the pose measuring system, the translation of the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location under the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby; the result is much more accurate than that of other localization methods.
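
    The frame transformation described above reduces to one line of algebra: if the tag's global position, its position in the robot frame, and the robot's attitude rotation are known, the robot's global position is the translation between the two frames. A minimal Python/NumPy sketch, with illustrative names:

        import numpy as np

        def robot_global_position(p_tag_global, p_tag_robot, R_attitude):
            """Recover the robot's global position from one RFID landmark.

            p_tag_global : tag 3D position in the global frame (read from tag).
            p_tag_robot  : tag 3D position in the robot frame (stereo vision).
            R_attitude   : 3x3 rotation, robot frame -> global frame
                           (from the pose measuring system).

            Since p_tag_global = R_attitude @ p_tag_robot + t, the
            translation t is the robot's position in the global frame.
            """
            return np.asarray(p_tag_global) - R_attitude @ np.asarray(p_tag_robot)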

  12. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  13. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows one to correct the erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board Raspberry Pi 3 computer. The results of an experiment on mobile robot navigation with the use of this control system are presented.

  14. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard, and it is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; d) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; e) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of its implementation in cardiology departments.

  15. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

This paper describes ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper, as well as preliminary experimental results of the system evaluation.

  16. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    Directory of Open Access Journals (Sweden)

    Il Jae Lee

    2009-09-01

Full Text Available In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination change, vertical thresholding, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.
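
    As a rough sketch of the line-extraction stage described above, the Python/OpenCV fragment below thresholds the image and extracts line segments with a probabilistic Hough transform; the thinning and separated-Hough refinements from the record are omitted, and all parameter values are assumptions.

        import cv2
        import numpy as np

        def detect_laser_lines(gray, thresh=200):
            """Detect projected laser stripes as line segments.

            A bright-pixel threshold isolates the stripes, then a
            probabilistic Hough transform extracts line segments.
            """
            _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
            lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                                    threshold=50, minLineLength=40,
                                    maxLineGap=5)
            return [] if lines is None else [tuple(l[0]) for l in lines]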

  17. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.

  18. Robot-laser system

    International Nuclear Information System (INIS)

    Akeel, H.A.

    1987-01-01

A robot-laser system is described for providing a laser beam at a desired location, the system comprising: a laser beam source; a robot including a plurality of movable parts including a hollow robot arm having a central axis along which the laser source directs the laser beam; at least one mirror for reflecting the laser beam from the source to the desired location, the mirror being mounted within the robot arm to move therewith and relative thereto about a transverse axis that extends angularly to the central axis of the robot arm; and an automatic programmable control system for automatically moving the mirror about the transverse axis relative to and in synchronization with movement of the robot arm to thereby direct the laser beam to the desired location as the arm is moved

  19. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non-task-specific grasps of unknown ... The thesis presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour-based grasping methods, the definition and evaluation of surface-based grasping methods, the definition of a benchmark for testing ... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents.

  20. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

Full Text Available In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles and to maneuver the robot. A group of people who are walking will be tracked as multiple moving obstacles, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collisions. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments show that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.
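
    The Bayesian part of the method can be illustrated with a plain discrete Bayes update of the belief that an obstacle is present, as below (Python). The hit and false-alarm rates are invented for the example; the paper's actual models are not given in the record.

        def bayes_update(prior, likelihood_if_present, likelihood_if_absent):
            """One Bayesian update of the belief that an obstacle is present.

            prior                 : P(obstacle) before the new stereo reading.
            likelihood_if_present : P(reading | obstacle present).
            likelihood_if_absent  : P(reading | obstacle absent).
            Returns the posterior P(obstacle | reading).
            """
            num = likelihood_if_present * prior
            den = num + likelihood_if_absent * (1.0 - prior)
            return num / den

        # Example: a noisy detector (80% hit rate, 10% false alarms).
        belief = 0.5
        for _ in range(3):              # three consecutive detections
            belief = bayes_update(belief, 0.8, 0.1)
        print(round(belief, 3))         # belief rises toward 1.0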

  1. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A great achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.

  2. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A great achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.
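
    As a toy illustration of how such a fuzzy inference step can turn an image-derived uncertainty into a crisp navigation decision, the self-contained Python sketch below maps a normalized pipeline offset to a steering command using triangular memberships and weighted-average defuzzification. The memberships and rule set are invented for the example.

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def steer_decision(offset):
            """Pipeline offset in the image, normalized to [-1, 1],
            mapped to a crisp steering command for the ROV.

            Rules: offset left -> steer right; centered -> straight;
            offset right -> steer left.
            """
            mu_left   = tri(offset, -1.5, -1.0, 0.0)
            mu_center = tri(offset, -0.5,  0.0, 0.5)
            mu_right  = tri(offset,  0.0,  1.0, 1.5)
            w = mu_left + mu_center + mu_right
            if w == 0:
                return 0.0
            # Rule consequents: steering setpoints for each fuzzy class.
            return (mu_left * 0.5 + mu_center * 0.0 + mu_right * -0.5) / w

        print(steer_decision(-0.6))   # pipeline left of center -> steer right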

  3. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a statistical sensor measurement model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
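
    Grid-based fusion of the kind described above is commonly implemented with log-odds cell updates, as in the Python/NumPy sketch below; the two log-odds increments stand in for the record's statistical sensor model and are assumed values.

        import numpy as np

        class OccupancyGrid:
            """Grid-based fusion of range data in log-odds form.

            Each cell accumulates evidence over space and time; the
            sensor model is summarized by two fixed log-odds increments.
            """
            L_OCC, L_FREE = 0.85, -0.4   # assumed sensor-model constants

            def __init__(self, size, resolution):
                self.logodds = np.zeros((size, size))
                self.res = resolution    # metres per cell

            def integrate(self, occupied_cells, free_cells):
                for i, j in occupied_cells:
                    self.logodds[i, j] += self.L_OCC
                for i, j in free_cells:
                    self.logodds[i, j] += self.L_FREE

            def probabilities(self):
                # Convert accumulated log-odds back to P(occupied).
                return 1.0 / (1.0 + np.exp(-self.logodds))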

  4. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Directory of Open Access Journals (Sweden)

    Nuzzi R

    2018-02-01

Full Text Available Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and in ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. Keywords: robotic surgery related to vision, robots, ophthalmological applications of robotics, eye and brain robots, eye robots

  5. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

Full Text Available This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces are distinguished by the image processing algorithm implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command the mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are transformed in the same way. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and a sound balance between complexity and feasibility for real-time applications.

  6. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

Full Text Available The purpose of this work is to explore the design principles for a Real-Time Robotic Multi Camera Vision System, in a case study involving a real-world autonomous driving competition. Design practices from the vision and real-time research areas are applied to a Real-Time Robotic Vision application, thus exemplifying good algorithm design practices, the advantages of employing the "zero copy one pass" methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a "flat" signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for the mentioned tasks and then selects the controller hardware. Optimization of the algorithms yielded improvements from 1.5 times to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned Robotic Vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to the proper choice of control platform. The presented design principles are portable to other applications where Real-Time constraints exist.

  7. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images from an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the feature points tracked in the image sequence and the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
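
    The record's adaptive law is not reproduced here, but its general shape, given a linear parameterization y = W · theta of the measurements, is a gradient-style update driving the prediction error to zero. A schematic Python/NumPy sketch (not the paper's proven exponentially convergent law):

        import numpy as np

        def adaptive_step(theta_hat, W, y, gamma=0.05):
            """One gradient-style adaptive update, in the spirit of
            Slotine-Li adaptation.

            theta_hat : current estimate (robot and feature positions).
            W         : regressor built from the projection model.
            y         : measured quantity (tracked feature coordinates).
            """
            e = W @ theta_hat - y                 # prediction error
            return theta_hat - gamma * (W.T @ e)  # gradient descent on |e|^2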

  8. The development of advanced robotics for the nuclear industry. The development of remote sensing robot vision system based on multi-sensor fusion

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Park, Gil Heum; Kim, Su Jung; Ryu, Kang Su; Bae, Sung Ho; Baek, Young Mok; Kim, Kyung Ho; Kim, Byung Sun; Sung, Jong Gyu; Kang, Bong Su; Lee, Jae Ho; Woo, Chang Kyun [Kyungpook National University, Taegu (Korea, Republic of)

    1995-08-01

We developed an automatic recognition system for recognizing digits and estimating the position of the needle in digital and analog instruments, respectively. As we use thresholding, region labeling and projection techniques for feature extraction, we confirm that the recognition system operates well under various lighting conditions. The recognition rate for the needle deviation of 10 analog meters and the digits of 20 digital meters is found to be 100%. Also, we incorporate a neural network structure for recognizing the names of instruments. This neural network achieves 90% accuracy for analog meters and 95% accuracy for digital instruments. To estimate 3-dimensional depth information from stereo images, we proposed an adaptive stochastic relaxation technique to guarantee more accurate depth resolution. For the IR image processing, we included a study on edge detection, histogram equalization and the capability of intensity reversal. Finally, we construct an image database system with a B+ tree, in which database searching can be performed with ease and efficiency regardless of the size of the image data. (author). 33 refs.

  9. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches to human action recognition performed on public datasets (non-robot-based), including a novel approach of our own based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performance, especially given the adverse test conditions of the robot, relative to those of a fixed camera.
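
    The intersection-kernel SVM mentioned above can be reproduced with a precomputed Gram matrix, as in the Python/scikit-learn sketch below; the data are random stand-ins for dense-trajectory bag-of-words histograms, and all sizes are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        def intersection_kernel(A, B):
            """Histogram intersection kernel between bag-of-words
            histograms: K(a, b) = sum_i min(a_i, b_i)."""
            return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

        # Hypothetical data: one bag-of-words histogram per video clip.
        X_train = np.random.rand(40, 200)
        y_train = np.random.randint(0, 4, 40)
        X_test = np.random.rand(10, 200)

        clf = SVC(kernel="precomputed")
        clf.fit(intersection_kernel(X_train, X_train), y_train)
        pred = clf.predict(intersection_kernel(X_test, X_train))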

  10. Vision-Based Robot Following Using PID Control

    OpenAIRE

    Chandra Sekhar Pati; Rahul Kala

    2017-01-01

Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Derivative) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to ...
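
    A textbook discrete PID controller of the kind the record describes is sketched below in Python; the error would be, e.g., the deviation of the measured gap from the desired follower-to-master distance. Gains are placeholders, not the paper's tuned values.

        class PID:
            """Discrete PID controller (placeholder gains)."""
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = None

            def step(self, error, dt):
                self.integral += error * dt
                derivative = (0.0 if self.prev_error is None
                              else (error - self.prev_error) / dt)
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        # Follower keeps a fixed distance: error = measured_gap - desired_gap.
        pid = PID(kp=1.2, ki=0.05, kd=0.3)
        velocity_cmd = pid.step(error=0.4, dt=0.05)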

  11. An automated miniature robotic vehicle inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter [Centre for Ultrasonic Engineering, University of Strathclyde, 204 George Street, Glasgow, G1 1XW (United Kingdom)

    2014-02-18

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.

  12. An automated miniature robotic vehicle inspection system

    International Nuclear Information System (INIS)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter

    2014-01-01

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software

  13. Embedded Visual System and its Applications on Robots

    CERN Document Server

    Xu, De

    2010-01-01

Embedded vision systems such as smart cameras have developed rapidly in recent years. Vision systems have become smaller and lighter, while their performance has improved. The algorithms in embedded vision systems are constrained by CPU frequency, memory size, and architecture. The goal of this e-book is to provide an advanced reference work for engineers, researchers and scholars in the field of robotics, machine vision, and automation, and to facilitate the exchange of their ideas, experiences and views on embedded vision system models. The effectiveness for all methods is

  14. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

Visual inspection of radioactive components needs remote inspection systems for human safety and for the protection of equipment (CCD imagers) from radiation. Elaborate view transport optics is required to deliver images to safe areas while maintaining the fidelity of the image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  15. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  16. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him or her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method that enables individuals with impaired motor control to operate the robot arm more efficiently by facilitating fine motion control. The advantage of semi-autonomous control was not so obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. It provides the user with task-specific intelligent semi-autonomous manipulation assistance, gives the user the feeling that he or she is still in control at any moment, and is compatible with different types of new and existing manual control methods for assistive robotic manipulators.

  17. Implementation of a robotic flexible assembly system

    Science.gov (United States)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  18. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with integral cumulative error. The proposed fusion is anticipated to compensate for these shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
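
    One simple way to fuse two position estimates of the kind described above is inverse-covariance weighting, sketched below in Python/NumPy. The record does not state the authors' fusion rule, so this is only an illustration of the general idea.

        import numpy as np

        def fuse_positions(p_fbg, cov_fbg, p_vision, cov_vision):
            """Fuse two tip-position estimates by inverse-covariance
            weighting: the drift-prone FBG estimate and the outlier-prone
            vision estimate are each weighted by their confidence.

            Returns the fused position and its covariance.
            """
            W1 = np.linalg.inv(cov_fbg)
            W2 = np.linalg.inv(cov_vision)
            cov = np.linalg.inv(W1 + W2)
            return cov @ (W1 @ p_fbg + W2 @ p_vision), cov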

  19. Intelligent manipulation technique for multi-branch robotic systems

    Science.gov (United States)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework for a robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) framework integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for nominal robotics knowledge implementation; and self-organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.

  20. Robotic anesthesia - A vision for the future of anesthesia

    OpenAIRE

    Hemmerling, Thomas M.; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick

    2011-01-01

    Summary This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered.

  1. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration, in which the operator delegates exploration and visual recognition tasks to the drone and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes the technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy for detecting anomalies on the surface.

  2. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures. © 2012, Copyright the Authors. Artificial Organs © 2012, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  3. Toward The Robot Eye: Isomorphic Representation For Machine Vision

    Science.gov (United States)

    Schenker, Paul S.

    1981-10-01

    This paper surveys some issues confronting the conception of models for general purpose vision systems. We draw parallels to requirements of human performance under visual transformations naturally occurring in the ecological environment. We argue that successful real world vision systems require a strong component of analogical reasoning. We propose a course of investigation into appropriate models, and illustrate some of these proposals by a simple example. Our study emphasizes the potential importance of isomorphic representations - models of image and scene which embed a metric of their respective spaces, and whose topological structure facilitates identification of scene descriptors that are invariant under viewing transformations.

  4. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Active vision is a direct visualization technique that uses a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. The RGI system provides 2D and 3D image data from several images and, moreover, provides clear images in invisible fog and smoke environments by summing time-sliced images. Range-gated (RG) imaging is now an emerging technology in the field of surveillance for security applications, especially for visualization in invisible night and fog environments. Although RGI viewing was discovered in the 1960s, the technology has become increasingly applicable thanks to the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, it has been applied to target recognition and to harsh environments such as fog and underwater vision, and 3D imaging based on range-gated imaging has been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment, and the experimental results of this new vision system are described.
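
    As a rough illustration of how range gating yields both a 2D image by summation and a 3D range map from gate delays (the per-pixel argmax rule and the data layout here are assumptions made for this sketch, not the authors' implementation):

```python
import numpy as np

C_MM_PER_NS = 299.8  # light travels ~299.8 mm per nanosecond

def rgi_depth_map(gated_frames, gate_delays_ns):
    """Hedged sketch of range-gated imaging: each frame integrates returns
    arriving within one gate; the strongest return per pixel picks the gate,
    and range follows from the round-trip delay, R = c * t / 2.

    gated_frames   : (n_gates, H, W) stack of time-sliced intensity images
    gate_delays_ns : (n_gates,) delay of each gate in nanoseconds
    """
    frames = np.asarray(gated_frames, dtype=np.float32)
    best_gate = frames.argmax(axis=0)                      # (H, W) gate indices
    depth_mm = np.asarray(gate_delays_ns)[best_gate] * C_MM_PER_NS / 2.0
    visibility = frames.sum(axis=0)                        # 2D image by summation
    return depth_mm, visibility
```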

  5. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

    Active vision is a direct visualization technique that uses a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. The RGI system provides 2D and 3D image data from several images and, moreover, provides clear images in invisible fog and smoke environments by summing time-sliced images. Range-gated (RG) imaging is now an emerging technology in the field of surveillance for security applications, especially for visualization in invisible night and fog environments. Although RGI viewing was discovered in the 1960s, the technology has become increasingly applicable thanks to the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, it has been applied to target recognition and to harsh environments such as fog and underwater vision, and 3D imaging based on range-gated imaging has been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment, and the experimental results of this new vision system are described.

  6. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Full Text Available Applications such as robots employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Derivative) controller. Here, we use two differential-wheel-drive robots: a master robot and a follower robot. The master robot is manually controlled and the follower robot is programmed to follow it. For the master robot, a Bluetooth module receives the user's commands from an Android application; these are processed by the master robot's controller and used to move the robot. The follower robot receives images from the Kinect sensor mounted on it and recognizes the master robot. It identifies the x, y positions using the camera and the depth using the Kinect depth sensor. From the x, y, and z locations of the master robot, the follower robot computes the angle and distance between the master and follower robots, which are given as the error terms of a PID controller. Using this, the follower robot follows the master robot. A PID controller is based on feedback and tries to minimize the error. Experiments were conducted with two indigenously developed robots, one depicting a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
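
    The gains, rates and mixing rule below are invented for illustration; this sketch only shows the structure the abstract describes, with one PID loop on the bearing error and one on the following distance:

```python
class PID:
    """Minimal PID loop; all gains, rates and limits are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heading_pid = PID(kp=1.2, ki=0.01, kd=0.05, dt=0.05)   # acts on bearing error
range_pid = PID(kp=0.8, ki=0.0, kd=0.1, dt=0.05)       # acts on distance error

def follower_wheel_speeds(bearing_rad, distance_m, target_m=1.0):
    """Differential-drive mixing: drive forward to close the distance gap,
    turn to zero the bearing to the master robot."""
    v = range_pid.update(distance_m - target_m)   # forward component
    w = heading_pid.update(bearing_rad)           # turning component
    return v - w, v + w                           # left, right wheel speeds
```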

  7. Smart mobile robot system for rubbish collection

    Science.gov (United States)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

    This paper records the research and procedures involved in developing a smart mobile robot with a detection system for collecting rubbish. The objective is to design a mobile robot that can detect and recognize medium-sized rubbish such as drink cans, estimate the position of the rubbish relative to the robot, and approach the rubbish based on that position. The paper reviews types of image processing, detection and recognition methods, and image filters. The project implements an RGB subtraction method as the primary detection system, together with an algorithm for distance measurement based on the image plane. The project is limited to using a computer webcam as the sensor; consequently, the robot is only able to approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinct RGB colour components.
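
    The exact RGB subtraction rule is not given in the record; a plausible hedged sketch for detecting, say, predominantly red cans subtracts the other channels and thresholds the result (the threshold and area values are invented):

```python
import cv2
import numpy as np

def detect_red_objects(frame_bgr, score_thresh=60, min_area=200):
    """Score each pixel by how much red exceeds the other channels,
    threshold, clean up with morphology, and return blob bounding boxes."""
    arr = frame_bgr.astype(np.int16)
    b, g, r = arr[:, :, 0], arr[:, :, 1], arr[:, :, 2]
    red_score = r - (g + b) // 2
    mask = (red_score > score_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```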

  8. Robotic systems in spine surgery.

    Science.gov (United States)

    Onen, Mehmet Resid; Naderi, Sait

    2014-01-01

    Surgical robotic systems have been available for almost twenty years. The first surgical robotic systems were designed as supportive systems for laparoscopic approaches in general surgery (the first procedure was a cholecystectomy in 1987). The da Vinci Robotic System is the most common system used for robotic surgery today. This system is widely used in urology, gynecology and other surgical disciplines, and recently there have been initial reports of its use in spine surgery, for transoral access and anterior approaches for lumbar inter-body fusion interventions. SpineAssist, which is widely used in spine surgery, and Renaissance Robotic Systems, which are considered the next generation of robotic systems, are now FDA approved. These robotic systems are designed for use as guidance systems in spine instrumentation, cement augmentations and biopsies. The aim is to increase surgical accuracy while reducing the intra-operative exposure to harmful radiation to the patient and operating team personnel during the intervention. We offer a review of the published literature related to the use of robotic systems in spine surgery and provide information on using robotic systems.

  9. Declarative Rule-based Safety for Robotic Perception Systems

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    Mobile robots are used across many domains, from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels the perception system needs to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...
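
    The paper's own domain-specific language is not reproduced in the record; as a loose illustration of declarative image-quality safety rules, one might encode checks as data and evaluate them before trusting a frame (the rule names and thresholds are invented for this sketch):

```python
import cv2

# Each rule: (name, predicate over a grayscale frame). Thresholds are invented.
RULES = [
    ("min_brightness", lambda g: g.mean() > 40),
    ("max_brightness", lambda g: g.mean() < 220),
    ("min_contrast",   lambda g: g.std() > 15),
    ("min_sharpness",  lambda g: cv2.Laplacian(g, cv2.CV_64F).var() > 100),
]

def frame_is_safe(frame_bgr):
    """Return (ok, violations): the frame passes only if every rule holds."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    violations = [name for name, pred in RULES if not pred(gray)]
    return len(violations) == 0, violations
```

    Keeping the rules as data rather than control flow is what makes such checks amenable to certification arguments: the rule set can be inspected and validated separately from the evaluation engine.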

  10. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01

    The available record is extraction residue from the report's list of figures and bibliography; the recoverable fragments indicate that the follower vehicle tracks a spherical target as a colour blob, recognizes keypoints using the SIFT algorithm, and cites, among others, work by Zhou and Clark (2006) on autonomous fish tracking by ROV using monocular vision.

  11. Mergeable nervous systems for robots.

    Science.gov (United States)

    Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco

    2017-09-12

    Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.

  12. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

    Directory of Open Access Journals (Sweden)

    L. Payá

    2017-01-01

    Full Text Available Nowadays, the field of mobile robotics is experiencing rapid evolution, and a variety of autonomous vehicles is available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localization and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.

  13. Active vision via extremum seeking for robots in unstructured environments : Applications in object recognition and manipulation

    NARCIS (Netherlands)

    Calli, B.; Caarls, W.; Wisse, M.; Jonker, P.P.

    2018-01-01

    In this paper, a novel active vision strategy is proposed for optimizing the viewpoint of a robot's vision sensor for a given success criterion. The strategy is based on extremum seeking control (ESC), which introduces two main advantages: 1) Our approach is model free: It does not require an

  14. Application of robotics to distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsbottom, W

    1986-06-01

    Robotic technology has been recognized as having potential application in live-line maintenance and repair. A study was conducted to investigate the feasibility of utilizing robotics for this purpose, and to prepare a general design of appropriate equipment. Four live-line tasks were selected as representative of the majority of the work. Based on a detailed task decomposition, subtasks were rated on their amenability to robot completion. All tasks are feasible, but in some cases special tooling is required. Based on today's robotics, it is concluded that a force-reflecting master/slave telemanipulator, augmented by automatic robot tasks under a supervisory control system, provides the optimal approach. No commercially available products are currently adequate for live-line work. A general design of the telemanipulator, which has been named the SKYARM, has been developed, addressing all subsystems such as the manipulator, video, control, power and insulation. The baseline system is attainable using today's technology. Improved performance and lower cost will be achieved through developments in artificial intelligence, machine vision, supervisory control and dielectrics. Immediate benefits to utilities include increased safety, better service and savings on a subset of maintenance tasks. In 3-5 years, the SKYARM will prove cost effective as a general-purpose live-line tool. 7 refs., 26 figs., 3 tabs.

  15. Assistive and Rehabilitation Robotic System

    Directory of Open Access Journals (Sweden)

    Adrian Abrudean

    2015-06-01

    Full Text Available A short introduction concerning the content of Assistive Technology and Rehabilitation Engineering is followed by a study of robotic systems which combine two or more assistive functions. Based on biomechanical aspects, a complex robotic system is presented, starting with the study of functionality and ending with the practical aspects of the prototype development.

  16. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  17. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This coherent laser vision system (CLVS) will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame size at one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene through waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  18. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Science.gov (United States)

    Nuzzi, Raffaele

    2018-01-01

    Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. PMID:29440943

  19. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  20. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    The available record consists of garbled scanning residue from the report's cover page (a DTIC report documentation form); the recoverable information identifies it as MIT Artificial Intelligence Laboratory A.I. Memo No. 1037, "Dynamical Systems and Motion Vision" by Joachim Heel, April 1988, with support acknowledged for the Laboratory's Artificial Intelligence Research.

  1. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, generates a sequence of regions containing candidate matching points from the neighborhood of the epipolar line, and determines the optimal matching image by computing the correlation between the template image from the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm, which satisfies the requirements of dismounting and assembling the drop switch.
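
    The paper's correlation measure is not reproduced in the record; a hedged sketch of matching a left-view template inside a band around the epipolar line, using normalized cross-correlation on a rectified grayscale pair (patch and band sizes are invented), might look like:

```python
import cv2

def match_along_epipolar(left_gray, right_gray, pt_left, patch=15, band=4):
    """Correlate a template around a point in the left view against a
    horizontal band around the epipolar line in the right view (rectified
    images assumed, so the epipolar line is the same image row)."""
    x, y, h = pt_left[0], pt_left[1], patch // 2
    template = left_gray[y - h:y + h + 1, x - h:x + h + 1]
    strip = right_gray[y - h - band:y + h + band + 1, :]
    scores = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    # Convert the match's top-left corner back to a point center in the image.
    return (loc[0] + h, y - band + loc[1]), best   # matched point, NCC score
```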

  2. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Humans and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need for More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  3. Motion based segmentation for robot vision using adapted EM algorithm

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico

    2016-01-01

    Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects

  4. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast, both in industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot, and entirely new markets and use cases are becoming relevant for ...

  5. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed for writing programs that combine visual processing and manipulator control in a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions perform object detection, measurement and view control. A hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment.
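
    RVL/V's instruction set is not reproduced in the record; as a generic illustration of detection on one-dimensional range profiles of the kind the abstract describes, object boundaries can be located as depth discontinuities (the threshold and minimum span are invented):

```python
import numpy as np

def detect_objects_in_profile(range_profile, jump_mm=20.0, min_span=3):
    """Find candidate object intervals in a 1-D range scan as spans between
    large depth discontinuities (a crude stand-in for model-based detection)."""
    diffs = np.diff(np.asarray(range_profile, dtype=np.float64))
    edges = np.where(np.abs(diffs) > jump_mm)[0] + 1     # indices of depth jumps
    bounds = np.concatenate(([0], edges, [len(range_profile)]))
    # Each (start, end) span is a candidate object surface segment.
    return [(int(s), int(e)) for s, e in zip(bounds[:-1], bounds[1:])
            if e - s > min_span]
```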

  6. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  7. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available , R. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition. Hutchinson, S., Hager, G., and Corke, P. (1996). A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feed- back loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  8. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    Science.gov (United States)

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart-platform-based robotic system developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding an endoscope. After first being used on an artificial head model, the system was used on six fresh postmortem bodies provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy, and the registration procedure and robot setup took 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot is able to change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.

  9. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  10. Mobility Systems For Robotic Vehicles

    Science.gov (United States)

    Chun, Wendell

    1987-02-01

    The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment and second, for the task. The environment can be correlated with known missions. This can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention. For example, Jason Jr. made history by surveying the sunken luxury liner, the Titanic. The next big surge of robotic vehicles will be in space. This will evolve as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as standalone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart with very few differences. Their commonality includes missions and degrees-of-freedom. The issues of stability and communication are inherent in both systems and environment.

  11. Visual Detection and Tracking System for a Spherical Amphibious Robot.

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-04-15

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.
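
    The record names the specific building blocks (Gaussian mixture background modeling, a compressive tracker, Kalman prediction); a minimal hedged sketch of the detection half using OpenCV's stock implementations might look like this (the parameters are illustrative, and the compressive tracker itself is not reproduced):

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
kalman = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
kalman.transitionMatrix = np.array(
    [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kalman.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kalman.errorCovPost = np.eye(4, dtype=np.float32)

def detect_and_predict(frame):
    """Detect the largest moving blob and feed its centroid to the Kalman
    filter; return the predicted position for the current frame."""
    mask = subtractor.apply(frame)
    prediction = kalman.predict()[:2].ravel()
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        kalman.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    return prediction
```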

  12. Visual Detection and Tracking System for a Spherical Amphibious Robot

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-01-01

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134

  13. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations; in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
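
    The paper's own design rules are only summarized above; as a hedged sketch of the underlying colorimetric principle, color differences for quality control are normally computed in a perceptually uniform space such as CIELAB rather than in raw RGB (the tolerance value below is illustrative):

```python
import cv2
import numpy as np

def delta_e76(bgr_sample, bgr_reference):
    """Mean CIE76 color difference between two patches. Converting float
    images in [0, 1] keeps OpenCV's Lab output in true CIELAB units."""
    def mean_lab(bgr):
        lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
        return lab.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_lab(bgr_sample) - mean_lab(bgr_reference)))

def within_tolerance(sample, reference, tol=3.0):
    """Pass/fail check: delta E below an application-specific tolerance."""
    return delta_e76(sample, reference) < tol
```

    A production system would additionally calibrate the camera against reference color targets under controlled illumination, since the RGB-to-Lab conversion assumes a known color space.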

  14. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  15. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple moving obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking is tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied because the limited view angle of the stereo camera constrains the detection of multiple obstacles. The contribution of this research is a new method for a multiple moving obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of the obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving obstacle avoidance strategy was tested empirically and proved effective for a humanoid robot.
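
    The record says depth estimation gives the obstacle-robot distance; a standard hedged sketch of that step with a block-matching disparity map (the calibration values are placeholders, not the paper's):

```python
import cv2
import numpy as np

# Placeholder calibration: focal length in pixels and stereo baseline in meters.
FOCAL_PX = 700.0
BASELINE_M = 0.12

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def obstacle_distance(left_gray, right_gray, roi):
    """Median depth (m) inside a detected obstacle's bounding box.
    Depth follows Z = f * B / d for disparity d (rectified pair assumed)."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    x, y, w, h = roi
    patch = disp[y:y + h, x:x + w]
    valid = patch[patch > 0]
    if valid.size == 0:
        return None
    return FOCAL_PX * BASELINE_M / float(np.median(valid))
```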

  16. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down

  17. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  18. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    Full Text Available The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real-time through stereovision and compare it with the actual deformation computed through forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of a membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real-time. Image data obtained through vision of the deformation of the membrane is used as the displacement input for the FEM model to compute the local interaction forces which are then displayed on the user console for providing force feedback and hence closing the loop.
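
    The record describes recovering interaction forces by feeding vision-measured nodal displacements through a linear elastic FEM; in the linear setting this reduces to a matrix product with the assembled stiffness matrix (the matrix below is a toy stand-in, not the paper's membrane model):

```python
import numpy as np

def interaction_forces(K, u):
    """Linear elasticity: nodal forces f = K @ u.

    K : (n, n) assembled global stiffness matrix
    u : (n,) nodal displacement vector measured by the vision system
    """
    return K @ u

# Toy 2-DOF example with an arbitrary symmetric positive-definite stiffness:
K = np.array([[2000.0, -500.0],
              [-500.0, 1500.0]])    # N/m, illustrative only
u = np.array([0.002, -0.001])       # m, displacements measured by stereovision
f = interaction_forces(K, u)        # N, forces rendered on the haptic device
```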

  19. CRV 2008: Fifth Canadian Conference on Computerand Robot Vision, Windsor, ON, Canada, May 2008

    DEFF Research Database (Denmark)

    Fihl, Preben

    This technical report will cover the participation in the fifth Canadian Conference on Computer and Robot Vision in May 2008. The report will give a concise description of the topics presented at the conference, focusing on the work related to the HERMES project and human motion and action...

  20. Novel robotic systems and future directions

    Directory of Open Access Journals (Sweden)

    Ki Don Chang

    2018-01-01

    Full Text Available Robot assistance is increasingly used in surgical practice. We performed a nonsystematic literature review using PubMed/MEDLINE and Google for robotic surgical systems and compiled information on their current status. We also used this information to predict the future direction of robotic systems, based on the various systems currently being developed. At present, modifications are being made to the consoles, robotic arms, cameras, handles and instruments, and other specific functions (haptic feedback and eye tracking) that make up a robotic surgery system, and research on automated surgery is being actively carried out. The development of future robots will be directed toward decreasing the number of incisions and improving precision. With the advent of artificial intelligence, a more practical form of robotic surgery system can be introduced and will ultimately lead to the development of automated robotic surgery systems.

  1. Design, implementation and testing of master slave robotic surgical system

    International Nuclear Information System (INIS)

    Ali, S.A.

    2015-01-01

    Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that, once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the difficulty of predicting the complexity of the gestures, and human safety. Therefore, modern research on haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does by himself; these medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master-slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (Six Degree of Freedom) haptic force feedback devices. The master-slave control strategy, haptic device integration, application software design using Visual C++, and the experimental setup are considered. Finally, results are presented demonstrating the stability, accuracy and repeatability of the system. (author)

  2. Design, Implementation and Testing of Master Slave Robotic Surgical System

    Directory of Open Access Journals (Sweden)

    Syed Amjad Ali

    2015-01-01

    Full Text Available Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that, once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the difficulty of predicting the complexity of the gestures, and human safety. Therefore, modern research on haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does by himself; these medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master-slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (Six Degree of Freedom) haptic force feedback devices. The master-slave control strategy, haptic device integration, application software design using Visual C++, and the experimental setup are considered. Finally, results are presented demonstrating the stability, accuracy and repeatability of the system.

  3. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available Recognizing generic balls is significant for the final goal of RoboCup. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features and the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images, including generic balls, are acquired from various panoramic images; the modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, and according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in a window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.
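
    The exact modified Haar-like features are not given in the record; as a hedged sketch of the general pipeline (integral-image rectangle features feeding an AdaBoost classifier), one could write the following, where the feature layout and window size are invented:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral_image(gray):
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in a rectangle in O(1) using the integral image."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y + h - 1, x + w - 1]
    return d - b - c + a

def haar_features(window):
    """Two edge-type Haar features on a 24x24 patch (layout is invented)."""
    ii = integral_image(window.astype(np.float64))
    left, right = rect_sum(ii, 0, 0, 12, 24), rect_sum(ii, 12, 0, 12, 24)
    top, bottom = rect_sum(ii, 0, 0, 24, 12), rect_sum(ii, 0, 12, 24, 12)
    return [left - right, top - bottom]

# X: features from positive (ball) and negative sub-images; y: labels.
clf = AdaBoostClassifier(n_estimators=100)
# clf.fit(X_train, y_train); clf.predict([haar_features(search_window)])
```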

  4. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  5. Vision guided robot bin picking of cylindrical objects

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg; Dyhr-Nielsen, Carsten

    1997-01-01

    In order to achieve increased flexibility on robotic production lines, an investigation of the robot bin-picking problem is presented. In the paper, the limitations of previous attempts to solve the problem are pointed out and a set of innovative methods is presented. The main elements...

  6. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine (SVM) method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
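
    OpenCV happens to ship a pretrained HOG-plus-linear-SVM pedestrian detector, which makes a compact hedged illustration of the HOG/SVM combination the record describes (the stride, padding and score threshold are illustrative, not values from the paper):

```python
import cv2

# HOG descriptor with a pretrained linear SVM for pedestrian detection.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr, min_score=0.5):
    """Return bounding boxes of detected people above a confidence threshold."""
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [box for box, w in zip(boxes, weights) if w > min_score]
```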

  7. Design and Development of Vision Based Blockage Clearance Robot for Sewer Pipes

    Directory of Open Access Journals (Sweden)

    Krishna Prasad Nesaian

    2012-03-01

    Full Text Available Robotic technology is an advanced technology capable of completing tasks in situations where humans are unable to reach, see or survive. Underground sewer pipelines are the major means of transporting effluent water, and blockages in sewer pipes lead to overflow of effluent water and sanitation problems. Therefore, a robotic vehicle is developed that is capable of traveling underneath effluent water, detecting blockages using ultrasonic sensors, and clearing them by means of a drilling mechanism. In addition, a wireless camera is fitted, which acts as the robot's vision, allowing video to be monitored and images to be captured using the MATLAB tool. Thus, in this project, a prototype model of an underground sewer pipe blockage clearance robot of the drilling type is developed.

  8. Multibody system dynamics, robotics and control

    CERN Document Server

    Gerstmayr, Johannes

    2013-01-01

    The volume contains 19 contributions by international experts in the field of multibody system dynamics, robotics and control. The book aims to bridge the gap between the modeling of mechanical systems by means of multibody dynamics formulations and robotics. In the classical approach, a multibody dynamics model contains a very high level of detail, however, the application of such models to robotics or control is usually limited. The papers aim to connect the different scientific communities in multibody dynamics, robotics and control. Main topics are flexible multibody systems, humanoid robots, elastic robots, nonlinear control, optimal path planning, and identification.

  9. Medical Robots: Current Systems and Research Directions

    Directory of Open Access Journals (Sweden)

    Ryan A. Beasley

    2012-01-01

    Full Text Available First used medically in 1985, robots now make an impact in laparoscopy, neurosurgery, orthopedic surgery, emergency response, and various other medical disciplines. This paper provides a review of medical robot history and surveys the capabilities of current medical robot systems, primarily focusing on commercially available systems while covering a few prominent research projects. By examining robotic systems across time and disciplines, trends are discernible that imply future capabilities of medical robots, for example, increased usage of intraoperative images, improved robot arm design, and haptic feedback to guide the surgeon.

  10. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised manner. The images in the data sets were collected in a wide variety of weather and lighting conditions, and the data sets were augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines, while the goal of the obstacle avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
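
    The record gives the layer count but not the architecture; a hedged PyTorch sketch of an end-to-end network mapping camera frames to discrete steering directions (the depths, widths and three-way output are assumptions, not the authors' design) could look like:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end CNN: image in, direction logits out (left/straight/right)."""
    def __init__(self, n_directions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_directions)

    def forward(self, x):          # x: (batch, 3, H, W) normalized frames
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SteeringNet()
logits = model(torch.randn(1, 3, 120, 160))   # direction = logits.argmax(1)
```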

  11. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

    Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes; includes numerous figures, tables and flow charts; discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots.

  12. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  13. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
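
    A full Rao-Blackwellized particle filter is beyond the scope of a record summary; the hedged fragment below shows only its characteristic structure, with sampled robot poses and an analytic per-particle EKF over each landmark (all models are placeholders: the motion model is additive noise, the measurement is a relative (x, y) offset with rotation ignored, and the noise values are invented):

```python
import numpy as np

R = np.eye(2) * 0.1   # measurement noise covariance (placeholder)

class Particle:
    def __init__(self, pose):
        self.pose = np.asarray(pose, dtype=np.float64)  # (x, y, theta) hypothesis
        self.landmarks = {}   # id -> [mean (2,), cov (2,2)]: one EKF per landmark
        self.weight = 1.0

def update_landmark(p, lid, z_rel):
    """EKF for one landmark observed as a relative (x, y) offset; with this
    linear measurement model the Jacobian is the identity matrix."""
    if lid not in p.landmarks:
        p.landmarks[lid] = [p.pose[:2] + z_rel, np.eye(2)]
        return 1.0
    mean, cov = p.landmarks[lid]
    innov = (p.pose[:2] + z_rel) - mean
    S = cov + R
    K = cov @ np.linalg.inv(S)
    p.landmarks[lid] = [mean + K @ innov, (np.eye(2) - K) @ cov]
    # Measurement likelihood is what reweights the particle.
    return float(np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)
                 / (2 * np.pi * np.sqrt(np.linalg.det(S))))

def rbpf_step(particles, odom, observations, motion_noise=0.05):
    for p in particles:
        p.pose += odom + np.random.randn(3) * motion_noise   # sample motion
        for lid, z_rel in observations.items():
            p.weight *= update_landmark(p, lid, z_rel)       # EKF + reweight
    w = np.array([p.weight for p in particles]); w /= w.sum()
    idx = np.random.choice(len(particles), len(particles), p=w)
    return [particles[i] for i in idx]                       # resampling
```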

  14. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot, equipped with a particular sensor, moves through the environment, obtains measurements with its sensors, and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
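
    Records 13 and 14 describe the same Rao-Blackwellized particle filter. The sketch below is a heavily simplified illustration of that estimator: each particle carries a robot-path hypothesis plus its own landmark map, with one small Kalman filter per landmark. The position-only pose (orientation omitted), identity observation model, and noise values are simplifying assumptions, not the authors' formulation.

        import copy
        import numpy as np

        class Particle:
            def __init__(self, pose):
                self.pose = pose          # 3-D robot position hypothesis (orientation omitted)
                self.landmarks = {}       # id -> (3-D mean, 3x3 covariance): one EKF per landmark
                self.weight = 1.0

        def predict(particles, u, motion_noise=0.05):
            # Sample a new pose for every particle from a noisy motion model.
            for p in particles:
                p.pose = p.pose + u + np.random.normal(0.0, motion_noise, 3)

        def update(particles, lm_id, z, R):
            # Per-particle EKF update of one landmark from a relative measurement z.
            for p in particles:
                if lm_id not in p.landmarks:
                    p.landmarks[lm_id] = (p.pose + z, R.copy())   # initialize on first sighting
                    continue
                mu, S = p.landmarks[lm_id]
                innov = z - (mu - p.pose)                         # measurement residual
                K = S @ np.linalg.inv(S + R)                      # Kalman gain (identity model)
                p.landmarks[lm_id] = (mu + K @ innov, (np.eye(3) - K) @ S)
                p.weight *= np.exp(-0.5 * innov @ np.linalg.solve(S + R, innov))

        def resample(particles):
            # Draw a new particle set in proportion to the weights.
            w = np.array([p.weight for p in particles])
            w /= w.sum()
            idx = np.random.choice(len(particles), size=len(particles), p=w)
            return [copy.deepcopy(particles[i]) for i in idx]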

  15. An FPGA-Based Omnidirectional Vision Sensor for Motion Detection on Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jones Y. Mori

    2012-01-01

    Full Text Available This work presents the development of an integrated hardware/software sensor system for moving-object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment to the camera from all directions, yielding a panoramic view. The sensor is used as an omnidirectional vision system, allowing for localization and navigation tasks of mobile robots. Several image processing operations such as filtering, segmentation and morphology have been included in the processing architecture. For distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device, using a hardware/software co-design approach, which comprises a Nios II embedded microprocessor and specific image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system can achieve a throughput of 26.6 processed frames per second, and the performance analysis pointed out that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.
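
    A software-only sketch of the core pipeline this record describes, background subtraction followed by a center-of-mass computation; the threshold and grayscale format are illustrative, and the FPGA implementation of the original is of course not reproduced here.

        import numpy as np

        def detect_moving_object(frame, background, thresh=30):
            # frame, background: 2-D uint8 grayscale images of equal size.
            # Returns the (row, col) centroid of the foreground, or None.
            diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
            mask = diff > thresh                      # segmentation by subtraction
            if not mask.any():
                return None
            rows, cols = np.nonzero(mask)
            return rows.mean(), cols.mean()           # center of mass of detected pixels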

  16. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments: high variability in illumination, irregular terrain conditions, and different plant growth states, among others. In this regard, three main topics are addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could also be applied in other contexts. A case study is provided from research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project, funded by the European Union, on effective weed control in maize fields (wide-row crops), where the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results on crop row detection, weed patch identification, autonomous vehicle guidance, and obstacle detection are provided, together with a review of methods and approaches on these topics.

  17. Robotics/Automated Systems Technicians.

    Science.gov (United States)

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  18. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

    This paper studies the procedure for programming a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations could be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  19. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  20. Teleoperated robotic sorting system

    Science.gov (United States)

    Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R.

    2000-01-01

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  1. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different … that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision…

  2. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  3. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  4. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs, with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, it uses an aerodynamic model to predict the trajectories of the ball in the air and a novel non-linear rebound model to predict the change in the ball's motion during the rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm, a value small enough for standard-sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enable two humanoid robots to play ping-pong continuously for more than 200 rounds.
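
    A rough sketch of the kind of forward prediction the record describes: the ball is integrated under gravity and quadratic air drag, and a rebound is applied at the table plane. A simple restitution coefficient stands in for the paper's novel non-linear rebound model, and all constants (drag, restitution, time step) are illustrative assumptions.

        import numpy as np

        def predict_trajectory(p, v, dt=1 / 120, kd=0.1, e=0.88, steps=240):
            # p, v: 3-vectors, position [m] and velocity [m/s]; kd: drag constant;
            # e: restitution coefficient of the table (all values illustrative).
            g = np.array([0.0, 0.0, -9.81])
            traj = []
            for _ in range(steps):
                a = g - kd * np.linalg.norm(v) * v    # aerodynamic (drag) acceleration
                v = v + a * dt
                p = p + v * dt
                if p[2] < 0.0:                        # table plane at z = 0
                    p[2] = 0.0
                    v[2] = -e * v[2]                  # rebound: invert and damp vertical speed
                traj.append(p.copy())
            return np.array(traj)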

  5. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

    As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for positions determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) high-level system for management of lower level functions to achieve specific functional goals; (2) intermediate level of goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) other supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication

  6. A Miniature Robot for Retraction Tasks under Vision Assistance in Minimally Invasive Surgery

    Directory of Open Access Journals (Sweden)

    Giuseppe Tortora

    2014-03-01

    Full Text Available Minimally Invasive Surgery (MIS) is one of the main aims of modern medicine. It enables surgery to be performed with fewer and less severe incisions. Medical robots have been developed worldwide to offer a robotic alternative to traditional medical procedures. New approaches aimed at a substantial decrease of visible scars have been explored, such as Natural Orifice Transluminal Endoscopic Surgery (NOTES). Simple surgical tasks, such as the retraction of an organ, can be a challenge when performed from narrow access ports. For this reason, there is a continuous need to develop new robotic tools for performing dedicated tasks. This article illustrates the design and testing of a new robotic tool for retraction tasks under vision assistance for NOTES. The retraction robot integrates brushless motors to provide degrees of freedom additional to those provided by magnetic anchoring, thus improving the dexterity of the overall platform. The retraction robot can be easily controlled to reach the target organ and apply a retraction force of up to 1.53 N. The additional degrees of freedom can be used for smooth manipulation and grasping of the organ.

  7. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

    Full Text Available Vision-based Pose Estimation (VPE) represents a non-invasive solution for smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D positions of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.

  8. Human Robotic Systems (HRS): Robotic ISRU Acquisition Element

    Data.gov (United States)

    National Aeronautics and Space Administration — During 2014, the Robotic ISRU Resource Acquisition project element will develop two technologies: Exploration Ground Data Systems (xGDS) and Sample Acquisition on...

  9. Using High-Level RTOS Models for HW/SW Embedded Architecture Exploration: Case Study on Mobile Robotic Vision

    Directory of Open Access Journals (Sweden)

    Verdier François

    2008-01-01

    Full Text Available We are interested in the design of a system-on-chip implementing the vision system of a mobile robot. Following a biologically inspired approach, this vision architecture belongs to a larger sensorimotor loop. This regulation loop both creates and exploits dynamic properties to achieve a wide variety of target tracking and navigation objectives. Such a system is representative of the numerous flexible and dynamic applications increasingly encountered in embedded systems. In order to deal with all of the dynamic aspects of these applications, it appears necessary to embed a dedicated real-time operating system on the chip. The presence of this on-chip custom executive layer constitutes a major scientific obstacle in traditional hardware and software design flows. Classical exploration and simulation tools are particularly inappropriate in this case. We detail in this paper the specific mechanisms necessary to build a high-level model of an embedded custom operating system able to manage such a real-time but flexible application. We also describe our executable RTOS model, written in SystemC, allowing an early simulation of our application on top of its specific scheduling layer. Based on this model, a methodology is discussed and results are given on the exploration and validation of a distributed platform adapted to this vision system.

  10. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor, mounted on the robot, measure the point of intersection of the string line and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and thus suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 to show its effectiveness.
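
    A small sketch of the collinearity constraint this record exploits: all measured intersection points should lie on one straight line, so the perpendicular scatter about a best-fit line is the calibration residual. The SVD line fit below is a standard construction, not the paper's exact estimator; the actual method drives this residual down by adjusting the kinematic parameters.

        import numpy as np

        def collinearity_error(points):
            # RMS distance of measured 3-D points from their best-fit line.
            P = np.asarray(points, dtype=float)
            c = P.mean(axis=0)
            _, _, vt = np.linalg.svd(P - c)           # principal direction of the point cloud
            d = vt[0]
            proj = (P - c) @ d                        # coordinates along the fitted line
            residuals = P - (c + np.outer(proj, d))   # perpendicular offsets from the line
            return np.sqrt((residuals ** 2).sum(axis=1).mean())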

  11. Functional Modeling for Monitoring of Robotic System

    DEFF Research Database (Denmark)

    Wu, Haiyan; Bateman, Rikke R.; Zhang, Xinxin

    2018-01-01

    With the expansion of robotic applications in the industrial domain, it is important that the robots can execute their tasks in a safe and reliable way. A monitoring system can be implemented to ensure the detection of abnormal situations of the robots and report the abnormality to their human su...

  12. In Pipe Robot with Hybrid Locomotion System

    Directory of Open Access Journals (Sweden)

    Cristian Miclauş

    2015-06-01

    Full Text Available The first part of the paper covers aspects concerning in-pipe robots and their components, such as hybrid locomotion systems and the adapting mechanisms used. The second part describes the inspection robot that was developed, which combines tracked and wheeled locomotion (hybrid locomotion). The end of the paper presents the advantages and disadvantages of the proposed robot.

  13. High precision detector robot arm system

    Science.gov (United States)

    Shu, Deming; Chu, Yong

    2017-01-31

    A method and high precision robot arm system are provided, for example, for X-ray nanodiffraction with an X-ray nanoprobe. The robot arm system includes duo-vertical-stages and a kinematic linkage system. A two-dimensional (2D) vertical plane ultra-precision robot arm supporting an X-ray detector provides positioning and manipulating of the X-ray detector. A vertical support for the 2D vertical plane robot arm includes spaced apart rails respectively engaging a first bearing structure and a second bearing structure carried by the 2D vertical plane robot arm.

  14. Modular Track System For Positioning Mobile Robots

    Science.gov (United States)

    Miller, Jeff

    1995-01-01

    Conceptual system for positioning mobile robotic manipulators on large main structure includes modular tracks and ancillary structures assembled easily along with main structure. System, called "tracked robotic location system" (TROLS), originally intended for application to platforms in outer space, but TROLS concept might also prove useful on Earth; for example, to position robots in factories and warehouses. T-cross-section rail keeps mobile robot on track. Bar codes mark locations along track. Each robot equipped with bar-code-recognizing circuitry so it quickly finds way to assigned location.

  15. Robotic radiation survey and analysis system for radiation waste casks

    International Nuclear Information System (INIS)

    Thunborg, S.

    1987-01-01

    Sandia National Laboratories (SNL) and the Hanford Engineering Development Laboratories have been involved in the development of remote systems technology concepts for handling defense high-level waste (DHLW) shipping casks at the waste repository. This effort demonstrated the feasibility of using this technology for handling DHLW casks. These investigations have also shown that cask design can have a major effect on the feasibility of remote cask handling. Consequently, SNL has initiated a program to determine the cask features necessary for robotic remote handling at the waste repository. The initial cask-handling task selected for detailed investigation was the robotic radiation survey and analysis (RRSAS) task. In addition to determining the design features required for robotic cask handling, the RRSAS project contributes to the definition of techniques for random selection of swipe locations, the definition of robotic swipe parameters, force control techniques for robotic swipes, machine vision techniques for locating objects in 3-D, repository robotic system requirements, and repository data management system needs

  16. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordin...

  17. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
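
    As a toy illustration of the pixelation step described above, reducing a megapixel camera feed to the electrode-array dimension while preserving contrast transitions, the sketch below boosts edges before downsampling. The grid size, weighting, and use of OpenCV are illustrative assumptions, not details of AVS(2) itself.

        import cv2

        def pixelate_for_implant(frame, grid=(10, 6), edge_weight=0.5):
            # grid is the (cols, rows) electrode count -- an illustrative value.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)                  # emphasize contrast transitions
            boosted = cv2.addWeighted(gray, 1.0, edges, edge_weight, 0)
            return cv2.resize(boosted, grid, interpolation=cv2.INTER_AREA)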

  18. Supervisory control for a complex robotic system

    International Nuclear Information System (INIS)

    Miller, D.J.

    1988-01-01

    The Robotic Radiation Survey and Analysis System investigates the use of advanced robotic technology for performing remote radiation surveys on nuclear waste shipping casks. Robotic systems have the potential for reducing personnel exposure to radiation and providing fast reliable throughput at future repository sites. A primary technology issue is the integrated control of distributed specialized hardware through a modular supervisory software system. Automated programming of robot trajectories based upon mathematical models of the cask and robot coupled with sensory feedback enables flexible operation of a commercial gantry robot with the reliability needed to perform autonomous operations in a hazardous environment. Complexity is managed using structured software engineering techniques resulting in the generation of reusable command primitives which contribute to a software parts catalog for a generalized robot programming language

  19. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can get degraded due to boredom, distraction and fatigue in vision-related tasks such as measurement, counting etc. Vision based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)

  20. Cloud-Enhanced Robotic System for Smart City Crowd Control

    Directory of Open Access Journals (Sweden)

    Akhlaqur Rahman

    2016-12-01

    Full Text Available Cloud robotics in smart cities is an emerging paradigm that enables autonomous robotic agents to communicate and collaborate with a cloud computing infrastructure. It complements the Internet of Things (IoT) by creating an expanded network where robots offload data-intensive computation to the ubiquitous cloud to ensure quality of service (QoS). However, offloading for robots is significantly complex due to their unique characteristics of mobility, skill-learning, data collection, and decision-making capabilities. In this paper, a generic cloud robotics framework is proposed to realize the smart city vision while taking its various complexities into consideration. Specifically, we present an integrated framework for a crowd control system where cloud-enhanced robots are deployed to perform the necessary tasks. The task offloading is formulated as a constrained optimization problem capable of handling any task flow that can be characterized by a Directed Acyclic Graph (DAG). We consider two scenarios, minimizing energy and time respectively, and develop a genetic algorithm (GA)-based approach to identify the optimal task offloading decisions. The performance comparison with two benchmarks shows that our GA scheme achieves the desired energy and time performance. We also show the adaptability of our algorithm by varying the values for bandwidth and movement; the results suggest their impact on offloading. Finally, we present a multi-task-flow optimal path sequence problem that highlights how the robot can plan its task completion via movements that expend the minimum energy, integrating path planning with offloading for robotics. To the best of our knowledge, this is the first attempt to evaluate cloud-based task offloading for a smart city crowd control system.
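
    A toy GA in the spirit of the record's offloading search: each gene decides whether a task runs locally (0) or in the cloud (1), and selection minimizes total energy. The cost inputs are illustrative, and the DAG precedence constraints of the original formulation are omitted for brevity (at least two tasks are assumed).

        import random

        def ga_offload(costs_local, costs_cloud, pop=30, gens=100, pm=0.1):
            n = len(costs_local)
            energy = lambda s: sum(costs_cloud[i] if b else costs_local[i]
                                   for i, b in enumerate(s))
            popn = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
            for _ in range(gens):
                popn.sort(key=energy)                         # fitness = total energy
                elite = popn[:pop // 2]
                children = []
                while len(children) < pop - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, n)              # one-point crossover
                    child = a[:cut] + b[cut:]
                    children.append([1 - g if random.random() < pm else g
                                     for g in child])         # bit-flip mutation
                popn = elite + children
            return min(popn, key=energy)                      # best offloading decision vector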

  1. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    Full Text Available The article deals with the design of embedded vision equipment for industrial robots for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of robots or manipulators; it provides an image snapshot of the part surface before grasping, searches for errors during manipulation, and separates products with errors from the next manufacturing operation. The new approach is a methodology based on machine learning for the automated identification, localization, and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data-mining approaches: clustering for the accumulation of similar errors and classification methods for the prediction of any new error to a proposed class. The presented methodology consists of three separate processing levels: image acquisition for fail parameterization, data clustering for categorizing errors into separate classes, and new-pattern prediction with the proposed class model. We chose main representatives of clustering algorithms, for example, K-means from vector quantization, the fast library for approximate nearest neighbors (FLANN) from hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) from algorithms based on the density of the data. For machine learning, we selected six major classification algorithms: support vector machines, the normal Bayesian classifier, K-nearest neighbors, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop-based computer system and an embedded system based on System on Chip (SoC) with vision equipment.
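
    A minimal sketch of the two-stage pipeline the record describes, clustering errors into classes and then training a classifier to assign new error patterns to a proposed class. K-means and k-NN stand in for the several algorithms the article compares, and feature extraction from the images is assumed already done.

        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def build_error_classifier(features, n_classes=5):
            # Stage 1: group similar error feature vectors into classes.
            km = KMeans(n_clusters=n_classes, n_init=10).fit(features)
            # Stage 2: train a classifier on the cluster labels, so any new
            # error pattern can be predicted into a proposed class.
            clf = KNeighborsClassifier(n_neighbors=3).fit(features, km.labels_)
            return clf   # clf.predict(new_features) -> proposed error class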

  2. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of the wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with median filtering and wavelet soft-threshold de-noising. The new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method, and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carried out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation, and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying the foundation for apple harvesting robots working at night.
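
    A baseline sketch of the classical wavelet soft-threshold de-noising the paper improves upon, using PyWavelets. The article replaces the fixed universal threshold computed below with a fuzzy threshold function; the wavelet choice and decomposition level are illustrative.

        import numpy as np
        import pywt

        def wavelet_denoise(img, wavelet='db4', level=2):
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            # Noise estimate from the finest diagonal subband, then the
            # universal threshold t = sigma * sqrt(2 * ln N).
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(img.size))
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(c, t, mode='soft') for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)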

  3. Health system vision of iran in 2025.

    Science.gov (United States)

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory text was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  4. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems, as used in robotics applications. Control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and a fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.

  5. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  6. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded-Up Robust Features (SURF) is used to position a robot with respect to its environment and to aid vision-based robotic navigation. During the course of navigation, irregularities in the terrain, especially in an outdoor environment, may deviate the robot from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations that bring the robot back to its track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, and subsequent restoration by corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
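
    A rough sketch of the matching step: features from a stored reference view of the track are matched against the current camera frame, and the mean horizontal shift of matched keypoints signals a deviation to correct. ORB is used here as a freely available stand-in for SURF (which lives in OpenCV's contrib module); the offset heuristic is illustrative, not the paper's exact correction logic.

        import cv2

        def heading_offset(reference, current):
            orb = cv2.ORB_create()
            k1, d1 = orb.detectAndCompute(reference, None)
            k2, d2 = orb.detectAndCompute(current, None)
            if d1 is None or d2 is None:
                return None
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)
            if not matches:
                return None
            # Mean horizontal displacement of matched keypoints (pixels);
            # a non-zero value would trigger the corrective turn.
            return sum(k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0]
                       for m in matches) / len(matches)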

  7. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live, in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.

  8. Task oriented evaluation system for maintenance robots

    International Nuclear Information System (INIS)

    Asame, Hajime; Endo, Isao; Kotosaka, Shin-ya; Takata, Shozo; Hiraoka, Hiroyuki; Kohda, Takehisa; Matsumoto, Akihiro; Yamagishi, Kiichiro.

    1994-01-01

    The adaptability evaluation of maintenance robots in autonomous plants is discussed. In this paper, a new concept of an autonomous plant with maintenance robots is introduced, and a framework for an autonomous maintenance system is proposed. Then, task-oriented evaluation of robot arms is discussed for evaluating their adaptability to maintenance tasks, and a new criterion called operability is proposed for adaptability evaluation. The task-oriented evaluation system is implemented and applied to the structural design of robot arms. Using a genetic algorithm, an optimal structure adaptable to a pump disassembly task is obtained. (author)

  9. Robotic system for process sampling

    International Nuclear Information System (INIS)

    Dyches, G.M.

    1985-01-01

    A three-axis cartesian geometry robot for process sampling was developed at the Savannah River Laboratory (SRL) and implemented in one of the site radioisotope separations facilities. Use of the robot reduces personnel radiation exposure and contamination potential by routinely handling sample containers under operator control in a low-level radiation area. This robot represents the initial phase of a longer term development program to use robotics for further sample automation. Preliminary design of a second generation robot with additional capabilities is also described. 8 figs

  10. Coordinated robotic system for civil structural health monitoring

    Directory of Open Access Journals (Sweden)

    Qidwai Uvais

    2017-01-01

    Full Text Available With the recent advances in sensors, robotics, unmanned aerial vehicles, communication, and information technologies, it is now feasible to move towards the vision of ubiquitous cities, where virtually everything throughout the city is linked to an information system through technologies such as wireless networking and radio-frequency identification (RFID) tags, to provide systematic and more efficient management of urban systems, including civil and mechanical infrastructure monitoring, and to achieve the goal of resilient and sustainable societies. In the proposed system, an unmanned aerial vehicle (UAV) is used to ascertain the coarse defect signature using panoramic imaging. This involves image stitching and registration so that a complete view of the surface is seen with reference to a common reference or origin point. Thereafter, crack verification and localization are performed using the magnetic flux leakage (MFL) approach with the help of a coordinated robotic system, in which the first robot is placed at the top of the structure while the second robot is equipped with the designed MFL sensory system. With the initial findings, the proposed system identifies and localizes the crack in the given structure.

  11. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  12. High-resolution hyperspectral ground mapping for robotic vision

    Science.gov (United States)

    Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich

    2018-04-01

    Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.

  13. Robotic system for glovebox size reduction

    International Nuclear Information System (INIS)

    KWOK, KWAN S.; MCDONALD, MICHAEL J.

    2000-01-01

    The Intelligent Systems and Robotics Center (ISRC) at Sandia National Laboratories (SNL) is developing technologies for glovebox size reduction in the DOE nuclear complex. A study was performed for Kaiser-Hill (KH) at the Rocky Flats Environmental Technology Site (RFETS) on the technologies available for size-reducing, in place, the glovebox lines that require it. Currently, the baseline approach to these glovebox lines is manual operation using conventional mechanical cutting methods. The study has been completed and resulted in a concept for a robotic system for in-situ size reduction. The concept makes use of commercially available robots of the kind used in the automotive industry. Commercially available industrial robots provide the high reliability and availability required for environmental remediation in the DOE complex. Additionally, the cost of a commercial robot is about one-fourth that of a custom-made robot for environmental remediation. The reason for the lower cost and higher reliability is that thousands of commercial robots are made annually, whereas only a few custom robots are made for environmental remediation every year. This paper describes the engineering analysis approach used in the design of the robotic system for glovebox size reduction

  14. Robot Skills for Transformable Manufacturing Systems

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Rath

    Efficient, transformable production systems need robots that are flexible and effortlessly repurposed or reconfigured. The present dissertation argues that this can be achieved through the implementation and use of general, object-centered robot skills. In this dissertation, we focus on the design… autonomously, exactly when it is needed. It is the firm belief of this researcher that industrial robotics needs to go in the direction outlined in this dissertation, both in academia and in industry. In order for manufacturing companies to remain competitive, robotics is the definite way…

  15. [RESEARCH PROGRESS OF PERIPHERAL NERVE SURGERY ASSISTED BY Da Vinci ROBOTIC SYSTEM].

    Science.gov (United States)

    Shen, Jie; Song, Diyu; Wang, Xiaoyu; Wang, Changjiang; Zhang, Shuming

    2016-02-01

    To summarize the research progress of peripheral nerve surgery assisted by the Da Vinci robotic system, recent domestic and international articles on the topic were reviewed and summarized. Compared with conventional microsurgery, peripheral nerve surgery assisted by the Da Vinci robotic system has distinctive advantages, such as the elimination of physiological tremor and three-dimensional high-resolution vision. It is possible to perform robot-assisted limb nerve surgery using either the traditional brachial plexus approach or a mini-invasive approach. The development of the Da Vinci robotic system has opened new perspectives in peripheral nerve surgery. But it is still at an initial stage, and more basic and clinical research is needed.

  16. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines, at the individual level, how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained with five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  18. The First Korean Experience of Telemanipulative Robot-Assisted Laparoscopic Cholecystectomy Using the da Vinci System

    Science.gov (United States)

    Kang, Chang Moo; Chi, Hoon Sang; Hyeung, Woo Jin; Kim, Kyung Sik; Choi, Jin Sub; Kim, Byong Ro

    2007-01-01

    With the advancement of laparoscopic instruments and computer science, complex surgical procedures are expected to be performed safely by robot-assisted telemanipulative laparoscopic surgery. The da Vinci system (Intuitive Surgical, Mountain View, CA, USA) has become available in many surgical fields. The wrist-like movements of the instrument's tip, as well as 3-dimensional vision, can be expected to facilitate more complex laparoscopic procedures. Here, we present the first Korean experience of da Vinci robot-assisted laparoscopic cholecystectomy and discuss the introduction and perspectives of this robotic system. PMID:17594166

  19. Integrated Robotic systems for Humanitarian Demining

    Directory of Open Access Journals (Sweden)

    E. Colon

    2007-06-01

    Full Text Available This paper summarises the main results of 10 years of research and development in humanitarian demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. In order to control these robots, software interfaces, control algorithms, visual positioning and terrain following systems have also been developed. Typical data acquisition results obtained during trial campaigns with robots and data acquisition systems are reported. Lessons learned during the project and future work conclude this paper.

  20. 3D printing of soft robotic systems

    Science.gov (United States)

    Wallin, T. J.; Pikul, J.; Shepherd, R. F.

    2018-06-01

    Soft robots are capable of mimicking the complex motion of animals. Soft robotic systems are defined by their compliance, which allows for continuous and often responsive localized deformation. These features make soft robots especially interesting for integration with human tissues, for example, the implementation of biomedical devices, and for robotic performance in harsh or uncertain environments, for example, exploration in confined spaces or locomotion on uneven terrain. Advances in soft materials and additive manufacturing technologies have enabled the design of soft robots with sophisticated capabilities, such as jumping, complex 3D movements, gripping and releasing. In this Review, we examine the essential soft material properties for different elements of soft robots, highlighting the most relevant polymer systems. Advantages and limitations of different additive manufacturing processes, including 3D printing, fused deposition modelling, direct ink writing, selective laser sintering, inkjet printing and stereolithography, are discussed, and the different techniques are investigated for their application in soft robotic fabrication. Finally, we explore integrated robotic systems and give an outlook for the future of the field and remaining challenges.

  1. Building and Programming a Smart Robotic System for Distinguishing Objects Based on their Shape and Colour

    Science.gov (United States)

    Sharari, T. M.

    2015-03-01

    This paper presents a robotic system designed for holding and placing objects based on their colour and shape. The presented robot is given a complete set of instructions: positions and orientation angles for each manipulation motion. The main feature of this paper is that the developed robot uses a combination of vision and motion systems for holding and placing the work objects, mounted on a flat work plane, based on their shapes and colours. This combination improves the flexibility of manipulation, which may help eliminate the use of some expensive manipulation tasks in a variety of industrial applications. The robotic system presented in this paper is designed as an educational robot that can perform holding-and-placing operations with a limited load. To process the various instructions for holding and placing the work objects, a main control unit, the Manipulation Control Unit (MCU), is used, as well as a slave unit that performs the actual instructions from the MCU.

  2. Robotic guarded motion system and method

    Science.gov (United States)

    Bruemmer, David J.

    2010-02-23

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for repeating, on each iteration through an event timing loop, the acts of defining an event horizon, detecting a range to obstacles around the robot, and testing for an event horizon intrusion. Defining the event horizon includes determining a distance from the robot that is proportional to a current velocity of the robot and testing for the event horizon intrusion includes determining if any range to the obstacles is within the event horizon. Finally, on each iteration through the event timing loop, the method includes reducing the current velocity of the robot in proportion to a loop period of the event timing loop if the event horizon intrusion occurs.
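
    An illustrative reading of the guarded-motion loop described above: on each pass the event horizon is recomputed in proportion to the current velocity, the sensed ranges are checked against it, and the speed is reduced in proportion to the loop period on an intrusion. The robot interface (speed, ranges, set_speed) and the gain are hypothetical, not the patent's API.

        import time

        def guarded_motion_loop(robot, gain=1.5, period=0.05):
            while True:
                horizon = gain * robot.speed()            # distance proportional to velocity
                intrusion = any(r < horizon for r in robot.ranges())
                if intrusion:                             # slow in proportion to the loop period
                    robot.set_speed(robot.speed() * (1.0 - period))
                time.sleep(period)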

  3. Augmented Robotics Dialog System for Enhancing Human–Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Castro-González, Álvaro; de Gorostiza Luengo, Francisco Javier Fernandez; Salichs, Miguel Ángel

    2015-01-01

    Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. PMID:26151202

  4. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
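
    For illustration, a minimal rospy node in the spirit of the HFTR described above: it subscribes to a hypothetical person-offset topic (as might be derived from the Kinect) and publishes differential-drive velocity commands. The topic names and gains are assumptions, not the authors' code.

    ```python
    #!/usr/bin/env python
    # Hedged sketch of a human-follower control node using ROS (rospy).
    import rospy
    from geometry_msgs.msg import Twist
    from std_msgs.msg import Float32

    class Follower(object):
        def __init__(self):
            self.cmd = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
            # hypothetical topic carrying the person's lateral offset in metres
            rospy.Subscriber("/person_offset", Float32, self.on_offset)

        def on_offset(self, msg):
            twist = Twist()
            twist.linear.x = 0.3               # constant forward speed (assumed)
            twist.angular.z = -0.8 * msg.data  # steer to reduce lateral offset
            self.cmd.publish(twist)

    if __name__ == "__main__":
        rospy.init_node("human_follower")
        Follower()
        rospy.spin()
    ```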

  5. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    Naval Postgraduate School thesis by Jake A. Jones (June 2017): a new technique for robot vision in autonomous underwater vehicles uses the color shift in underwater imaging to determine the distance from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous underwater vehicles.

  6. A System for Complex Robotic Welding

    DEFF Research Database (Denmark)

    Madsen, Ole; Sørensen, Carsten Bro; Olsen, Birger

    2002-01-01

    This paper presents the architecture of a system for robotic welding of complex tasks. The system integrates off-line programming, control of redundant robots, collision-free motion planning and sensor-based control. An implementation for pipe structure welding made at Odense Steel Shipyard Ltd., Denmark, demonstrates that the system can be used for automatic welding of complex products in one-of-a-kind production.

  7. Integrated Robotic Systems for Humanitarian Demining

    OpenAIRE

    Colon, E.; Cubber, G. De; Ping, H.; Habumuremyi, J-C; Sahli, H.; Baudoin, Y.

    2007-01-01

    This paper summarises the main results of 10 years of research and development in Humanitarian Demining. The Hudem project focuses on mine detection systems and aims at providing different solutions to support mine detection operations. Robots using different kinds of locomotion systems have been designed and tested on dummy minefields. To control these robots, software interfaces, control algorithms, and visual positioning and terrain following systems have also been developed. Typica...

  8. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. The system is also useful in validating visual flight procedure designs, and it supports flight procedure design.

  9. Knowledge based systems for intelligent robotics

    Science.gov (United States)

    Rajaram, N. S.

    1982-01-01

    It is pointed out that the construction of large space platforms, such as space stations, has to be carried out in the outer space environment. As it is extremely expensive to support human workers in space for large periods, the only feasible solution appears to be related to the development and deployment of highly capable robots for most of the tasks. Robots for space applications will have to possess characteristics which are very different from those needed by robots in industry. The present investigation is concerned with the needs of space robotics and the technologies which can be of assistance to meet these needs, giving particular attention to knowledge bases. 'Intelligent' robots are required for the solution of arising problems. The collection of facts and rules needed for accomplishing such solutions form the 'knowledge base' of the system.

  10. Missileborne Artificial Vision System (MAVIS)

    Science.gov (United States)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  11. Planetary rovers robotic exploration of the solar system

    CERN Document Server

    Ellery, Alex

    2016-01-01

    The increasing adoption of terrain mobility – planetary rovers – for the investigation of planetary surfaces emphasises their central importance in space exploration. This imposes a completely new set of technologies and methodologies to the design of such spacecraft – and planetary rovers are indeed, first and foremost, spacecraft. This introduces vehicle engineering, mechatronics, robotics, artificial intelligence and associated technologies to the spacecraft engineer’s repertoire of skills. Planetary Rovers is the only book that comprehensively covers these aspects of planetary rover engineering and more. The book:
    • discusses relevant planetary environments to rover missions, stressing the Moon and Mars;
    • includes a brief survey of previous rover missions;
    • covers rover mobility, traction and control systems;
    • stresses the importance of robotic vision in rovers for both navigation and science;
    • comprehensively covers autonomous navigation, path planning and multi-rover formations on ...

  12. ROBOSIM, a simulator for robotic systems

    Science.gov (United States)

    Hinman, Elaine M.; Fernandez, Ken; Cook, George E.

    1991-01-01

    ROBOSIM, a simulator for robotic systems, was developed by NASA to aid in the rapid prototyping of automation. ROBOSIM has allowed the development of improved robotic systems concepts for both earth-based and proposed on-orbit applications while significantly reducing development costs. In a cooperative effort with an area university, ROBOSIM was further developed for use in the classroom as a safe and cost-effective way of allowing students to study robotic systems. Students have used ROBOSIM to study existing robotic systems and systems which they have designed in the classroom. Since an advanced simulator/trainer of this type is beneficial not only to NASA projects and programs but to industry and academia as well, NASA is in the process of developing this technology for wider public use. An update on the simulator's new application areas, the improvements made to the simulator's design, and current efforts to ensure the timely transfer of this technology are presented.

  13. Development of haptic system for surgical robot

    Science.gov (United States)

    Gang, Han Gyeol; Park, Jiong Min; Choi, Seung-Bok; Sohn, Jung Woo

    2017-04-01

    In this paper, a new type of haptic system for surgical robot applications is proposed and its performance is evaluated experimentally. The proposed haptic system consists of an effective master device and a precision slave robot. The master device has 3-DOF rotational motion, the same as human wrist motion. It has a lightweight structure with a gyro sensor and three small-sized MR brakes for position measurement and repulsive torque generation, respectively. The slave robot has 3-DOF rotational motion using servomotors and a five-bar linkage, and a torque sensor is used to measure resistive torque. It has been experimentally demonstrated that the proposed haptic system performs well in tracking control of desired position and repulsive torque. It can be concluded that the proposed haptic system can be effectively applied to surgical robot systems in the real field.

  14. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davidson et al., in which they dub their algorithm MonoSLAM [1-4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
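
    The Pyramidal Lucas-Kanade tracker mentioned above is available in OpenCV; the sketch below shows the generic usage pattern on a webcam feed. The window size, pyramid depth, and re-detection threshold are illustrative assumptions, not the thesis values.

    ```python
    # Minimal sketch of pyramidal Lucas-Kanade feature tracking with OpenCV.
    import cv2

    cap = cv2.VideoCapture(0)                  # live webcam feed, as in the thesis
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # pyramidal LK: track the previous features into the new frame
        new_pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
        pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
        prev_gray = gray
        if len(pts) < 20:                      # re-detect when tracks thin out
            pts = cv2.goodFeaturesToTrack(prev_gray, 100, 0.01, 10)
    ```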

  15. Robotic system construction with mechatronic components inverted pendulum: humanoid robot

    Science.gov (United States)

    Sandru, Lucian Alexandru; Crainic, Marius Florin; Savu, Diana; Moldovan, Cristian; Dolga, Valer; Preitl, Stefan

    2017-03-01

    Mechatronics is a new methodology used to achieve an optimal design of an electromechanical product. This methodology is a collection of practices, procedures and rules used by those who work in a particular branch of knowledge or discipline. Education in mechatronics at the Polytechnic University Timisoara is organized on three levels: bachelor, master and PhD studies. These activities also cover the design of mechatronic systems. In this context, the design, implementation and experimental study of a family of mechatronic demonstrators occupy an important place. In this paper, a variant of a mechatronic demonstrator based on the combination of electrical and mechanical components is proposed. The demonstrator, named humanoid robot, is equivalent to an inverted pendulum. An analysis of the components for the associated functions of the humanoid robot is presented. This type of development of mechatronic systems, combining hardware and software, offers the opportunity to build optimal solutions.

  16. Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems

    OpenAIRE

    Kootstra, Gert; Bergström, Niklas; Kragic, Danica

    2011-01-01

    Gestalt psychology studies how the human visual system organizes the complex visual input into unitary elements. In this paper we show how the Gestalt principles for perceptual grouping and for figure-ground segregation can be used in computer vision. A number of studies will be shown that demonstrate the applicability of Gestalt principles for the prediction of human visual attention and for the automatic detection and segmentation of unknown objects by a robotic system.

  17. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights:
    • The first deployment of the EAST articulated inspection arm robot under vacuum is presented.
    • A computer vision based approach to measure the laser spot displacement is proposed.
    • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement.
    Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum condition during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the laser diagnostic system calibration requirement.

  18. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first scheme concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there are several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
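
    For concreteness, here is a small sketch of the two parameterizations compared in the paper: axis-angle via Rodrigues' formula, and a stereographic chart mapping R^3 onto the unit quaternion sphere. The particular projection pole used below is an assumption; the paper's exact chart may differ.

    ```python
    # Sketch of axis-angle and stereographic quaternion parameterizations.
    import numpy as np

    def axis_angle_to_R(v):
        """Rodrigues' formula: v = theta * unit axis -> rotation matrix."""
        theta = np.linalg.norm(v)
        if theta < 1e-12:
            return np.eye(3)
        k = v / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])          # cross-product matrix of k
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    def stereographic_to_quaternion(u):
        """Map u in R^3 to a unit quaternion (w, x, y, z); the projection
        pole q = (-1, 0, 0, 0) is an assumed choice of chart."""
        s = u @ u
        return np.array([1 - s, 2 * u[0], 2 * u[1], 2 * u[2]]) / (1 + s)
    ```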

  19. Combining a Novel Computer Vision Sensor with a Cleaning Robot to Achieve Autonomous Pig House Cleaning

    DEFF Research Database (Denmark)

    Andersen, Nils Axel; Braithwaite, Ian David; Blanke, Mogens

    2005-01-01

    condition based cleaning. This paper describes how a novel sensor, developed for the purpose, and algorithms for classification and learning are combined with a commercial robot to obtain an autonomous system which meets the necessary quality attributes. These include features to make selective cleaning...

  20. Automatic Battery Swap System for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2012-12-01

    Full Text Available This paper presents the design and implementation of an automatic battery swap system for the prolonged activities of home robots. A battery swap station is proposed to implement battery off-line recharging and on-line exchanging functions. It consists of a loading and unloading mechanism, a shifting mechanism, a locking device and a shell. The home robot is a palm-sized wheeled robot with an onboard camera and a removable battery case in the front. It communicates with the battery swap station wirelessly through ZigBee. The influences of battery case deflection and robot docking deflection on the battery swap operations have been investigated. The experimental results show that it takes an average time of 84.2s to complete the battery swap operations. The home robot does not have to wait several hours for the batteries to be fully charged. The proposed battery swap system is proved to be efficient in home robot applications that need the robots to work continuously over a long period.

  1. Safety assessment of high consequence robotics system

    International Nuclear Information System (INIS)

    Robinson, D.G.; Atcitty, C.B.

    1996-01-01

    This paper outlines the use of a failure modes and effects analysis for the safety assessment of a robotic system being developed at Sandia National Laboratories. The robotic system, the weigh and leak check system, is to replace a manual process for weighing and leak checking of nuclear materials at the DOE Pantex facility. Failure modes and effects analyses were completed for the robotic process to ensure that safety goals for the systems have been met. Due to the flexible nature of the robot configuration, a traditional failure modes and effects analysis (FMEA) was not applicable. In addition, the primary focus of safety assessments of robotic systems has been the protection of personnel in the immediate area. In this application, the safety analysis must account for the sensitivities of the payload as well as traditional issues. A unique variation on the classical FMEA was developed that provides an organized and quite effective tool to assure that safety was adequately considered during the development of the robotic system. The fundamental aspects of the approach are outlined in the paper

  2. Cask system design guidance for robotic handling

    International Nuclear Information System (INIS)

    Griesmeyer, J.M.; Drotning, W.D.; Morimoto, A.K.; Bennett, P.C.

    1990-10-01

    Remote automated cask handling has the potential to reduce both the occupational exposure and the time required to process a nuclear waste transport cask at a handling facility. The ongoing Advanced Handling Technologies Project (AHTP) at Sandia National Laboratories is described. AHTP was initiated to explore the use of advanced robotic systems to perform cask handling operations at handling facilities for radioactive waste, and to provide guidance to cask designers regarding the impact of robotic handling on cask design. The proof-of-concept robotic systems developed in AHTP are intended to extrapolate from currently available commercial systems to the systems that will be available by the time that a repository would be open for operation. The project investigates those cask handling operations that would be performed at a nuclear waste repository facility during cask receiving and handling. The ongoing AHTP indicates that design guidance, rather than design specification, is appropriate, since the requirements for robotic handling do not place severe restrictions on cask design but rather focus on attention to detail and design for limited dexterity. The cask system design features that facilitate robotic handling operations are discussed, and results obtained from AHTP design and operation experience are summarized. The application of these design considerations is illustrated by discussion of the robot systems and their operation on cask feature mock-ups used in the AHTP project. 11 refs., 11 figs

  3. Ubiquitous Robotic Technology for Smart Manufacturing System.

    Science.gov (United States)

    Wang, Wenshan; Zhu, Xiaoxiao; Wang, Liyu; Qiu, Qiang; Cao, Qixin

    2016-01-01

    As manufacturing tasks become more individualized and more flexible, the machines in a smart factory are required to perform variable tasks collaboratively without reprogramming. This paper for the first time discusses the similarity between smart manufacturing systems and ubiquitous robotic systems and makes an effort to deploy ubiquitous robotic technology in the smart factory. Specifically, a component based framework is proposed in order to enable the communication and cooperation of heterogeneous robotic devices. Further, compared to the service robotic domain, smart manufacturing systems are often larger in scale, so a hierarchical planning method was implemented to improve planning efficiency. A test bed of a smart factory was developed. It demonstrates that the proposed framework is suitable for the industrial domain, and that the hierarchical planning method is able to solve large problems intractable with flat methods.

  4. Robots testing robots: ALAN-Arm, a humanoid arm for the testing of robotic rehabilitation systems.

    Science.gov (United States)

    Brookes, Jack; Kuznecovs, Maksims; Kanakis, Menelaos; Grigals, Arturs; Narvidas, Mazvydas; Gallagher, Justin; Levesley, Martin

    2017-07-01

    Robotics is increasing in popularity as a method of providing rich, personalized and cost-effective physiotherapy to individuals with some degree of upper limb paralysis, such as those who have suffered a stroke. These robotic rehabilitation systems are often high powered, and exoskeletal systems can attach to the person in a restrictive manner. Therefore, ensuring the mechanical safety of these devices before they come in contact with individuals is a priority. Additionally, rehabilitation systems may use novel sensor systems to measure current arm position. Used to capture and assess patient movements, these first need to be verified for accuracy by an external system. We present the ALAN-Arm, a humanoid robotic arm designed to be used for both accuracy benchmarking and safety testing of robotic rehabilitation systems. The system can be attached to a rehabilitation device and then replay generated or human movement trajectories, as well as autonomously play rehabilitation games or activities. Tests of the ALAN-Arm indicated it could recreate a generated slow movement path with a maximum error of 14.2 mm (mean = 5.8 mm) and perform cyclic movements up to 0.6 Hz with low gain (<1.5 dB). Replaying human data trajectories showed the ability to largely preserve human movement characteristics with slightly higher path length and lower normalised jerk.

  5. Analysis and optimization on in-vessel inspection robotic system for EAST

    International Nuclear Information System (INIS)

    Zhang, Weijun; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-01-01

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, great interest and demand have been increasing in robotic in-vessel inspection/operation systems, by which observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance become possible. However, implementing a practical and robust robotic system raises many challenges, due to a number of complex constraints and expectations, e.g., a high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest intervals between plasma discharge experiments, close-up and precise inspection, and operation efficiency, besides the general kinematic requirement of the D-shaped irregular vessel. In this paper we propose an upgraded robotic system with a redundant degrees-of-freedom (DOF) manipulator combined with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main function of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with the recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described at the end.

  6. Analysis and optimization on in-vessel inspection robotic system for EAST

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Weijun, E-mail: zhangweijun@sjtu.edu.cn; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-12-15

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, great interest and demand have been increasing in robotic in-vessel inspection/operation systems, by which observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance become possible. However, implementing a practical and robust robotic system raises many challenges, due to a number of complex constraints and expectations, e.g., a high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest intervals between plasma discharge experiments, close-up and precise inspection, and operation efficiency, besides the general kinematic requirement of the D-shaped irregular vessel. In this paper we propose an upgraded robotic system with a redundant degrees-of-freedom (DOF) manipulator combined with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main function of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with the recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described at the end.

  7. Optimizing a mobile robot control system using GPU acceleration

    Science.gov (United States)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  8. Siroco, a configurable robot control system

    International Nuclear Information System (INIS)

    Tejedor, B.G.; Maraggi, G.J.B.

    1988-01-01

    The SIROCO (Configurable Robot Control System) is an electronic system designed for applications where mechanized remote control equipment and robots are necessary, especially in nuclear power plants. The structure of the system (hardware and software) provides the following user characteristics: a) reduction in the time spent in NDT and in the radiation doses absorbed, due to remote control operation; b) the possibility of full automation of NDT; c) simultaneous control of up to six axes and generation of movements in remote areas; and d) the possibility of equipment unification, since SIROCO is a configurable system. (author)

  9. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  10. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC). It can be synchronized with the camera focusing and also provides automatic light intensity adjustment, which ensures the image quality. The composition principle of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, car parking assistance and car alarm systems, day and night.

  11. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach to collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of robots in implementing their tasks, especially when the robots navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods deal with the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this can cause the people and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction. The person notifies the robot about the avoidance direction via interaction, and the robot searches for the optimal collision-free path in the selected direction. In case no people interact with the robot, it selects the navigation path autonomously, choosing the path that is closest to the goal location. Humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify the gestures: the first model is a Back-Propagation Neural Network (BPNN), and the second model is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding the obstacles is implemented and integrated with the HRI system. The system is tested on the H20 robot from DrRobot Company (Canada), and a set of experiments were implemented to report the performance of the system in interacting with the human and avoiding
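
    As a hedged sketch of the gesture-classification stage, the snippet below trains an SVM on Kinect skeleton features with scikit-learn. The feature files, label meanings, and hyperparameters are hypothetical stand-ins, since the abstract does not specify them.

    ```python
    # Sketch of SVM gesture classification for avoidance-direction commands.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # X: one row per sample of flattened joint coordinates (e.g., 20 joints x 3)
    X = np.load("gesture_features.npy")   # hypothetical Kinect training data
    y = np.load("gesture_labels.npy")     # e.g., 0 = "go left", 1 = "go right"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # assumed hyperparameters
    clf.fit(X_tr, y_tr)
    print("gesture accuracy:", clf.score(X_te, y_te))
    ```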

  12. System for exchanging tools and end effectors on a robot

    International Nuclear Information System (INIS)

    Burry, D.B.; Williams, P.M.

    1991-01-01

    A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot is used, which permits the engagement and disengagement of the tool or end effector without the need for rotational orientation of the tool to the end effector/collet interface. As the tool changing system is not located on the robot arm, no umbilical cords are located on the robot. 12 figures

  13. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  14. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    Science.gov (United States)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results are reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
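
    The abstract does not disclose Argon's algorithms, but a generic machine-vision pose step of this kind can be sketched with OpenCV's solvePnP, which estimates the target's pose from known fiducial points. The fiducial layout, detected pixels, and camera intrinsics below are invented for illustration.

    ```python
    # Sketch of model-based pose estimation of a target from fiducial points.
    import cv2
    import numpy as np

    # assumed square fiducial layout on the target, in metres (coplanar)
    object_pts = np.array([[0, 0, 0], [0.5, 0, 0],
                           [0.5, 0.5, 0], [0, 0.5, 0]], dtype=np.float64)
    # example pixel detections of those fiducials in the camera image
    image_pts = np.array([[300, 260], [400, 258],
                          [402, 160], [302, 158]], dtype=np.float64)
    # assumed pinhole intrinsics (focal length 800 px, centre 320x240)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)      # rotation of the target w.r.t. the camera
    print("range to target (m):", np.linalg.norm(tvec))
    ```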

  15. The Tox21 robotic platform for the assessment of environmental chemicals--from vision to reality.

    Science.gov (United States)

    Attene-Ramos, Matias S; Miller, Nicole; Huang, Ruili; Michael, Sam; Itkin, Misha; Kavlock, Robert J; Austin, Christopher P; Shinn, Paul; Simeonov, Anton; Tice, Raymond R; Xia, Menghang

    2013-08-01

    Since its establishment in 2008, the US Tox21 inter-agency collaboration has made great progress in developing and evaluating cellular models for the evaluation of environmental chemicals as a proof of principle. Currently, the program has entered its production phase (Tox21 Phase II) focusing initially on the areas of modulation of nuclear receptors and stress response pathways. During Tox21 Phase II, the set of chemicals to be tested has been expanded to nearly 10,000 (10K) compounds and a fully automated screening platform has been implemented. The Tox21 robotic system combined with informatics efforts is capable of screening and profiling the collection of 10K environmental chemicals in triplicate in a week. In this article, we describe the Tox21 screening process, compound library preparation, data processing, and robotic system validation. Published by Elsevier Ltd.

  16. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer, for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited visual spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, with a typical speed of movement equal to 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 Newtons. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvements that such technology could bring to machine vision and robot guidance industrial applications.

  17. Robotics and remote systems for hazardous environments

    International Nuclear Information System (INIS)

    Jamshidi, M.; Eicker, P.

    1993-01-01

    This is the first volume in a series of books to be published by Prentice Hall on Environmental and Intelligent Manufacturing Systems. The editors have assembled an interdisciplinary collection of authors from industry, government, and academia, that provide a broad range of expertise on robotics and remote systems. Readily accessible to practicing engineers, the book provides case studies and introduces new technology applicable to remote operations in unstructured and/or hazardous environments. Chapter 1 gives an overview of the US Environmental Protection Agency's efforts to apply robotic technology to assist in the operations at hazardous waste sites. The next chapter focuses on the theory and implementation of robust impedance control for robotic manipulators. Chapter 3 presents a discussion on the integration of failure tolerance into robotic systems. The next two chapters address the issue of sensory feedback and its indispensable role in remote and/or hazardous environments. Chapter 6 presents numerous examples of robots and telemanipulators that have been applied for various tasks at the DOE's Savannah River Site. The following chapter picks up on this theme and discusses the fundamental paradigm shifts that are required in artificial intelligence for robots to deal with hazardous, unstructured, and dynamic environments. Chapter 8 returns to the issue of impedance control first raised in Chapter 2. While the majority of the applications discussed in this book are related to the nuclear industry, chapter 9 considers applying telerobotics for the control of traditional heavy machinery that is widely used in forestry, mining, and construction. The final chapter of the book returns to the topic of artificial intelligence's role in producing increased autonomy for robotic systems and provides an interesting counterpoint to the philosophy of reactive control discussed earlier

  18. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  19. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  20. Artificial intelligence in robot control systems

    Science.gov (United States)

    Korikov, A.

    2018-05-01

    This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article provides a formalization of the problems of robotic control system design as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality and a priori uncertainty. Decomposition of the extremum problems according to the method suggested by the author allows reducing them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.
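
    As a sketch, the design task described above can be written as a constrained extremum problem; the notation below is assumed for illustration, not taken from the paper.

    ```latex
    % Hedged formalization: x collects the design parameters of the control
    % system, J is the design cost, g_i and h_j are the constraints.
    \begin{aligned}
      & \min_{x \in X} \; J(x) \\
      & \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, \dots, m, \\
      & \phantom{\text{s.t.} \quad} h_j(x) = 0, \quad j = 1, \dots, p.
    \end{aligned}
    ```

    Decomposition in this setting means replacing one such high-dimensional problem with a sequence of lower-dimensional subproblems over subsets of x, each of which is tractable for standard solvers.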

  1. A neuromorphic controller for a robotic vehicle equipped with a dynamic vision sensor

    OpenAIRE

    Blum, Hermann; Dietmüller, Alexander; Milde, Moritz; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic electronic systems exhibit advantageous characteristics, in terms of low energy consumption and low response latency, which can be useful in robotic applications that require compact and low power embedded computing resources. However, these neuromorphic circuits still face significant limitations that make their usage challenging: these include low precision, variability of components, sensitivity to noise and temperature drifts, as well as the currently limited number of neuron...

  2. Informed Design to Robotic Production Systems; Developing Robotic 3D Printing System for Informed Material Deposition

    NARCIS (Netherlands)

    Mostafavi, S.; Bier, H.; Bodea, S.; Anton, A.M.

    2015-01-01

    This paper discusses the development of an informed Design-to-Robotic-Production (D2RP) system for additive manufacturing to achieve performative porosity in architecture at various scales. An extended series of experiments on materiality, fabrication and robotics was designed and carried out.

  3. Development of a robot system for converter relining; Tenro chikuro robot system no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Y; Kurahashi, M [Nissan Motor Co. Ltd., Tokyo (Japan)

    1995-09-12

    In steelmaking plants, the relining of converters requires plenty of manpower and time. Recently, the number of expert brick workers has decreased, and it has been difficult to gather the necessary number of workers for converter relining. To solve these problems, a robot system has been developed and put into service for converter relining. The system consists of two intelligent robots and an automatic brick conveying machine. With visual function and flexibly controlled hands, the robot is able to heap up bricks in the same manner as expert workers do. The automatic brick conveying machine consists of roller conveyors and a cage lifter that convey bricks on pallets to a position suitable for the robot to handle easily. This robot system has enabled much labor saving in converter relining. 8 figs.

  4. Inverse Modeling of Human Knee Joint Based on Geometry and Vision Systems for Exoskeleton Applications

    Directory of Open Access Journals (Sweden)

    Eduardo Piña-Martínez

    2015-01-01

    Full Text Available Current trends in robotics aim to close the gap that separates technology and humans, bringing novel robotic devices to improve human performance. Although robotic exoskeletons represent a breakthrough in mobility enhancement, there are design challenges related to the forces exerted on the users' joints that can result in severe injuries. This occurs because most current developments consider the joints as fixed rotational axes. This paper proposes the use of commercial vision systems to perform biomimetic joint design for robotic exoskeletons. This work proposes a kinematic model based on irregularly shaped cams as the joint mechanism, emulating the bone-to-bone joints in the human body. The paper follows a geometric approach for determining the location of the instantaneous center of rotation in order to design the cam contours. Furthermore, the use of a commercial vision system is proposed as the main measurement tool due to its noninvasive nature and because it allows subjects under measurement to move freely. The application of this method resulted in relevant information about the displacements of the instantaneous center of rotation at the human knee joint.
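
    A geometric approach of the kind described, estimating the instantaneous center of rotation (ICR) from vision-tracked markers, can be sketched as a Reuleaux-style construction: the ICR lies at the intersection of the perpendicular bisectors of each marker's displacement between frames. The marker coordinates below are hypothetical.

    ```python
    # Sketch of ICR estimation from two markers tracked across two frames.
    import numpy as np

    def icr_from_markers(p1a, p1b, p2a, p2b):
        """(p1a->p1b) and (p2a->p2b) are the 2D displacements of two markers
        between successive frames; returns the estimated ICR."""
        def bisector(a, b):
            mid = (a + b) / 2.0
            d = b - a
            n = np.array([-d[1], d[0]])      # normal to the displacement
            return mid, n
        m1, n1 = bisector(p1a, p1b)
        m2, n2 = bisector(p2a, p2b)
        # solve m1 + t1*n1 = m2 + t2*n2 for the intersection point
        A = np.column_stack([n1, -n2])
        t = np.linalg.solve(A, m2 - m1)
        return m1 + t[0] * n1

    icr = icr_from_markers(np.array([0.0, 0.0]), np.array([0.1, 0.02]),
                           np.array([0.3, 0.0]), np.array([0.38, 0.1]))
    ```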

  5. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. Stereovision techniques that produce a 3D image or movie are very interesting, but there are few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheel robot control system using stereovision. The results show the robot moves automatically based on stereovision captures.
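
    The pixel disparity information mentioned above can be obtained with OpenCV's block matcher; a minimal sketch follows, with file names and matcher parameters as illustrative assumptions.

    ```python
    # Sketch of disparity computation from a rectified stereo pair.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)   # 16x fixed-point disparities
    depth_proxy = disparity.astype("float32") / 16.0
    # larger disparity = closer object; a controller could steer the wheel
    # robot toward or away from nearby regions based on this map
    ```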

  6. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  7. Robot operating system (ROS) the complete reference

    CERN Document Server

    The objective of this book is to provide the reader with a comprehensive coverage on the Robot Operating Systems (ROS) and latest related systems, which is currently considered as the main development framework for robotics applications. The book includes twenty-seven chapters organized into eight parts. Part 1 presents the basics and foundations of ROS. In Part 2, four chapters deal with navigation, motion and planning. Part 3 provides four examples of service and experimental robots. Part 4 deals with real-world deployment of applications. Part 5 presents signal-processing tools for perception and sensing. Part 6 provides software engineering methodologies to design complex software with ROS. Simulations frameworks are presented in Part 7. Finally, Part 8 presents advanced tools and frameworks for ROS including multi-master extension, network introspection, controllers and cognitive systems. This book will be a valuable companion for ROS users and developers to learn more ROS capabilities and features.   ...

  8. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    International Nuclear Information System (INIS)

    Ren, Y J; Zhu, J G; Yang, X Y; Ye, S H

    2006-01-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the task of improving the SNR and compressing data, is constructed using the high-speed image processing capability of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric with the embedded CPU. The speed of image processing is accelerated due to the introduction of the FPGA and CPU. The use of the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching and convolution, and some modules are simulated as well. Finally, a comparison among modules using this design, a PC, and a DSP is carried out. Because the core of the high-speed image processing system is an FPGA chip, whose function can be renewed conveniently, the measurement system is, to a degree, intelligent

  9. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    Science.gov (United States)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving the SNR and compressing data, is built around the high-speed image-processing capability of the FPGA. The lower-level image-processing algorithms are realized by combining the FPGA fabric with the embedded CPU, and the speed of image processing is increased by this FPGA/CPU pairing. The embedded CPU also makes it easy to realize the logic design of the interfaces. Some key techniques are presented in the text, such as the read-write process, template matching and convolution, and several modules are simulated as well. Finally, a comparison is carried out among modules implemented with this design, with a PC, and with a DSP. Because the core of the high-speed image-processing system is an FPGA chip, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.
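
    The two records above name template matching and convolution as key FPGA modules. As a software point of reference, the following sketch is a plain normalized cross-correlation template matcher; it mirrors what such a module computes, but it is a minimal NumPy version of the standard technique, not the paper's FPGA design.

        import numpy as np

        def match_template_ncc(image, template):
            """Normalized cross-correlation template matching (software analogue
            of an FPGA template-matching module; a sketch, not the paper's code)."""
            ih, iw = image.shape
            th, tw = template.shape
            t = template - template.mean()
            t_norm = np.sqrt((t ** 2).sum())
            best, best_xy = -1.0, (0, 0)
            for y in range(ih - th + 1):
                for x in range(iw - tw + 1):
                    w = image[y:y + th, x:x + tw]
                    w = w - w.mean()
                    denom = np.sqrt((w ** 2).sum()) * t_norm
                    if denom > 0:
                        score = float((w * t).sum() / denom)
                        if score > best:
                            best, best_xy = score, (x, y)
            return best_xy, best

        img = np.random.rand(64, 64)
        tpl = img[20:28, 30:38].copy()        # template cut from the image itself
        print(match_template_ncc(img, tpl))   # -> ((30, 20), ~1.0)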

  10. Neurosurgical robotic arm drilling navigation system.

    Science.gov (United States)

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify the lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was about to be drilled through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.
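
    The record reports an auto-stopping algorithm but not its internals. A common breakthrough-detection heuristic stops the feed when the thrust force collapses after a sustained engaged phase; the sketch below is that generic heuristic under assumed thresholds, not the authors' algorithm.

        # Hedged sketch of a drill auto-stop rule. Thresholds and the force-drop
        # criterion are illustrative assumptions; the paper does not disclose its
        # exact algorithm in this record.

        def should_stop(force_history, engaged_n=20, engage_thresh=5.0, drop_ratio=0.4):
            """Return True when the bone appears about to be drilled through.

            force_history: recent thrust-force samples in newtons, oldest first.
            """
            if len(force_history) < engaged_n + 1:
                return False
            window = force_history[-(engaged_n + 1):-1]
            mean_engaged = sum(window) / len(window)
            # Require a sustained engaged phase, then a sharp relative force drop.
            return mean_engaged > engage_thresh and force_history[-1] < drop_ratio * mean_engaged

        samples = [0.5] * 5 + [6.0] * 30 + [1.5]   # engage cortical bone, then breakthrough
        print(should_stop(samples))                # True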

  11. Adaptive Robotic Systems Design in University of Applied Sciences

    Directory of Open Access Journals (Sweden)

    Gunsing Jos

    2016-01-01

    In the industry for highly specialized machine building (small series with high variety and high complexity) and in healthcare, a demand for adaptive robotics is rapidly emerging. Technically skilled people are not always available in sufficient numbers. A lot of know-how with respect to the required technologies is available, but successful adaptive robotic system designs are still rare. In our research at the university of applied sciences, we incorporate newly available technologies into our education courses by way of research projects; in these projects, students investigate the application possibilities of new technologies together with companies and teachers. We are thus able to transfer knowledge to the students, including an innovation-oriented attitude and skills. In recent years we have developed several industrial bin-picking applications for logistics and machining factories with different types of 3D vision. Force-feedback gripping has also been developed, including slip sensing. Especially for healthcare robotics, we developed a so-called twisted-wire actuator, which is very compact, in combination with an underactuated gripper manufactured in one piece from polyurethane. We work on modeling and testing the functions of these designs, but also on complete demonstrator systems. Since the number of disciplines involved in complex product and machine design is increasing rapidly, we pay a lot of attention to systems engineering methods. Apart from the classical engineering disciplines like mechanical, electrical, software and mechatronics engineering, for adaptive robotics in particular more and more disciplines like industrial product design, communication … multimedia design, and of course physics and even art are to be involved, depending on the specific application to be designed. Design tools like the V-model, agile/scrum and design approaches to obtain the best set of requirements are being implemented in the engineering studies from

  12. An Address Event Representation-Based Processing System for a Biped Robot

    Directory of Open Access Journals (Sweden)

    Uziel Jaramillo-Avila

    2016-02-01

    In recent years, several important advances have been made in the fields of both biologically inspired sensory processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems that mimic neural architectures and process information in the manner of the brain; however, few robotic applications exist to illustrate it. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNN) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques; most such setups are fairly simple to implement without very sophisticated components, yet it can still serve as a good test in more elaborate circumstances. In addition, the proposed locomotion system is able to control the six DOFs of a biped robot in a coordinated way while switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.
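
    The record describes a DVS-driven line follower whose controller is a spiking neural network on an FPGA. As a rough software analogue (my assumption, not the paper's SNN), a steering signal can be derived from the spatial balance of address events around the image center:

        import numpy as np

        # Minimal sketch: steer a line follower from DVS address events by comparing
        # event counts left and right of the image center. Events are assumed to be
        # (x, y, polarity, timestamp) tuples; WIDTH = 128 mimics a DVS128 sensor.

        WIDTH = 128

        def steering_from_events(events, gain=1.0):
            xs = np.array([e[0] for e in events])
            if xs.size == 0:
                return 0.0
            left = np.sum(xs < WIDTH // 2)
            right = xs.size - left
            # Positive -> turn right, negative -> turn left, normalized to [-1, 1].
            return gain * (right - left) / xs.size

        events = [(30, 40, 1, 0.001), (35, 41, 1, 0.002), (90, 40, 1, 0.003)]
        print(steering_from_events(events))  # negative: event mass is left of center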

  13. Master-slave robotic system for needle indentation and insertion.

    Science.gov (United States)

    Shin, Jaehyun; Zhong, Yongmin; Gu, Chengfan

    2017-12-01

    Bilateral control of a master-slave robotic system is a challenging issue in robot-assisted minimally invasive surgery. It requires knowledge of the contact interaction between a surgical (slave) robot and soft tissues. This paper presents a master-slave robotic system for needle indentation and insertion that is able to characterize the contact interaction between the robotic needle and soft tissues. A bilateral controller is implemented using a linear motor for robotic needle indentation and insertion. A new nonlinear state observer is developed to monitor the contact interaction with soft tissues online. Experimental results demonstrate the efficacy of the proposed master-slave robotic system for robotic needle indentation and insertion.
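
    The record does not give the bilateral control law. A common textbook position-force scheme, shown below under assumed gains, has the slave track the master position while the measured needle-tissue force is reflected back to the operator; it illustrates the idea, not the authors' controller.

        # Illustrative bilateral position-force scheme; kp and force_scale are
        # assumed values, and the function names are hypothetical.

        def bilateral_step(x_master, x_slave, f_tissue, kp=200.0, force_scale=1.0):
            """One control step. Positions in meters, forces in newtons."""
            u_slave = kp * (x_master - x_slave)   # slave motor command [N]
            f_master = -force_scale * f_tissue    # force reflected to the operator [N]
            return u_slave, f_master

        print(bilateral_step(x_master=0.010, x_slave=0.008, f_tissue=1.2))
        # -> (0.4, -1.2): the slave is pushed toward the master position while the
        # operator feels the needle-tissue reaction force.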

  14. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    In computer vision, and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to dealing with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial sensor set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image-processing algorithms.

  15. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    In computer vision, and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to dealing with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial sensor set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image-processing algorithms.

  16. The Development of Radiation hardened tele-robot system - Development of artificial force reflection control for teleoperated mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju Jang; Hong, Sun Gi; Kang, Young Hoon; Kim, Min Soeng [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    One of the most important issues in teleoperation is to provide a sense of telepresence so that tasks can be conducted more reliably. In particular, teleoperated mobile robots need some kind of backup system for when the operator is blind to the remote situation owing to a failure of the vision system. In the first year, the idea of artificial force reflection was researched to enhance the reliability of operation when the mobile robot travels on flat ground. In the second year, we extended the previous results to help the teleoperator even when the robot climbs stairs. Finally, we applied the developed control algorithms in real experiments. The artificial force reflection method has two modes: traveling on flat ground and climbing stairs. When traveling on flat ground, force information is generated artificially from the range data from the environment, while an impulse force is generated when climbing stairs. To verify the validity of our algorithm, we developed a simulator consisting of a joystick and a visual display system. Through experiments using this system, we confirmed the validity and effectiveness of our new idea of artificial force reflection in teleoperated mobile robots. 11 refs., 30 figs. (Author)

  17. Robotized production systems observed in modern plants

    Science.gov (United States)

    Saverina, A. N.

    1985-09-01

    Robots, robotized lines and robotized sectors are no longer innovations in the shops of automotive plants. The widespread robotization of automobile assembly operations is described in general terms, and robot use for machining operations is also discussed.

  18. A novel teaching system for industrial robots.

    Science.gov (United States)

    Lin, Hsien-I; Lin, Yu-Hsiang

    2014-03-27

    The most important tool for controlling an industrial robotic arm is the teach pendant, which controls the robotic arm's movement in the workspace and accomplishes teaching tasks. A good teaching tool should be easy to operate and should complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed that enables users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of a teach pen, optical markers on the pen, a motion capture system, and a pen-tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen-tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of the new system, and the results show that it provides high accuracy, excellent operating performance, and a stable error rate. The system also maintains superior performance even when users work on platforms with different inclination angles.
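
    For reference, Fitts' Law as commonly used in such pointing evaluations is the Shannon formulation below, where MT is the movement time, D the distance to the target, W the target width, and a, b empirically fitted constants. The record does not state which variant the authors used, so this particular form is an assumption.

        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)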

  19. Biological Immune System Applications on Mobile Robot for Disabled People

    Directory of Open Access Journals (Sweden)

    Songmin Jia

    2014-01-01

    To improve the service quality of service robots for the disabled, an immune-system approach is applied to the robot for its advantages such as diversity, dynamics, parallel management, self-organization, and self-adaptation. Following immune system theory, the local environmental condition sensed by the robot is considered an antigen, while the robot is regarded as a B-cell and each possible node as an antibody. Antibody-antigen affinity is employed to choose the optimal node and thereby ensure that the service robot passes along the optimal path. The paper details the application of the immune system to the service robot and gives experimental results.

  20. Multi-sensor measurement system for robotic drilling

    OpenAIRE

    Frommknecht, Andreas; Kühnle, Jens; Pidan, Sergej; Effenberger, Ira

    2015-01-01

    A multi-sensor measurement system for robotic drilling is presented. The system enables a robot to measure its 6D pose with respect to the workpiece and to establish a reference coordinate system for drilling. The robot approaches the drill point and performs an orthogonal alignment with the workpiece. Although the measurement systems are readily capable of achieving high position accuracy and low deviation from perpendicularity, experiments show that inaccuracies in the robot's 6D pose and e...

  1. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods usually require a lot of computation and can still fail to meet real-time demands in robot vision. To solve this problem, a quick method for detecting interest points is presented. To decrease computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated according to the interest point orientation, as in the SIFT descriptor. The feature vector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as its elements. Compared with the SIFT descriptor, this descriptor's dimensionality is clearly reduced, which simplifies the point-matching process. The performance of the method is analyzed theoretically in this paper, and the experimental results confirm its validity.
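
    The chromaticity part of the descriptor, the mean normalized g and b per subregion, can be sketched as below. The fixed 4x4 grid and the omission of scale-proportional subregion sizing and orientation alignment are my simplifications of the method as summarized above.

        import numpy as np

        def color_descriptor(rgb_patch, grid=4):
            """Mean normalized g and b per subregion of an RGB patch (a sketch)."""
            h, w, _ = rgb_patch.shape
            s = rgb_patch.astype(np.float64).sum(axis=2) + 1e-9   # avoid divide-by-zero
            g = rgb_patch[..., 1] / s                             # normalized green
            b = rgb_patch[..., 2] / s                             # normalized blue
            feats = []
            ys = np.linspace(0, h, grid + 1, dtype=int)
            xs = np.linspace(0, w, grid + 1, dtype=int)
            for i in range(grid):
                for j in range(grid):
                    cell = (slice(ys[i], ys[i + 1]), slice(xs[j], xs[j + 1]))
                    feats += [g[cell].mean(), b[cell].mean()]
            return np.array(feats)        # 2 * grid^2 dimensions (32 for grid=4)

        patch = np.random.randint(0, 256, (16, 16, 3))
        print(color_descriptor(patch).shape)   # (32,)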

  2. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Navigation with a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effort, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. The method uses discrete image sequences to form a discrete state space, which makes it especially suitable for bipedal walking robots with a single camera walking on a barrier-free plane surface to track a specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency, and an improved control method based on a canonical piecewise-linear function (PLF) is also proposed. To restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera-sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper bandwidth control value.

  3. Foraging behavior analysis of swarm robotics system

    Directory of Open Access Journals (Sweden)

    Sakthivelmurugan E.

    2018-01-01

    Swarm robotics studies how a number of small robots can work together synchronously to accomplish a given task. Swarm robotics faces many problems in performing such tasks: pattern formation, aggregation, chain formation, self-assembly, coordinated movement, hole avoidance, foraging and self-deployment. Foraging, the task of discovering an item and bringing it back to the shell, is the most essential part of swarm robotics. Researchers have conducted foraging experiments with randomly moving robots and have ended up with unique solutions. Most researchers have conducted experiments using a circular arena, with the shell placed at the centre of the arena and a well-known environment boundary. In this study, an attempt is made to compare different strategic movements: the straight-line approach, the parallel-line approach, the divider approach, the expanding-square approach, and the parallel-sweep approach. All these approaches are simulated using the Player/Stage open-source simulation software, based on C and C++ programming in a Linux operating system. Finally, a statistical comparison of the task completion times of these strategies is carried out using ANOVA to identify the most significant searching strategy.

  4. A concept of distributed architecture for maintenance robot systems

    International Nuclear Information System (INIS)

    Asama, Hajime

    1990-01-01

    Aiming at the development of a robot system for maintenance tasks in nuclear power plants, a concept of distributed architecture for autonomous robot systems is discussed. First, based on an investigation of maintenance tasks, requirements for maintenance robots are introduced, and structures to realize multiple functions are discussed. Then, as a new design strategy for maintenance robot systems, an autonomous and decentralized robot system is proposed, composed of multiple robots, computers and pieces of equipment, and the concept of ACTRESS (ACTor-based Robots and Equipments Synthetic System), including a communication framework between robotic components, is designed. Finally, as a model of ACTRESS, an experimental system is developed which deals with object-pushing tasks performed by two micromice and an environment modeler communicating with each other. Parallel independent motion and cooperative motion based on communication are both reconciled, and the efficiency of the distributed architecture is verified. (author)

  5. Multi-Locomotion Robotic Systems New Concepts of Bio-inspired Robotics

    CERN Document Server

    Fukuda, Toshio; Sekiyama, Kosuke; Aoyama, Tadayoshi

    2012-01-01

    Nowadays, much attention is being paid to robots working in human living environments, in fields such as medicine, welfare and entertainment. Various types of research are being actively conducted in a variety of fields such as artificial intelligence, cognitive engineering, sensor technology, interfaces and motion control. In the future, it is expected that highly functional human-like robots will be realized by integrating technologies from these various fields. The book presents new developments and advances in the field of bio-inspired robotics research, introducing the state of the art and the idea of a multi-locomotion robotic system to implement the diversity of animal motion. It covers theoretical and computational aspects of Passive Dynamic Autonomous Control (PDAC), robot motion control, multi-legged walking and climbing as well as brachiation, focusing on concrete robot systems, components and applications. In addition, gorilla-type robot systems are described as...

  6. Development of an advanced robot manipulator system

    International Nuclear Information System (INIS)

    Oomichi, Takeo; Higuchi, Masaru; Shimizu, Yujiro; Ohnishi, Ken

    1991-01-01

    A sophisticated manipulator system for an advanced robot was developed under the 'Advanced Robot Technology Development' Program promoted and supported by the Agency of Industrial Science and Technology of MITI. The authors participated in the development of a fingered manipulator with force and tactile sensors applicable to a master-slave robot system. Our slave manipulator is equipped with four fingers. Though a finger needs many degrees of freedom to be suitable for skilful handling of an object, our fingers are designed with the minimum degrees of freedom in order to reduce weight. Each fingertip was designed to be similar to a human finger, which has flexibility, softness and a sense of contact. The shape of the master finger manipulator was designed so that the movement of the fingers is smoother and the constraint felt by the operator is smaller. We adopted a pneumatic pressure system for transmitting the tactile feeling of the slave fingers to the master fingers. A multiple-sensory bilateral control system, which gives an operator a feeling of force and touch, reduces the feeling of constraint in carrying out work with a robot system. (author)

  7. Robotics virtual rail system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representing the virtual track for the robot. Start and end points are assigned to the desired path, and velocities are associated with each segment of the desired path. A waypoint file is generated that includes positions along the virtual track representing the desired path, from the start point to the end point, together with the velocity of each segment. The waypoint file is sent to the robot for traversing the virtual track.
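
    A minimal sketch of the waypoint-file generation step described in this patent record: each segment carries its own velocity and is sampled from start to end. The CSV-like output layout and the spacing parameter are assumptions; the actual file format is not given in the record.

        # Hypothetical waypoint generation from path segments; names and the
        # output format are illustrative, not the patented implementation.

        def generate_waypoints(segments, spacing=0.1):
            """segments: list of ((x0, y0), (x1, y1), velocity) tuples."""
            rows = []
            for (x0, y0), (x1, y1), v in segments:
                dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                n = max(1, int(dist / spacing))
                for k in range(n + 1):
                    t = k / n
                    rows.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
            return rows

        path = [((0.0, 0.0), (1.0, 0.0), 0.5), ((1.0, 0.0), (1.0, 2.0), 0.3)]
        for x, y, v in generate_waypoints(path, spacing=0.5):
            print(f"{x:.2f},{y:.2f},{v:.2f}")   # x, y, segment velocity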

  8. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  9. Multisensor 3D Perception System for the Mobile Robot HILARE

    OpenAIRE

    Ferrer , Michel

    1982-01-01

    The study presented here falls within the broad field of artificial vision. It concerns more particularly the integration of the three-dimensional (3D) perception system of the autonomous mobile robot HILARE. This system is composed of a solid-state matrix camera, a laser rangefinder, and a mechanical structure providing deflection of the laser beam. This thesis describes: the design of the deflection structure; the software for processing multilevel video images based...

  10. Intelligent monitoring-based safety system of massage robot

    Institute of Scientific and Technical Information of China (English)

    胡宁; 李长胜; 王利峰; 胡磊; 徐晓军; 邹雲鹏; 胡玥; 沈晨

    2016-01-01

    As an important attribute of robots, safety is involved in each phase of a robot's full life cycle, including design, manufacturing, operation and maintenance. The study of robot safety is a systematic project. Traditionally, robot safety is defined as follows: robots should not collide with humans, or should not harm humans when they do collide. Based on this definition, researchers have proposed ex ante and ex post safety standards and safety strategies, and have used risk indices and risk levels as evaluation indexes for safety methods. A massage robot realizes its massage therapy function by applying a rhythmic force to the massage object. The traditional definition of safety, safety strategies, and safety realization methods therefore cannot satisfy the functional and safety requirements of massage robots. Based on descriptions of the environment and tasks of massage robots, the present study analyzes the safety requirements of massage robots; analyzes their potential safety hazards using the fault-tree tool; proposes an error-monitoring-based intelligent safety system for massage robots that monitors and evaluates potential hazard states and makes decisions accordingly; and verifies the feasibility of the intelligent safety system through an experiment.

  11. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can grab the sealing rings and place them on the sealing ports of the fast-moving battery lid successfully. More importantly, the proposed system clearly improves the efficiency of the battery production line.

  12. Modeling and Control of Collaborative Robot System using Haptic Feedback

    Directory of Open Access Journals (Sweden)

    Vivekananda Shanmuganatha

    2017-08-01

    When two robot systems can share understanding using agreed knowledge, within the constraints of the systems' communication protocol, the approach may lead to a common improvement. This has prompted numerous new research inquiries in human-robot collaboration. We have built a framework capable of autonomous tracking and of performing table-top object manipulation with humans, and we have implemented two different activity models to trigger robot actions. The idea here is to explore collaborative systems and to develop a plan for them to work in a collaborative environment, which has many benefits over a single, more complex system. In the paper, two robots that cooperate with each other are constructed. The cooperation linking the two robotic arms, the torque required, and the relevant parameters are analyzed. The purpose of this paper is thus to demonstrate a modular robot system that can serve as a base for aspects of collaborative robotics using haptics.

  13. Dynamic analysis of space robot remote control system

    Science.gov (United States)

    Kulakov, Felix; Alferov, Gennady; Sokolov, Boris; Gorovenko, Polina; Sharlay, Artem

    2018-05-01

    The article presents an analysis of the construction of a two-stage remote control for space robots. This control ensures the efficiency of the robot control system under large delays in the transmission of control signals from the ground control center to the local control system of the space robot. The conditions for control stability and high transparency are found.

  14. Robots, systems, and methods for hazard evaluation and visualization

    Science.gov (United States)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.; Hartley, Robert S.; Gertman, David I.; Kinoshita, Robert A.; Whetten, Jonathan

    2013-01-15

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.

  15. An expert system for automated robotic grasping

    International Nuclear Information System (INIS)

    Stansfield, S.A.

    1990-01-01

    Many US Department of Energy sites and facilities will be environmentally remediated during the next several decades. A number of the restoration activities (e.g., decontamination and decommissioning of inactive nuclear facilities) can only be carried out by remote means and will be manipulation-intensive tasks. Experience has shown that manipulation tasks are especially slow and fatiguing for the human operator of a remote manipulator. In this paper, the authors present a rule-based expert system for automated, dextrous robotic grasping. This system interprets the features of an object to generate hand shaping and wrist orientation for a robot hand and arm. The system can be used in several different ways to lessen the demands on the human operator of a remote manipulation system - either as a fully autonomous grasping system or one that generates grasping options for a human operator and then automatically carries out the selected option

  16. Mechanical deployment system on aries an autonomous mobile robot

    International Nuclear Information System (INIS)

    Rocheleau, D.N.

    1995-01-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light-stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of 5 interlocking rail elements which starts from a retracted position and extends upward to simultaneously position 3 separate camera packages to inspect the top three drums of a column of four drums. The second is a special-case Grashof four-bar parallelogram mechanism used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper; the lift mechanism is discussed in detail.

  17. Hi-Vision telecine system using pickup tube

    Science.gov (United States)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  18. SRAO: the first southern robotic AO system

    Science.gov (United States)

    Law, Nicholas M.; Ziegler, Carl; Tokovinin, Andrei

    2016-08-01

    We present plans for SRAO, the first Southern robotic AO system. SRAO will use AO-assisted speckle imaging and Robo-AO-heritage high-efficiency observing to confirm and characterize thousands of planet candidates produced by major new transit surveys like TESS, and is the first AO system capable of building a comprehensive several-thousand-target multiplicity survey at sub-AU scales across the main sequence. We also describe results from Robo-AO, the first robotic LGS-AO system, which has observed tens of thousands of Northern targets, often using a similar speckle- or Lucky-Imaging-assisted mode. SRAO will be a moderate-order natural-guide-star adaptive optics system that uses an innovative photon-counting wavefront sensor and an EMCCD speckle-imaging camera to guide on faint stars with the 4.1 m SOAR telescope. The system will produce diffraction-limited imaging in the NIR on targets as faint as m_V = 16. In AO-assisted speckle-imaging mode the system will attain the 30-mas visible diffraction limit on targets at least as faint as m_V = 17. The system will be the first Southern-hemisphere robotic adaptive optics system, with overheads an order of magnitude smaller than comparable systems. Using Robo-AO's proven robotic AO software, SRAO will be capable of sub-minute observing overheads, allowing the observation of at least 200 targets per night. SRAO will attain three times the angular resolution of the Palomar Robo-AO system in the visible.

  19. Robotically assisted MRgFUS system

    Science.gov (United States)

    Jenne, Jürgen W.; Krafft, Axel J.; Maier, Florian; Rauschenberg, Jaane; Semmler, Wolfhard; Huber, Peter E.; Bock, Michael

    2010-03-01

    Magnetic resonance imaging guided focused ultrasound surgery (MRgFUS) is a highly precise method to ablate tissue non-invasively. The objective of this ongoing work is to establish an MRgFUS therapy unit consisting of a specially designed FUS applicator as an add-on to a commercial robotic assistance system originally designed for percutaneous needle interventions in whole-body MRI systems. The fully MR-compatible robotic assistance system InnoMotion™ (Synthes Inc., West Chester, USA; formerly InnoMedic GmbH, Herxheim, Germany) offers six degrees of freedom. The developed add-on FUS treatment applicator features a fixed-focus ultrasound transducer (f = 1.7 MHz; f' = 68 mm; NA = 0.44; elliptically shaped -6 dB focus: 8.1 mm length, 1.1 mm diameter) embedded in a water-filled flexible bellow. A Mylar® foil is used as the acoustic window, encompassed by a dedicated MRI loop coil. For FUS application, the therapy unit is directly connected to the head of the robotic system, and the treatment region is targeted from above. A newly developed in-house software tool allows complete remote control of the MRgFUS robot system and online analysis of MRI thermometry data. The system's ability to perform therapeutically relevant focal spot scanning was tested in a closed-bore clinical 1.5 T MR scanner (Magnetom Symphony, Siemens AG, Erlangen, Germany) in animal experiments with pigs. The FUS therapy procedure was performed entirely under MRI guidance, including initial therapy planning, online MR thermometry, and final contrast-enhanced imaging for lesion detection. In vivo trials proved the MRgFUS robot system to be highly MR-compatible. MR-guided focal spot scanning experiments were performed, and a well-defined pattern of thermal tissue lesions was created. A total in vivo positioning accuracy of the US focus of better than 2 mm was estimated, which is comparable to existing MRgFUS systems. The newly developed FUS robotic system offers accurate, highly flexible focus positioning. With its access

  20. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied to the recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (feature count and dimensionality-reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: the combination of feature generation techniques; the application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which proved very efficient for preprocessing the data; and the use of a suitable NN design and learning method.

  1. Navigation of robotic system using cricket motes

    Science.gov (United States)

    Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.

    2011-06-01

    This paper presents a novel algorithm for self-mapping of cricket motes that can be used for the indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that can provide an indoor localization service to its users via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system but is insufficient if the network is to be used for navigation, so a modified SLAM algorithm is applied to overcome its shortcomings. Finally, the self-mapped cricket motes can be used for the navigation of autonomous robotic systems in an indoor location.
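
    The localization step rests on trilateration. A standard linearized least-squares formulation (my formulation; the record only names the technique) subtracts the first range equation from the rest, turning the quadratic range equations |x - p_i|^2 = r_i^2 into a linear system A x = b:

        import numpy as np

        def trilaterate(beacons, ranges):
            """Least-squares position from beacon positions p_i and ranges r_i."""
            p = np.asarray(beacons, dtype=float)   # shape (n, 2), n >= 3
            r = np.asarray(ranges, dtype=float)
            # Subtracting equation 0 from equation i gives:
            #   2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
            A = 2.0 * (p[1:] - p[0])
            b = (r[0] ** 2 - r[1:] ** 2) + (p[1:] ** 2).sum(1) - (p[0] ** 2).sum()
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
        truth = np.array([1.0, 1.0])
        ranges = [np.linalg.norm(truth - np.array(bc)) for bc in beacons]
        print(trilaterate(beacons, ranges))        # ~ [1. 1.]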

  2. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

  3. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

  4. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  5. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  6. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low-visibility operations in the terminal area environment, with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and for the implementation of regulatory standards and design guidance to support the introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low-visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision, from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR, appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  7. A Vision Controlled Robot to Detect and Collect Fallen Hot Cobalt60 Capsules inside Wet Storage Pool of Cobalt60 Irradiators

    International Nuclear Information System (INIS)

    Solyman, A.E.M.

    2015-01-01

    A typical irradiator that uses radioactive cobalt-60 capsule sources is one of the peaceful uses of atomic energy and is strategically important for the sterilization of medical products and the treatment of food against bacteria and fungi before export. However, there are several well-known problems related to cobalt-60 capsules falling into the wet storage pool as a result of manufacturing defects, defective welds, or problems in the vertical movement of the radioactive source rack. It is therefore necessary to study this problem and solve it in a scientific way that keeps humans as far as possible from radiation exposure, in accordance with the principles of radiation protection and safety issued by the International Atomic Energy Agency. The present work considers the possibility of using a vision-based robot arm to collect fallen hot cobalt-60 capsules from the wet storage pool. A 5-DOF robot arm is designed, and vision algorithms are established to pick a fallen capsule up off the bottom surface of the storage pool, read the information printed on its edge (cap), and move it to a safe storage place. Two object detection approaches are studied: an RGB-based filter and a background subtraction technique. The vision algorithms and camera calibration are implemented in the MATLAB/SIMULINK environment. The robot arm's forward and inverse kinematics are developed and programmed on an embedded microcontroller system. Experiments show the validity of the proposed system and prove its success, and the results confirm the accuracy of the camera calibration equations. The collecting process is done without the intervention of operators, so radiation safety is increased. Vibrations were observed during robot arm motion, so the motor rotation speed was limited to 10 degrees per second to avoid them. This application keeps the operators as far as possible from radiation exposure and thus increases radiation safety.
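
    Of the two detection approaches named, background subtraction is easy to sketch: difference the live frame against a reference image of the empty pool floor, threshold, and take the blob centroid. The threshold value and the single-blob assumption below are illustrative, not the thesis's exact pipeline.

        import numpy as np

        def detect_capsule(frame, background, thresh=30):
            """Return the (u, v) pixel centroid of the changed region, or None."""
            diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
            mask = diff > thresh                   # pixels that differ from the floor
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                return None
            return float(xs.mean()), float(ys.mean())

        bg = np.zeros((120, 160), dtype=np.uint8)  # reference: empty pool floor
        frame = bg.copy()
        frame[50:60, 70:80] = 200                  # synthetic bright capsule
        print(detect_capsule(frame, bg))           # ~ (74.5, 54.5)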

  8. Intensity measurement of automotive headlamps using a photometric vision system

    Science.gov (United States)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive headlamp luminous intensity tests are introduced, and the rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and the goniometer are discussed, and directions for new work are identified.

  9. Virtual tutor systems for robot-assisted instruction

    Science.gov (United States)

    Zhao, Zhijing; Zhao, Deyu; Zhang, Zizhen; Wei, Yongji; Qi, Bingchen; Okawa, Yoshikuni

    2004-03-01

    Virtual Reality is an advanced computer technology that has been applied in the field of instruction with evident effect. At the same time, robot-assisted instruction has become practical with the continuous development of robot technology and artificial intelligence. This paper introduces a virtual tutor system for robot-assisted instruction.

  10. Model-based systems engineering to design collaborative robotics applications

    NARCIS (Netherlands)

    Hernandez Corbato, Carlos; Fernandez-Sanchez, Jose Luis; Rassa, Bob; Carbone, Paolo

    2017-01-01

    Novel robot technologies are becoming available to automate more complex tasks, more flexibly, and in collaboration with humans. Methods and tools are needed in the automation and robotics industry to develop and integrate this new breed of robotic systems. In this paper, the ISE&PPOOA

  11. Laboratory robotics systems at the Savannah River Laboratory

    International Nuclear Information System (INIS)

    Dyches, G.M.; Burkett, S.D.

    1983-01-01

    Many analytical chemistry methods normally used at the Savannah River site require repetitive procedures and the handling of radioactive and other hazardous solutions. Robotics is being investigated as a method of reducing personnel fatigue and radiation exposure while increasing product quality. Several applications of various commercially available robot systems are discussed, involving cold (nonradioactive) and hot (radioactive) sample preparations and glovebox waste removal. Problems encountered in robot programming, parts fixturing, the design of special robot hands and other support equipment, glovebox operation, and operator-system interaction are discussed. A typical robot-system cost analysis for one application is given.

  12. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents the fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with monocular vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection algorithm integrating speeded-up robust features (SURF) and the random sample consensus (RANSAC) algorithm was used to recognize a sample object in several captured images. In contrast to conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data containing outliers. With SURF and RANSAC, improved accuracy is expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed for its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and the camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is computationally fast, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground-truth data using root mean square errors (RMSEs).
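
    A minimal skeleton of the IMU/vision fusion loop is sketched below: IMU acceleration drives the prediction, and a camera position fix drives the update. The four-state constant-velocity model and the noise values are illustrative assumptions, not the paper's exact formulation (with this linear model the filter reduces to a standard Kalman filter).

        import numpy as np

        # State x = [px, py, vx, vy]; acc is the planar IMU acceleration.
        def ekf_predict(x, P, acc, dt, q=0.05):
            F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
            B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
            x = F @ x + B @ acc
            P = F @ P @ F.T + q * np.eye(4)       # assumed process noise
            return x, P

        def ekf_update(x, P, z, r=0.1):
            H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # camera measures position
            S = H @ P @ H.T + r * np.eye(2)                # assumed measurement noise
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = np.zeros(4), np.eye(4)
        x, P = ekf_predict(x, P, acc=np.array([0.2, 0.0]), dt=0.01)
        x, P = ekf_update(x, P, z=np.array([0.001, 0.0]))
        print(x)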

  13. Machine Vision Tests for Spent Fuel Scrap Characteristics

    International Nuclear Information System (INIS)

    BERGER, W.W.

    2000-01-01

    The purpose of this work is to perform a feasibility test of a machine vision system for potential use at the Hanford K Basins during spent nuclear fuel (SNF) operations. This report documents the testing performed to establish the functionality of the system, including a quantitative assessment of the results. Fauske and Associates, Inc., which has been intimately involved in the development of the SNF safety basis, has teamed with Agris-Schoen Vision Systems, experts in robotics, tele-robotics, and machine vision, for this work.

  14. A volumetric data system for environmental robotics

    International Nuclear Information System (INIS)

    Tourtellott, J.

    1994-01-01

    A three-dimensional, spatially organized (volumetric) data system provides an effective means of integrating and presenting environmental sensor data to robotic systems and operators. Because of the unstructured nature of environmental restoration applications, new robotic control strategies are being developed that include environmental sensors and interactive data interpretation. The volumetric data system provides key features to facilitate these new control strategies, including: integrated representation of surface, subsurface and above-surface data; differentiation of mapped and unmapped regions in space; sculpting of regions in space to best exploit data from line-of-sight sensors; integration of diverse sensor data (for example, dimensional, physical/geophysical, chemical, and radiological); incorporation of data provided at different spatial resolutions; efficient access for high-speed visualization and analysis; and geometric modeling tools to update a "world model" of an environment. The applicability to underground storage tank remediation and buried waste site remediation is demonstrated in several examples. By integrating environmental sensor data into robotic control, the volumetric data system will lead to safer, faster, and more cost-effective environmental cleanup.

  15. Robotic neurorehabilitation system design for stroke patients

    Directory of Open Access Journals (Sweden)

    Baoguo Xu

    2015-03-01

    In this article, a neurorehabilitation system combining robot-aided rehabilitation with a motor imagery-based brain-computer interface is presented. Feature extraction and classification algorithms for motor imagery electroencephalography are implemented on our brain-computer interface research platform. The main hardware platform for functional recovery therapy is the Barrett Whole-Arm Manipulator (WAM). Mental imagination of upper-limb movements is translated into triggers for the WAM arm to stretch the affected upper limb and move it along a predefined trajectory. A fuzzy proportional-derivative position controller is proposed to control the WAM arm so that it performs passive rehabilitation training effectively. A preliminary experiment aimed at testing the proposed system and gaining insight into the potential of motor imagery electroencephalography-triggered robotic therapy is reported.

  16. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

    Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and the introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and of 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways, with random sequencing used to assign the execution order of the tasks based on the type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group perform better in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helps the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision makes laparoscopic performance easier for participants without laparoscopic experience, unlike those with experience in laparoscopic procedures. © The Author(s) 2015.

  17. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  18. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview of the issues encountered in real-time broadcasting of a video stream, and of their solutions, is given. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end-users is proposed. The optimal performance parameters of network equipment for a finite number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average time delay for broadcasting in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  19. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color night-vision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  20. Message Encryption in Robot Operating System: Collateral Effects of Hardening Mobile Robots

    Directory of Open Access Journals (Sweden)

    Francisco J. Rodríguez-Lera

    2018-03-01

    Full Text Available In human–robot interaction situations, robot sensors collect huge amounts of data from the environment in order to characterize the situation. Some of the gathered data ought to be treated as private, such as medical data (i.e., medication guidelines) and personal and safety information (i.e., images of children, home habits, alarm codes, etc.). However, most robotic software development frameworks are not designed for securely managing this information. This paper analyzes the scenario of hardening one of the most widely used robotic middlewares, Robot Operating System (ROS). The study investigates a robot's performance when ciphering the messages interchanged between ROS nodes under the publish/subscribe paradigm. In particular, this research focuses on the nodes that manage cameras and LIDAR sensors, which are two of the most widespread sensing solutions in mobile robotics, and analyzes the collateral effects of different computing capabilities and encryption algorithms (3DES, AES, and Blowfish) on robot performance. The findings present empirical evidence that simple encryption algorithms are lightweight enough to provide cyber-security even in low-powered robots when carefully designed and implemented. Nevertheless, these techniques come with a number of serious drawbacks regarding robot autonomy and performance if they are applied randomly. To avoid these issues, we define a taxonomy that links the type of ROS message, computational units, and the encryption methods. As a result, we present a model to select the optimal options for hardening a mobile robot using ROS.
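
    The ciphering step the study benchmarks can be sketched as follows (a minimal illustration using AES via the `cryptography` package's Fernet recipe; topic handling and key distribution are assumptions, not the paper's code):

        from cryptography.fernet import Fernet   # AES-128-CBC + HMAC, authenticated

        key = Fernet.generate_key()              # shared out-of-band between nodes
        cipher = Fernet(key)

        def publish(payload: bytes) -> bytes:
            """Encrypt a serialized message (e.g. a camera frame) before publishing."""
            return cipher.encrypt(payload)

        def on_message(token: bytes) -> bytes:
            """Decrypt on the subscriber side."""
            return cipher.decrypt(token)

        frame = b"\x00" * 1024                   # stand-in for a serialized sensor message
        assert on_message(publish(frame)) == frame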

  1. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
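
    The angle-measurement core can be illustrated with a Hough-transform sketch (OpenCV; the thresholds and the synthetic dial below are assumptions, not the deployed system):

        import cv2
        import numpy as np

        def pointer_angle(gray):
            """Angle (rad) of the strongest straight line, i.e. the dial pointer."""
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
            if lines is None:
                raise ValueError("no pointer line found")
            _, theta = lines[0][0]               # strongest (rho, theta) pair
            return float(theta)

        # Synthetic check: two pointer positions; the angular change between
        # frames is the quantity proportional to the applied torque.
        img0 = np.zeros((200, 200), np.uint8); cv2.line(img0, (100, 100), (100, 20), 255, 2)
        img1 = np.zeros((200, 200), np.uint8); cv2.line(img1, (100, 100), (160, 40), 255, 2)
        print(np.degrees(pointer_angle(img1) - pointer_angle(img0)))   # ~45 degrees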

  2. An Intuitive Robot Teleoperation System for Nuclear Power Plant Decommissioning

    International Nuclear Information System (INIS)

    Lee, Chang-hyuk; Gu, Taehyeong; Lee, Kyung-min; Ye, Sung-Joon; Bang, Young-bong

    2017-01-01

    A robot teleoperation system consists of a master device and a slave robot. The master device senses human intention and delivers it to the slave robot. A haptic device and an exoskeletal robot are widely used as the master device. The slave robot carries out the operations delivered by the master device. It should guarantee enough degrees of freedom (DOF) to perform the instructed operations, as well as mobility in the environment inside the nuclear plant, such as flat surfaces and stairs. A 7-DOF robotic arm is commonly used as the slave device. This paper proposed a robot teleoperation system for nuclear power plant decommissioning and discussed an experiment performed to validate the system's usability. The operator, wearing the exoskeletal master device at the master site, controlled the slave robot, enabling it to move on a flat surface, climb/descend stairs, and move obstacles. The proposed robot teleoperation system can also be used in hazardous working environments where the use of such robots would be beneficial to human health and safety. In the future, research studies on protecting the slave robot against radiation damage should be conducted.

  3. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    Science.gov (United States)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.
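
    A colour-tag locator of this kind can be sketched with simple thresholding and contour extraction (OpenCV; the tag design and the HSV range below are assumptions, since the paper's printed pattern is not given in this record):

        import cv2
        import numpy as np

        def find_tag(frame_bgr):
            """Return the image-plane centre of the detected tag, or None."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))   # red-ish patch
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            return (x + w // 2, y + h // 2)

        frame = np.zeros((240, 320, 3), np.uint8)
        frame[60:100, 200:240] = (0, 0, 255)     # synthetic red tag (BGR)
        print(find_tag(frame))                   # -> (220, 80)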

  4. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    Science.gov (United States)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  5. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions. It incorrectly detected points on edges that are not on these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on edges, to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
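
    The Gradient Angle Variance idea can be sketched as follows (NumPy/SciPy; the window size, Prewitt gradients and magnitude gating are assumptions, and the paper's exact formulation may differ):

        import numpy as np
        from scipy import ndimage

        def gav(image, point, win=5):
            """Variance of the gradient angle around `point`, taken where the
            gradient magnitude is significant."""
            img = image.astype(float)
            gx = ndimage.prewitt(img, axis=1)    # horizontal Prewitt gradient
            gy = ndimage.prewitt(img, axis=0)    # vertical Prewitt gradient
            r, c = point
            h = win // 2
            wx = gx[r - h:r + h + 1, c - h:c + h + 1]
            wy = gy[r - h:r + h + 1, c - h:c + h + 1]
            mag = np.hypot(wx, wy)
            angles = np.arctan2(wy, wx)[mag > 0.1 * mag.max()]
            return float(np.var(angles))

        # A straight edge gives near-zero GAV; a corner mixes gradient
        # directions, which is how thresholding GAV prunes redundant edge points.
        edge = np.zeros((20, 20)); edge[10:, :] = 1.0
        corner = np.zeros((20, 20)); corner[10:, 10:] = 1.0
        print(gav(edge, (10, 10)) < gav(corner, (10, 10)))   # True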

  6. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  7. Modeling and Control of Underwater Robotic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Schjoelberg, I.

    1996-12-31

    This doctoral thesis describes modeling and control of underwater vehicle-manipulator systems. The thesis also presents a model and a control scheme for a system consisting of a surface vessel connected to an underwater robotic system by means of a slender marine structure. The equations of motion of the underwater vehicle and manipulator are described and the system kinematics and properties presented. Feedback linearization technique is applied to the system and evaluated through a simulation study. Passivity-based controllers for vehicle and manipulator control are presented. Stability of the closed loop system is proved and simulation results are given. The equation of motion for lateral motion of a cable/riser system connected to a surface vessel at the top end and to a thruster at the bottom end is described and stability analysis and simulations are presented. The equations of motion in 3 degrees of freedom of the cable/riser, surface vessel and robotic system are given. Stability analysis of the total system with PD-controllers is presented. 47 refs., 32 figs., 7 tabs.

  8. Visions of sustainable urban energy systems. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Pietzsch, Ursula [HFT Stuttgart (Germany). zafh.net - Centre of Applied Research - Sustainable Energy Technology; Mikosch, Milena [Steinbeis-Zentrum, Stuttgart (Germany). Europaeischer Technologietransfer; Liesner, Lisa (eds.)

    2010-09-15

    At the Polycity final conference, held from 15th to 17th September 2010 in Stuttgart (Federal Republic of Germany), the following lectures were given: (1) Visions of sustainable urban energy system (Ursula Eicker); (2) Words of welcome (Tanja Goenner); (3) Zero-energy Europe - We are on our way (Jean-Marie Bemtgen); (4) Polycity - Energy networks in sustainable cities. An introduction (Ursula Pietzsch); (5) Energy efficient city - Successful examples in the European concerto initiative (Brigitte Bach); (6) Sustainable building and urban concepts in the Catalonian polycity project: contributions to the polycity final conference 2010 (Nuria Pedrals); (7) Energy efficient buildings and renewable supply within the German polycity project (Ursula Eicker); (8) Energy efficient buildings and cities in the US (Thomas Spiegehalter); (9) Energy efficient communities - First results from an IEA collaboration project (Reinhard Jank); (10) The European energy performance of buildings directive (EPBD) - Lessons learned (Eduardo Maldonado); (11) Passive house standard in Europe - State-of-the-art and challenges (Wolfgang Feist); (12) High efficiency non-residential buildings: Concepts, implementations and experiences from the UK (Levin Lomas); (13) This is how we can save our world (Franz Alt); (14) Green buildings and renewable heating and cooling concepts in China (Yanjun Dai); (15) Sustainable urban energy solutions for Asia (Brahmanand Mohanty); (16) Description of the 'Parc de l'Alba' polygeneration system: A large-scale trigeneration system with district heating within the Spanish polycity project (Francesc Figueras Bellot); (17) Improved building automation and control systems with hardware-in-the-loop solutions (Martin Becker); (18) The Italian polycity project area: Arquata (Luigi Fazari); (19) Photovoltaic system integration in rehabilitated urban structures: Experiences and performance results from the Italian polycity project in Turin (Franco

  9. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  10. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  11. Towards Coordination and Control of Multi-robot Systems

    DEFF Research Database (Denmark)

    Quottrup, Michael Melholt

    This thesis focuses on the control and coordination of mobile multi-robot systems (MRS). MRS can often deal with tasks that are difficult for a single robot to accomplish. One of the challenges is the need to control, coordinate and synchronize the operation of several robots to perform some specified task. This calls for new strategies and methods which allow the desired system behavior to be specified in a formal and succinct way. Two different frameworks for the coordination and control of MRS have been investigated. Framework I - A network of robots is modeled as a network of multi... a requirement specification in Computational Tree Logic (CTL) for a network of robots. The result is a set of motion plans for the robots which satisfy the specification. Framework II - A framework for controller synthesis for a single robot with respect to a requirement specification in Linear-time Temporal Logic.

  12. A New Cancer Radiotherapy System Using Multi Robotic Manipulators

    International Nuclear Information System (INIS)

    Kim, Seung Ho; Lee, Nam Ho; Lee, Byung Chul; Jeung, Kyung Min; Lee, Seong Uk; Bae, Yeong Geol; Na, Hyun Seok

    2013-01-01

    The CyberKnife system is state-of-the-art cancer treatment equipment that combines an image tracking technique, artificial intelligence software, robot technology, accelerator technology, and treatment simulation technology. The current CyberKnife system has significant shortcomings. The biggest problem is that it takes a long time to treat a tumor. A long treatment time puts stress on patients, makes them uncomfortable with the radiation, and makes it difficult to measure the exact radiation dose rate delivered to the tumor during the process. Linear accelerators for radiation treatment are dependent on imports and demand high maintenance costs, which raises the treatment cost and prevents the popularization of radiotherapy. To overcome the disadvantages of the existing CyberKnife, a radiation treatment robot system that applies several articulated robots is suggested. Essential element techniques for the new radiotherapy robot system are investigated, and some problems of similar existing systems are analyzed. This paper presents a general configuration of the new robotic radiation treatment system, including quantitative goals for the required techniques. It describes a new radiotherapy robot system that tracks the tumor in real time using multiple articulated robots. The existing CyberKnife system using a single robot arm has the disadvantages of a long radiotherapy time, a high medical fee, and inaccurate measurement of the radiotherapy dose, so a new radiotherapy robot system for tumors has been proposed to solve these problems. The technologies necessary to configure the new radiotherapy robot system have been identified, and quantitative targets for each technology have been established. Multiple robot arms are adopted to decrease the radiotherapy time. The results of this research are provided as requisite technology for a domestic radiotherapy system and are expected to be the foundation of new technology.

  13. A New Cancer Radiotherapy System Using Multi Robotic Manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Ho; Lee, Nam Ho; Lee, Byung Chul; Jeung, Kyung Min; Lee, Seong Uk; Bae, Yeong Geol; Na, Hyun Seok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The CyberKnife system is state-of-the-art cancer treatment equipment that combines an image tracking technique, artificial intelligence software, robot technology, accelerator technology, and treatment simulation technology. The current CyberKnife system has significant shortcomings. The biggest problem is that it takes a long time to treat a tumor. A long treatment time puts stress on patients, makes them uncomfortable with the radiation, and makes it difficult to measure the exact radiation dose rate delivered to the tumor during the process. Linear accelerators for radiation treatment are dependent on imports and demand high maintenance costs, which raises the treatment cost and prevents the popularization of radiotherapy. To overcome the disadvantages of the existing CyberKnife, a radiation treatment robot system that applies several articulated robots is suggested. Essential element techniques for the new radiotherapy robot system are investigated, and some problems of similar existing systems are analyzed. This paper presents a general configuration of the new robotic radiation treatment system, including quantitative goals for the required techniques. It describes a new radiotherapy robot system that tracks the tumor in real time using multiple articulated robots. The existing CyberKnife system using a single robot arm has the disadvantages of a long radiotherapy time, a high medical fee, and inaccurate measurement of the radiotherapy dose, so a new radiotherapy robot system for tumors has been proposed to solve these problems. The technologies necessary to configure the new radiotherapy robot system have been identified, and quantitative targets for each technology have been established. Multiple robot arms are adopted to decrease the radiotherapy time. The results of this research are provided as requisite technology for a domestic radiotherapy system and are expected to be the foundation of new technology.

  14. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping for mobile robots with active vision. The novel concept is implemented in a humanoid robot navigation scenario in which the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  15. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration that arises during the motion of a flexible robot, or under external disturbance, owing to its structural features and material properties, should be suppressed, since it may affect positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of the method.
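
    The estimation step can be reproduced in miniature with SciPy's STFT (a fixed window length here, whereas the paper adapts it online; the signal is simulated, not camera-derived):

        import numpy as np
        from scipy.signal import stft

        fs = 200.0                               # Hz, assumed sampling rate
        t = np.arange(0, 5, 1 / fs)
        sig = np.exp(-0.3 * t) * np.sin(2 * np.pi * 4.0 * t)   # decaying 4 Hz mode

        f, tau, Z = stft(sig, fs=fs, nperseg=128)
        dominant = f[np.abs(Z).argmax(axis=0)]   # dominant frequency per time slice
        print(dominant[:5])                      # ~4 Hz, up to the bin resolution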

  16. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration that arises during the motion of a flexible robot, or under external disturbance, owing to its structural features and material properties, should be suppressed, since it may affect positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of the method.

  17. Towards Safe Robotic Surgical Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2015-01-01

    This work designs a controller for motion compensation in beating-heart surgery and proves that it is safe, i.e., the surgical tool is kept within an allowable distance and orientation of the heart. The problem is solved by simultaneously finding a control law and a barrier function. The motion compensation system is simulated from several initial conditions to demonstrate that the designed control system is safe for every admissible initial condition.
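
    As a hedged illustration, safety of such a closed loop is commonly certified with a barrier (invariance) condition of the following standard form; the record does not give the authors' exact construction:

        \[
          \mathcal{S} = \{\, x : B(x) \ge 0 \,\}, \qquad
          \dot{B}(x) = \nabla B(x)^{\top} f\bigl(x, u(x)\bigr) \ge -\alpha\bigl(B(x)\bigr),
        \]

    where \(\alpha\) is a class-\(\mathcal{K}\) function. If the inequality holds, every trajectory starting in the safe set (tool within the allowable distance and orientation of the heart) remains in it, which is the invariance property sought when the control law u and the barrier function B are synthesized together.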

  18. The SEP "Robot": A Valid Virtual Reality Robotic Simulator for the Da Vinci Surgical System?

    NARCIS (Netherlands)

    van der Meijden, O. A. J.; Broeders, I. A. M. J.; Schijven, M. P.

    2010-01-01

    The aim of the study was to determine if the concept of face and construct validity may apply to the SurgicalSim Educational Platform (SEP) "robot" simulator. The SEP robot simulator is a virtual reality (VR) simulator aiming to train users on the Da Vinci Surgical System. To determine the SEP's

  19. 11th International Symposium on Distributed Autonomous Robotic Systems

    CERN Document Server

    Chirikjian, Gregory

    2014-01-01

    Distributed robotics is a rapidly growing and maturing interdisciplinary research area lying at the intersection of computer science, network science, control theory, and electrical and mechanical engineering. The goal of the Symposium on Distributed Autonomous Robotic Systems (DARS) is to exchange and stimulate research ideas to realize advanced distributed robotic systems. This volume of proceedings includes 31 original contributions presented at the 2012 International Symposium on Distributed Autonomous Robotic Systems (DARS 2012) held in November 2012 at the Johns Hopkins University in Baltimore, MD, USA. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems. The book is organized into five parts, representative of critical long-term and emerging research thrusts in the multi-robot com...

  20. Airborne Use of Night Vision Systems

    Science.gov (United States)

    Mepham, S.

    1990-04-01

    The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed-wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force, which are seen to be: operations in the NATO Central Region; a night as well as a day capability; low-level, high-speed penetration; attacking battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced with a first-pass attack. It is therefore most important not only that the pilot be able to fly at low level to the target, but also that he be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high-speed low-level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and in summer the situation is better, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24-hour capability. However, it has two main disadvantages. First, it is an active system, which means it can be jammed or homed in on, and is useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to other than a small number of aircraft.

  1. The development of robotic system for the nuclear power plants - A study on the manipulation of teleoperation system using redundant robot

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chung Oh; Cho, Hyung Seok; Jang, Pyung Hoon; Park, Ki Chul; Hyun, Jang Hwan; Kim, Joo Gon; Park, Young Joon; Hwang, Woong Tae; Jeon, Yong Soo; Lee, Joo Yeon; Ahn, Kyung Mo [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1996-07-01

    In this project, the following four sub-projects have been studied for use in nuclear power plants. 1) Development of a precision control method for hydraulic and pneumatic actuators: A fuzzy gain tuner for the pneumatic servo position control system with a state feedback controller was designed using expert knowledge. Through an experimental study, this control method was verified to obtain the optimal gain automatically. 2) Development of a universal master arm and force-reflecting teleoperation system: An autonomous telerobot system with a vision-based force reflection capability was developed. To effectively implement visual force feedback, three different control methods were also developed. 3) A study on the analysis and control of the redundant robot manipulator: An optimal joint path of the 8-DOF redundant KAEROT for the nozzle dam task was generated, and its effectiveness and safety were verified using a graphics/animation tool. The proposed dynamic control algorithm for the redundant robot was applied to an experiment with a planar 3-DOF redundant robot, showing good performance. 4) A study on the robot/user interface design: A final design set and its console table were developed, having metaphorical identity and a user-friendly interface, and a study mock-up was also developed to identify the possibilities in a clear form. 33 refs., 3 tabs., 11 figs. (author)

  2. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    Science.gov (United States)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

    Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work, but the systems themselves have never been human-rated, as all other NASA physical systems used in human space flight have been. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems, and on the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs, for surface operations as well as for in-flight monitoring and control, and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  3. Robot system for preparing lymphocyte chromosome

    International Nuclear Information System (INIS)

    Hayata, Isamu; Furukawa, Akira; Yamamoto, Mikio; Sato, Koki; Tabuchi, Hiroyoshi; Okabe, Nobuo.

    1992-01-01

    Towards the automatization of the scoring of chromosome aberrations in radiation dosimetry, with emphasis on the improvement of biological preparations, the conventional culture and harvesting method was modified. Based on this modified method, a culture and harvest robotic system (CHROSY) for preparing lymphocyte chromosomes was developed. The targeted points of the modification are as follows. 1) Starting culture with purified lymphocytes in a fixed cell number. 2) Avoiding the loss of cells when changing the liquids following centrifugation. 3) Keeping the quantity of the liquids applied to the treatment of cells fixed. 4) Building a system even a beginner can handle. The system features are as follows. 1) Operation system: a handling robot having 5 degrees of freedom; a rotator incubator with an automatic sliding door; units for setting and removing pipette tips; a centrifuge equipped with a position adjuster and an automatic sliding door; two aluminium block baths; two nozzles serving as pipettes and aspirators connected to air pumps; a capping unit with a nozzle for CO2 gas; a compressor; and an air-operated syringe. 2) Control system: NEC PC-9801RX21 with CRT, and programs written in Basic and Assembly languages on MS-DOS. It took this system 2 hours and 25 minutes to harvest 2 cultures. A fairly good chromosome slide was made from the sample harvested automatically by CHROSY. (author)

  4. Vision system for diagnostic task | Merad | Global Journal of Pure ...

    African Journals Online (AJOL)

    Due to degraded environmental conditions, direct measurements are not possible. ... Degraded conditions: vibrations, water and metal chip projections, ... Before tooling, the vision system has to answer: “is it the right piece at the right place?

  5. RoboSmith: Wireless Networked Architecture for Multiagent Robotic System

    Directory of Open Access Journals (Sweden)

    Florin Moldoveanu

    2010-11-01

    Full Text Available This paper presents an architecture for a flexible mini robot for a multiagent robotic system. In a multiagent system, the value of an individual agent is negligible since the goal of the system is essential. Thus, the agents (robots) need to be small, low cost, and cooperative. The RoboSmith robots are designed based on these conditions. The proposed architecture divides a robot into functional modules such as locomotion, control, sensors, communication, and actuation. Any mobile robot can be constructed by combining these functional modules for a specific application. Embedded software with dynamic task uploading and multi-tasking abilities was developed in order to create a better interface between the robots and the command center, and among the robots themselves. Dynamic task uploading allows the robots to change their behaviors at runtime. The flexibility of the robots is given by the fact that they can work in a multiagent system, in master-slave mode, or in hybrid mode, can be equipped with different modules, and can possibly be used in other applications such as mobile sensor networks, remote sensing, and plant monitoring.

  6. Aerial robotic data acquisition system

    International Nuclear Information System (INIS)

    Hofstetter, K.J.; Hayes, D.W.; Pendergast, M.M.

    1995-01-01

    A small unmanned aerial vehicle (UAV) equipped with sensors for physical and chemical measurements of remote environments is described. A miniature helicopter airframe is used as a platform for sensor testing and development. The sensor output is integrated with the flight control system for real-time, interactive data acquisition and analysis. Pre-programmed flight missions will be flown with several sensors to demonstrate the cost-effective surveillance capabilities of this new technology. (author) 10 refs

  7. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  8. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.
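
    The focal-plane measurement that aids the inertial filter can be sketched as a pinhole-projection residual (a minimal sketch; the simple model and all numbers below are illustrative assumptions):

        import numpy as np

        def focal_plane_residual(p_landmark_w, p_cam_w, R_cw, K, z_measured):
            """Pixel innovation between a measured and a predicted landmark."""
            p_c = R_cw @ (p_landmark_w - p_cam_w)   # landmark in the camera frame
            u = K @ (p_c / p_c[2])                  # pinhole projection
            return z_measured - u[:2]               # fed to the navigation filter

        K = np.array([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])             # assumed camera intrinsics
        r = focal_plane_residual(np.array([0.3, 0.1, 5.0]),   # landmark (world)
                                 np.zeros(3), np.eye(3),      # camera pose
                                 K, np.array([351.0, 249.0])) # measured pixel
        print(r)                                    # -> [ 1. -1.]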

  9. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  10. Research on wheelchair robot control system based on EOG

    Science.gov (United States)

    Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo

    2018-04-01

    The paper describes an intelligent wheelchair control system based on EOG which can help disabled people improve their living ability. The system can acquire the EOG signal from the user, detect the number of blinks and the direction of glances, and then send commands to the wheelchair robot via RS-232 to achieve control of the robot. The wheelchair robot control system based on EOG combines EOG signal processing with human-computer interaction technology, achieving the purpose of using conscious eye movements to control the wheelchair robot.
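
    The command path can be sketched as a mapping from detected eye events to one-byte RS-232 commands (pySerial; the port, baud rate and byte protocol are assumptions, not the paper's actual protocol):

        import serial

        COMMANDS = {"blink2": b"S",   # two blinks   -> stop
                    "left":   b"L",   # glance left  -> turn left
                    "right":  b"R",   # glance right -> turn right
                    "up":     b"F"}   # glance up    -> move forward

        def send_command(event, port="/dev/ttyUSB0", baud=9600):
            """Send the wheelchair command corresponding to one EOG event."""
            with serial.Serial(port, baud, timeout=1) as ser:
                ser.write(COMMANDS[event])

        # send_command("left")   # would steer the wheelchair robot left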

  11. A remote maintenance robot system for a pulsed nuclear reactor

    International Nuclear Information System (INIS)

    Thunborg, S.

    1987-01-01

    This paper presents a remote maintenance robot system for use in a hazardous environment. The system consists of turntable, robot and hoist subsystems which operate under the control of a supervisory computer to perform coordinated programmed maintenance operations on a pulsed nuclear reactor. The system is operational

  12. Line-feature-based calibration method of structured light plane parameters for robot hand-eye system

    Science.gov (United States)

    Qi, Yuhan; Jing, Fengshui; Tan, Min

    2013-03-01

    For monocular structured light vision measurement, it is essential to calibrate the structured light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of the structured light plane parameters for a robot hand-eye system is proposed. Structured light stripes are selected as the calibrating primitive elements, and the robot moves from one calibrating position to another under constraints so that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and the structured light plane can then be determined. As the robot's motion may affect the effectiveness of calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments are implemented, and the results reveal that the calibration accuracy can meet the precision requirement of robot thick-plate welding. Finally, analysis and discussion are provided to illustrate that the method is efficient and fit for industrial in-situ calibration.
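
    The geometric core is that two non-parallel stripe lines, each known in the camera frame, determine the light plane; a minimal sketch (NumPy; the variable names and numbers are illustrative):

        import numpy as np

        def plane_from_stripes(p1, d1, d2):
            """Plane through point p1 spanned by stripe directions d1 and d2,
            returned as (n, d) with the plane written n . x + d = 0."""
            n = np.cross(d1, d2)
            n = n / np.linalg.norm(n)     # unit normal of the light plane
            return n, -float(n @ p1)

        # Stripe lines measured at the two constrained robot poses:
        n, d = plane_from_stripes(np.array([0.0, 0.0, 1.0]),
                                  np.array([1.0, 0.0, 0.2]),
                                  np.array([0.0, 1.0, 0.1]))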

  13. BOA: Pipe asbestos insulation removal robot system

    International Nuclear Information System (INIS)

    Schempf, H.; Bares, J.; Schnorr, W.

    1995-01-01

    The BOA system is a mobile pipe-external robotic crawler used to remotely strip and bag asbestos-containing lagging and insulation materials (ACLIM) from various diameter pipes in (primarily) industrial installations. Steam and process lines within the DOE weapons complex warrant the use of a remote device due to the high labor costs and high level of radioactive contamination, making manual removal extremely costly and highly inefficient. Currently targeted facilities for demonstration and remediation are Fernald in Ohio and Oak Ridge in Tennessee

  14. Effects of realistic force feedback in a robotic assisted minimally invasive surgery system.

    Science.gov (United States)

    Moradi Dalvand, Mohsen; Shirinzadeh, Bijan; Nahavandi, Saeid; Smith, Julian

    2014-06-01

    Robotic assisted minimally invasive surgery systems not only have the advantages of traditional laparoscopic procedures but also restore the surgeon's hand-eye coordination and improve the surgeon's precision by filtering hand tremors. Unfortunately, these benefits have come at the expense of the surgeon's ability to feel. Several research efforts have already attempted to restore this feature and study the effects of force feedback in robotic systems, but the proposed methods and studies have some shortcomings. The main focus of this research is to overcome some of these limitations and to study the effects of force feedback in palpation in a more realistic fashion. A parallel robot assisted minimally invasive surgery system (PRAMiSS) with force feedback capabilities was employed to study the effects of realistic force feedback in the palpation of artificial tissue samples. PRAMiSS is capable of directly measuring the tip/tissue interaction forces at the surgery site. Four sets of experiments using only vision feedback, only force feedback, simultaneous force and vision feedback, and direct manipulation were conducted to evaluate the role of sensory feedback from sideways tip/tissue interaction forces, with a scale factor of 100%, in characterizing tissues of varying stiffness. Twenty human subjects were involved in the experiments for at least 1440 trials. Friedman and Wilcoxon signed-rank tests were employed to statistically analyze the experimental results. Providing realistic force feedback in robotic assisted surgery systems improves the quality of tissue characterization procedures. Force feedback capability also increases the certainty of characterizing soft tissues compared with direct palpation using the lateral sides of the index fingers. The force feedback capability can improve the quality of palpation and characterization of soft tissues of varying stiffness by restoring the sense of touch in robotic assisted minimally invasive surgery operations.
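
    The statistical step can be reproduced in miniature with SciPy (the scores below are made-up placeholders, not the study's measurements):

        from scipy import stats

        vision = [6.1, 5.8, 6.5, 5.9]        # per-subject scores, hypothetical
        force = [5.2, 5.0, 5.6, 5.1]
        both = [4.3, 4.5, 4.6, 4.2]          # simultaneous force and vision
        direct = [4.9, 4.6, 5.2, 4.8]

        chi2, p = stats.friedmanchisquare(vision, force, both, direct)
        w, p_pair = stats.wilcoxon(vision, both)   # pairwise follow-up
        print(f"Friedman p = {p:.3f}, Wilcoxon p = {p_pair:.3f}")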

  15. International Conference on Intelligent Robots and Systems - IROS 2011

    CERN Document Server

    Rosen, Jacob; Redundancy in Robot Manipulators and Multi-Robot Systems

    2013-01-01

    The trend in the evolution of robotic systems is that the number of degrees of freedom increases. This is visible both in robot manipulator design and in the shift of focus from single- to multi-robot systems. Following the principles of evolution in nature, one may infer that adding degrees of freedom to robot system designs is beneficial. However, since nature did not select snake-like bodies for all creatures, it is reasonable to expect the presence of a certain selection pressure on the number of degrees of freedom. Thus, understanding the costs and benefits of multiple degrees of freedom, especially those that create redundancy, is a fundamental problem in the field of robotics. This volume is mostly based on the works presented at the workshop on Redundancy in Robot Manipulators and Multi-Robot Systems at the IEEE/RSJ International Conference on Intelligent Robots and Systems - IROS 2011. The workshop was envisioned as a dialog between researchers from two separate, but obviously related, fields of robotics: on...

  16. Calibration of robotic drilling systems with a moving rail

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2014-12-01

    Full Text Available Industrial robots are widely used in aircraft assembly systems such as robotic drilling systems. It is necessary to expand a robot's working range with a moving rail. A method for improving the position accuracy of an automated assembly system with an industrial robot mounted on a moving rail is proposed. A multi-station method is used to control the robot in this study: the robot works only at stations, which are defined positions on the moving rail. The calibration of the robot system is composed of the calibration of the robot and the calibration of the stations. The calibration of the robot is based on error similarity and inverse distance weighted interpolation. The calibration of the stations is based on a magnetic strip and a magnetic sensor. Validation tests performed in this study showed that the accuracy of the robot system improved significantly with the proposed method: the absolute position errors were reduced by about 85%, to less than 0.3 mm, compared with a maximum of nearly 2 mm before calibration.
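
    The error-similarity idea behind the robot calibration can be sketched with inverse-distance weighting: the error at a new target is predicted from errors measured at nearby calibration points (the exponent and data below are assumptions):

        import numpy as np

        def idw_error(target, cal_points, cal_errors, power=2.0):
            """Interpolate the position error at `target` from calibration data."""
            d = np.linalg.norm(cal_points - target, axis=1)
            if np.any(d < 1e-9):              # exactly on a calibration point
                return cal_errors[d.argmin()]
            w = 1.0 / d ** power
            return (w[:, None] * cal_errors).sum(axis=0) / w.sum()

        cal_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        cal_errors = np.array([[0.4, 0.1, 0.0], [0.6, 0.2, 0.1], [0.5, 0.0, 0.2]])  # mm
        correction = -idw_error(np.array([0.4, 0.3, 0.0]), cal_points, cal_errors)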

  17. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  18. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, the mathematical model, and the parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications which will benefit from PSSV.

  19. A flexible, computer-integrated robotic transfer system

    International Nuclear Information System (INIS)

    Lewis, W.I. III; Taylor, R.M.

    1987-01-01

    This paper reviews a robotic system used to transport materials across a radiation control zone and into a row of shielded cells. The robot used is a five-axis GCA 600 industrial robot mounted on a 50-ft ESAB welding track. Custom software incorporates the track as the sixth axis of motion. An IBM-PC integrates robot control, force sensing, and the operator interface. Multiple end-effectors and a quick exchange mechanism are used to handle a variety of materials and tasks. Automatic error detection and recovery is a key aspect of this system

  20. Biologically Inspired Object Localization for a Modular Mobile Robotic System

    Directory of Open Access Journals (Sweden)

    Zlatogor Minchev

    2005-12-01

    Full Text Available The paper considers a general model of the antennae of real biological creatures, which is practically implemented and tested on a real element of a mobile modular robotic system, the robot MR1. The latter can be utilized in solving one of the most classical problems in robotics: object localization. The functionality of the presented sensor system is described in a new and original manner by utilizing the tool of Generalized Nets, a new formalism for the description, modelling and simulation of different objects from the Artificial Intelligence area, including robotics.

  1. Transformers: Shape-Changing Space Systems Built with Robotic Textiles

    Science.gov (United States)

    Stoica, Adrian

    2013-01-01

    Prior approaches to transformer-like robots had only very limited success. They suffer from a lack of reliability, an inability to integrate large surfaces, and only very modest changes in overall shape. Robots can now be built from two-dimensional (2D) layers of robotic fabric. These transformers, a new kind of robotic space system, are dramatically different from current systems in at least two ways. First, the entire transformer is built from a single, thin sheet: a flexible layer of a robotic fabric (ro-fabric), or robotic textile (ro-textile). Second, the ro-textile layer folds to a small volume and self-unfolds to adapt shape and function to mission phases.

  2. A SYSTEMIC VISION OF BIOLOGY: OVERCOMING LINEARITY

    Directory of Open Access Journals (Sweden)

    M. Mayer

    2005-07-01

    Full Text Available Many  authors have proposed  that contextualization of reality  is necessary  to teach  Biology, empha- sizing students´ social and  economic realities.   However, contextualization means  more than  this;  it is related  to working with  different kinds of phenomena  and/or objects  which enable  the  expression of scientific concepts.  Thus,  contextualization allows the integration of different contents.  Under this perspective,  the  objectives  of this  work were to articulate different  biology concepts  in order  to de- velop a systemic vision of biology; to establish  relationships with other areas of knowledge and to make concrete the  cell molecular  structure and organization as well as their  implications  on living beings´ environment, using  contextualization.  The  methodology  adopted  in this  work  was based  on three aspects:  interdisciplinarity, contextualization and development of competences,  using energy:  its flux and transformations as a thematic axis and  an approach  which allowed the  interconnection between different situations involving  these  concepts.   The  activities developed  were:  1.   dialectic exercise, involving a movement around  micro and macroscopic aspects,  by using questions  and activities,  sup- ported  by the use of alternative material  (as springs, candles on the energy, its forms, transformations and  implications  in the  biological way (microscopic  concepts;  2, Construction of molecular  models, approaching the concepts of atom,  chemical bonds and bond energy in molecules; 3. Observations de- veloped in Manguezal¨(mangrove swamp  ecosystem (Itapissuma, PE  were used to work macroscopic concepts  (as  diversity  and  classification  of plants  and  animals,  concerning  to  energy  flow through food chains and webs. A photograph register of all activities  along the course plus texts

  3. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  4. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
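
    The learn-then-inspect split that SMV describes can be sketched in a few lines. The sketch below only illustrates the pattern, not the SMV implementation: the class name, the per-feature z-score rule and the tolerance value are assumptions.

    import numpy as np

    class LearnInspectModel:
        """Two-stage inspection in miniature: learn feature statistics
        offline (LIF-like), then flag deviating parts online (OLI-like)."""

        def __init__(self, tolerance=3.0):
            self.tolerance = tolerance  # allowed deviation, in standard deviations
            self.mean = None
            self.std = None

        def learn(self, good_samples):
            # Stage 1: estimate per-feature statistics from known-good parts.
            X = np.asarray(good_samples, dtype=float)
            self.mean = X.mean(axis=0)
            self.std = X.std(axis=0) + 1e-9

        def inspect(self, features):
            # Stage 2: z-score test against the learned statistics.
            z = np.abs((np.asarray(features, dtype=float) - self.mean) / self.std)
            return bool(np.all(z < self.tolerance))  # True means the part passes

    model = LearnInspectModel()
    model.learn([[1.0, 5.2], [1.1, 5.0], [0.9, 5.1]])  # e.g. pad width and height
    print(model.inspect([1.05, 5.1]))   # True: close to the learned statistics
    print(model.inspect([2.50, 4.0]))   # False: far outside tolerance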

  5. Service Robotics in Healthcare: A Perspective for Information Systems Researchers?

    OpenAIRE

    Garmann-Johnsen, Niels Frederik; Mettler, Tobias; Sprenger, Michaela

    2014-01-01

    Recent advances in electronics and telecommunication have paved the way for service robots to enter the clinical world. While service robotics has long been a core research theme in computer science and other engineering-related fields, it has attracted little interest of Information Systems (IS) researchers so far. We argue that service robotics represents an interesting area of investigation, especially for healthcare, since current research lacks a thorough examination of socio-technical p...

  6. Exoskeletons, Robots and System Software: Tools for the Warfighter

    Science.gov (United States)

    2012-04-24

    Exoskeletons, Robots and System Software: Tools for the Warfighter? Paul Flanagan, Tuesday, April 24, 2012, 11:15 am - 12:00 pm. Emerging technologies such as exoskeletons, robots, drones, and the underlying software are and will change the face of the battlefield. Warfighters will... What is an exoskeleton? An exoskeleton is a wearable robot suit that

  7. Development of robotic mobile platform with the universal chassis system

    Science.gov (United States)

    Ryadchikov, I.; Nikulchev, E.; Sechenev, S.; Drobotenko, M.; Svidlov, A.; Volkodav, P.; Feshin, A.

    2018-02-01

    The problem of stabilizing the position of mobile devices is extremely relevant at the modern level of technology development. This includes the problems of stabilizing aircraft and stabilizing the pitching of ships. In the robotics and mechatronics laboratory of Kuban State University, a robot is being developed with additional internal degrees of freedom responsible for compensating deflections: a dynamic stabilization system.

  8. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Science.gov (United States)

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severe distorted landmark appearances viewed with omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.
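
    The core loop the abstract describes, propagating pose particles and reweighting them by how well a learned appearance model explains the current omnidirectional view, can be sketched generically. The sketch below is a plain particle filter with a placeholder Gaussian appearance likelihood; the paper's incremental probabilistic PCA model and all parameter values are replaced by illustrative stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)

    def appearance_likelihood(pose, observed_descriptor, landmark_model):
        # Placeholder for the paper's incrementally learned appearance model:
        # likelihood falls off with the distance between the descriptor
        # predicted for this pose and the descriptor actually observed.
        d = np.linalg.norm(landmark_model(pose) - observed_descriptor)
        return np.exp(-0.5 * d * d)

    def particle_filter_step(particles, weights, control, observation,
                             landmark_model, motion_noise=0.05):
        # 1. Motion update: propagate each particle through the motion model.
        particles = particles + control + rng.normal(0, motion_noise, particles.shape)
        # 2. Measurement update: reweight particles by appearance likelihood.
        weights = weights * np.array([appearance_likelihood(p, observation, landmark_model)
                                      for p in particles])
        weights = weights / weights.sum()
        # 3. Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    # Toy run: the "descriptor" of a pose is simply the pose itself.
    particles = rng.normal(0.0, 1.0, (100, 2))
    weights = np.full(100, 1.0 / 100)
    particles, weights = particle_filter_step(
        particles, weights, control=np.array([0.1, 0.0]),
        observation=np.array([0.1, 0.0]), landmark_model=lambda pose: pose)
    print(particles.mean(axis=0))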

  9. An Autonomous Robotic System for Mapping Weeds in Fields

    DEFF Research Database (Denmark)

    Hansen, Karl Damkjær; Garcia Ruiz, Francisco Jose; Kazmi, Wajahat

    2013-01-01

    The ASETA project develops theory and methods for robotic agricultural systems. In ASETA, unmanned aircraft and unmanned ground vehicles are used to automate the task of identifying and removing weeds in sugar beet fields. The framework for a working automatic robotic weeding system is presented...

  10. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available Visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.
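
    A minimal stand-in for the detection stage of such a pipeline is sketched below using OpenCV's stock frontal-face detector: a visible frontal face serves as a crude proxy for a viewer facing the screen. This is an illustration only; the paper's detection, tracking and attention-recognition algorithms are more elaborate, and the camera index is an assumption.

    import cv2

    # A stock frontal-face detector as a crude attention proxy: a viewer whose
    # frontal face is visible is assumed to be facing (attending to) the screen.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def count_attentive_viewers(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces)

    cap = cv2.VideoCapture(0)   # the TV-mounted camera (device index assumed)
    ok, frame = cap.read()
    if ok:
        print("attentive viewers:", count_attentive_viewers(frame))
    cap.release()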

  11. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  12. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    Science.gov (United States)

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Robots and lattice automata

    CERN Document Server

    Adamatzky, Andrew

    2015-01-01

    The book gives a comprehensive overview of the state-of-the-art research and engineering in the theory and application of Lattice Automata in the design and control of autonomous robots. Automata and robots share the same notional meaning. Automata (from the latinization of the Greek word “αυτόματον”), self-operating autonomous machines invented since ancient times, can easily be considered the first steps of robotic-like efforts. Automata are mathematical models of robots, and they are also integral parts of robotic control systems. A Lattice Automaton is a regular array or a collective of finite state machines, or automata. The automata update their states by the same rules, depending on the states of their immediate neighbours. In the context of this book, Lattice Automata are used in developing modular reconfigurable robotic systems, path planning and map exploration for robots, robot controllers, synchronisation of robot collectives, robot vision, and parallel robotic actuators. All chapters are...

  14. EMBEDDED CONTROL SYSTEM FOR MOBILE ROBOTS WITH DIFFERENTIAL DRIVE

    Directory of Open Access Journals (Sweden)

    Michal KOPČÍK

    2017-09-01

    Full Text Available This article deals with the design and implementation of a control system for mobile robots with differential drive using an embedded system. The designed embedded system consists of a single control board featuring an ARM-based microcontroller which controls the peripherals in real time and performs all low-level motion control. The designed embedded system can be easily expanded with additional sensors, actuators or control units to enhance the applicability of the mobile robot. It also features a built-in communication module, which can be used for data acquisition and control of the mobile robot. The control board was implemented on two different types of mobile robots with differential drive, one of which was wheeled and the other tracked. These mobile robots serve as testing platforms for Fault Detection and Isolation using hardware and analytical redundancy, with Multisensor Data Fusion based on Kalman filters.
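
    The low-level motion control such a board performs reduces, for a differential drive, to kinematics of the following form. The sketch is illustrative (the wheel base, speeds and time step are made-up values), not the article's firmware.

    import numpy as np

    def diff_drive_step(pose, v_left, v_right, wheel_base, dt):
        """Integrate differential-drive kinematics for one time step.
        pose = (x, y, heading); wheel rim speeds in m/s; wheel_base in m."""
        x, y, th = pose
        v = 0.5 * (v_left + v_right)           # forward velocity of the chassis
        w = (v_right - v_left) / wheel_base    # yaw rate
        return np.array([x + v * np.cos(th) * dt,
                         y + v * np.sin(th) * dt,
                         th + w * dt])

    pose = np.zeros(3)                          # start at the origin, facing +x
    for _ in range(100):                        # 1 s of a gentle left arc
        pose = diff_drive_step(pose, v_left=0.9, v_right=1.1,
                               wheel_base=0.3, dt=0.01)
    print(np.round(pose, 3))                    # x, y, heading after the arc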

  15. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    Science.gov (United States)

    Hanford, Scott D.

    object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.

  16. Integrating Soft Robotics with the Robot Operating System: A Hybrid Pick and Place Arm

    Directory of Open Access Journals (Sweden)

    Ross M. McKenzie

    2017-08-01

    Full Text Available Soft robotic systems present a variety of new opportunities for solving complex problems. The use of soft robotic grippers, for example, can reduce the complexity of tasks such as the grasping of irregular and delicate objects. Adoption of soft robotics by the informatics community and industry, however, has been slow, and this is, in part, due to the amount of hardware and software that must be developed from scratch for each use of soft system components. In this paper, we detail the design, fabrication, and validation of an open-source framework that we designed to lower the barrier to entry for integrating soft robotic subsystems. This framework is built on the Robot Operating System (ROS), and we use it to demonstrate a modular, soft-hard hybrid system which is capable of completing pick and place tasks. By lowering this barrier to entry through our open-sourced hardware and software, we hope that system designers and informatics researchers will find it easy to integrate soft components into their existing ROS-enabled robotic systems.
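
    In a ROS-based framework like this, a soft subsystem is typically exposed as a node publishing or subscribing on topics. The minimal ROS 1 node below commands a hypothetical soft-gripper pressure setpoint; the topic name, message type choice and units are assumptions for illustration, not the paper's actual interfaces.

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import Float64

    def command_gripper():
        # Minimal ROS 1 publisher: stream a pressure setpoint to a soft gripper.
        rospy.init_node("soft_gripper_commander")
        pub = rospy.Publisher("/soft_gripper/pressure_setpoint", Float64,
                              queue_size=10)
        rate = rospy.Rate(10)                  # 10 Hz command loop
        while not rospy.is_shutdown():
            pub.publish(Float64(20.0))         # e.g. 20 kPa for a light grasp
            rate.sleep()

    if __name__ == "__main__":
        command_gripper()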

  17. RASSOR - Regolith Advanced Surface Systems Operations Robot

    Science.gov (United States)

    Gill, Tracy R.; Mueller, Rob

    2015-01-01

    The Regolith Advanced Surface Systems Operations Robot (RASSOR) is a lightweight excavator for mining in reduced gravity. RASSOR addresses the need for a lightweight robot that is able to overcome excavation reaction forces while operating in reduced-gravity environments such as the moon or Mars. A nominal mission would send RASSOR to the moon to operate for five years, delivering regolith feedstock to a separate chemical plant, which extracts oxygen from the regolith using H2 reduction methods. RASSOR would make 35 trips of 20 kg loads every 24 hours. With four RASSORs operating at one time, the mission would achieve 10 tonnes of oxygen per year (8 t for rocket propellant and 2 t for life support). Accessing craters in space environments may be extremely hard and harsh due to volatile resources; survival is challenging. New technologies and methods are required. RASSOR is a product of KSC Swamp Works, which establishes rapid, innovative and cost-effective exploration mission solutions by leveraging partnerships across NASA, industry and academia.

  18. Controlling Underwater Robots with Electronic Nervous Systems

    Directory of Open Access Journals (Sweden)

    Joseph Ayers

    2010-01-01

    Full Text Available We are developing robot controllers based on biomimetic design principles. The goal is to realise the adaptive capabilities of the animal models in natural environments. We report feasibility studies of a hybrid architecture that instantiates a command and coordinating level with computed discrete-time map-based (DTM) neuronal networks and the central pattern generators with analogue VLSI (Very Large Scale Integration) electronic neuron (aVLSI) networks. DTM networks are realised using neurons based on a 1-D or 2-D map with two additional parameters that define silent, spiking and bursting regimes. Electronic neurons (ENs) based on Hindmarsh–Rose (HR) dynamics can be instantiated in analogue VLSI and exhibit similar behaviour to those based on discrete components. We have constructed locomotor central pattern generators (CPGs) with aVLSI networks that can be modulated to select different behaviours on the basis of selective command input. The two technologies can be fused by interfacing the signals from the DTM circuits directly to the aVLSI CPGs. Using DTMs, we have been able to simulate complex sensory fusion for rheotaxic behaviour based on both hydrodynamic and optical flow senses. We will illustrate aspects of controllers for ambulatory biomimetic robots. These studies indicate that it is feasible to fabricate an electronic nervous system controller integrating both aVLSI CPGs and layered DTM exteroceptive reflexes.
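
    A 2-D map-based neuron of the kind the DTM networks use can be iterated in a few lines. The sketch below follows the well-known Rulkov form of such maps; the parameter values are illustrative, chosen only to land the map in a spiking/bursting regime, and are not taken from the paper.

    def map_neuron_step(x, y, alpha=4.5, mu=0.001, sigma=0.14):
        """One iteration of a Rulkov-style 2-D map neuron.
        x is the fast (membrane-like) variable, y the slow variable;
        alpha and sigma select among silent, spiking and bursting regimes."""
        x_next = alpha / (1.0 + x * x) + y
        y_next = y - mu * (x + 1.0) + mu * sigma
        return x_next, y_next

    x, y = -1.0, -3.0
    trace = []
    for _ in range(5000):
        x, y = map_neuron_step(x, y)
        trace.append(x)
    spikes = sum(1 for v in trace if v > 1.0)   # crude spike-sample count
    print("samples above threshold:", spikes)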

  19. Folding System for the Clothes by a Robot and Tools

    OpenAIRE

    大澤, 文明; 関, 啓明; 神谷, 好承

    2004-01-01

    The work of a home robot includes laundering. The purpose of this study is to find a means of folding clothes and storing them in a drawer using a home robot. Because the shape of cloth tends to change in various ways depending on the situation, it is difficult for robot hands to fold clothes. In this paper, we propose a realistic folding system for clothes using a robot and tools. The function of one tool is folding the clothes in half by inserting the clothes using two plates. T...

  20. Laparoscopy-assisted Robotic Myomectomy Using the DA Vinci System

    Directory of Open Access Journals (Sweden)

    Shih-Peng Mao

    2007-06-01

    Conclusion: Minimally invasive surgery is the trend of the future. Robot-assisted laparoscopic surgery is a new technique for myomectomy. This robotic system provides a three-dimensional operative field and an easy-to-use control panel, which may be of great help when applying suturing techniques and may shorten the learning curve. More experience with and long-term follow-up of robotic surgery may be warranted to further validate the role of the robot-assisted approach in gynecologic surgery.

  1. Using insects to drive mobile robots - hybrid robots bridge the gap between biological and artificial systems.

    Science.gov (United States)

    Ando, Noriyasu; Kanzaki, Ryohei

    2017-09-01

    The use of mobile robots is an effective method of validating sensory-motor models of animals in a real environment. The well-identified insect sensory-motor systems have been the major targets for modeling. Furthermore, mobile robots implemented with such insect models attract engineers who aim to derive advantages from organisms. However, directly comparing the robots with real insects is still difficult, even if we successfully model the biological systems, because of the physical differences between them. We developed a hybrid robot to bridge the gap. This hybrid robot is an insect-controlled robot, in which a tethered male silkmoth (Bombyx mori) drives the robot in order to localize an odor source. This robot has the following three advantages: 1) from a biomimetic perspective, the robot enables us to evaluate the potential performance of future insect-mimetic robots; 2) from a biological perspective, the robot enables us to manipulate the closed loop of an onboard insect for further understanding of its sensory-motor system; and 3) the robot enables comparison with insect models as a reference biological system. In this paper, we review recent work regarding insect-controlled robots and discuss the significance for both engineering and biology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Toward robotic socially believable behaving systems

    CERN Document Server

    Jain, Lakhmi

    2016-01-01

    This volume is a collection of research studies on the modeling of emotions in complex autonomous systems. Several experts in the field report their efforts and review the literature in order to shed light on how the processes of encoding and decoding emotional states take place in humans, which physiological, physical, and psychological variables are involved, to invent new mathematical models and algorithms to describe them, and to motivate these investigations in the light of observable societal changes and needs, such as the aging population and the cost of healthcare services. The consequences are the implementation of emotionally and socially believable machines, acting as helpers in domestic spheres, where emotions drive behaviors and actions. The contents of the book are highly multidisciplinary, since the modeling of emotions in robotic socially believable systems requires a holistic perspective on topics coming from different research domains such as computer science, engineering, sociology...

  3. Direct methods for vision-based robot control : application and implementation

    NARCIS (Netherlands)

    Pieters, R.S.

    2013-01-01

    With the growing interest in integrating robotics into everyday life and industry, the requirements on the quality and quantity of applications grow equally fast. This trend is profoundly recognized in applications involving visual perception. Whereas visual sensing in home environments tend

  4. Working on the robot society. : Visions and insights from science about the relation technology and employment.

    NARCIS (Netherlands)

    van Est, R.; Kool, L.

    2015-01-01

    The report Working on the robot society sets out current scientific findings for the relationship between technology and employment. It looks at the future and describes the policy options. In so doing, the report provides a joint fund of knowledge for societal and political debate on how the

  5. Fast Segmentation of Colour Apple Image under All-Weather Natural Conditions for Vision Recognition of Picking Robots

    Directory of Open Access Journals (Sweden)

    Wei Ji

    2016-02-01

    Full Text Available In order to resolve the poor real-time performance of the normalized cut (Ncut) method in apple vision recognition for picking robots, a fast segmentation method for colour apple images based on the adaptive mean-shift and Ncut methods is proposed in this paper. Firstly, the traditional pixel-based Ncut method is changed into a region-based Ncut method by adaptive mean-shift initial segmentation. In this way, the number of peaks and edges in the image is dramatically reduced and the computation speed is improved. Secondly, the image is divided into regional maps by extracting the R-B colour feature, which not only reduces the quantity of regions but also, to some extent, overcomes the effect of illumination. On this basis, every region map is expressed by a region point, so an undirected graph of the R-B colour grey-level feature is attained. Finally, regarding the undirected graph as the input of Ncut, we construct the weight matrix W from region points and determine the number of clusters based on a decision-theoretic rough set. Adaptive clustering segmentation can then be implemented by the Ncut algorithm. Experimental results show that the maximum segmentation error is 3% and the average recognition time is less than 0.7 s, which can meet the requirements of a real-time picking robot.
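
    The R-B colour feature at the heart of this pipeline is simple to compute. The sketch below extracts it and applies a plain global threshold in place of the paper's mean-shift plus Ncut clustering; the threshold value and toy image are illustrative assumptions.

    import numpy as np

    def r_minus_b(rgb_image):
        """Per-pixel R-B colour feature: red apples score high, green
        foliage and sky score low."""
        img = rgb_image.astype(np.int16)        # avoid uint8 underflow
        return img[..., 0] - img[..., 2]

    def segment_apples(rgb_image, threshold=40):
        # A global threshold stands in for the paper's region-level clustering.
        return r_minus_b(rgb_image) > threshold

    # Toy 2x2 image: one red-ish "apple" pixel, three greenish background pixels.
    img = np.array([[[200, 40, 30], [60, 90, 70]],
                    [[55, 80, 65], [50, 75, 60]]], dtype=np.uint8)
    print(segment_apples(img))
    # [[ True False]
    #  [False False]]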

  6. Artificial intelligence and information-control systems of robots - 87

    International Nuclear Information System (INIS)

    Plander, I.

    1987-01-01

    Independent research areas of artificial intelligence represent the following problems: automatic problem solving and new knowledge discovering, automatic program synthesis, natural language, picture and scene recognition and understanding, intelligent control systems of robots equipped with sensoric subsystems, dialogue of two knowledge systems, as well as studying and modelling higher artificial intelligence attributes, such as emotionality and personality. The 4th Conference draws on the problems treated at the preceding Conferences, and presents the most recent knowledge on the following topics: theoretical problems of artificial intelligence, knowledge-based systems, expert systems, perception and pattern recognition, robotics, intelligent computer-aided design, special-purpose computer systems for artificial intelligence and robotics

  7. Mobile Robot Designed with Autonomous Navigation System

    Science.gov (United States)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people also place more requirements on them. One is that a robot be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the ground and automatically find its charging station; another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme: SLAM, which can build an environment map in a totally unknown environment and, at the same time, locate the robot's own position, so as to achieve autonomous navigation.

  8. ASBESTOS PIPE-INSULATION REMOVAL ROBOT SYSTEM; FINAL

    International Nuclear Information System (INIS)

    Unknown

    2000-01-01

    This final topical report details the development, experimentation and field-testing activities for a robotic asbestos pipe-insulation removal system developed for use within the DOE's weapons complex as part of their ER and WM program, as well as in industrial abatement. The engineering development, regulatory compliance, cost-benefit and field-trial experiences gathered through this program are summarized.

  9. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    Science.gov (United States)

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  10. Application of da Vinci surgical robotic system in hepatobiliary surgery

    Directory of Open Access Journals (Sweden)

    Chen Jiahai

    2018-01-01

    Full Text Available The development of minimally invasive surgery has brought a revolutionary change to surgical techniques, and endoscopic surgical robots, especially the Da Vinci robotic surgical system, have further broadened the scope of minimally invasive surgery, being applied in a variety of surgical fields including hepatobiliary surgery. Today, the application of the Da Vinci surgical robot can cover most operations in hepatobiliary surgery, and it has proved to be safe and practical. What is more, many clinical studies in recent years have shown that the Da Vinci surgical system is superior to traditional laparoscopy. This paper summarizes the advantages and disadvantages of the Da Vinci surgical system and outlines the current status of, and future perspectives on, robot-assisted hepatobiliary surgery based on case reports of the application of the Da Vinci surgical robot in recent years.

  11. Beyond Speculative Robot Ethics

    NARCIS (Netherlands)

    Smits, M.; Van der Plas, A.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions of future robots. Surprisingly, our experiment also led to

  12. Robotic exploration of the solar system

    CERN Document Server

    Ulivi, Paolo

    In Robotic Exploration of the Solar System, Paolo Ulivi and David Harland provide a comprehensive account of the design and management of deep-space missions, the spacecraft involved - some flown, others not - their instruments, and their scientific results. This third volume in the series covers launches in the period 1997 to 2003 and features: - a chapter entirely devoted to the Cassini-Huygens mission to Saturn; - coverage of planetary missions of the period, including the Deep Space 1 mission and the Stardust and Hayabusa sample returns from comets and asteroids; - extensive coverage of Mars exploration, the failed 1999 missions, Mars Odyssey, Mars Express, and the twin rovers Spirit and Opportunity. The story will continue in Part 4.

  13. Behaviour based Mobile Robot Navigation Technique using AI System: Experimental Investigation on Active Media Pioneer Robot

    Directory of Open Access Journals (Sweden)

    S. Parasuraman, V.Ganapathy

    2012-10-01

    Full Text Available A key issue in the research of autonomous robots is the design and development of the navigation technique that enables the robot to navigate in a real-world environment. In this research, the issues investigated and methodologies established include (a) design of the individual behaviors and behavior rule selection using an alpha-level fuzzy logic system; (b) design of the controller, which maps the sensor inputs to the motor outputs through a model-based Fuzzy Logic Inference System; and (c) formulation of the decision-making process using an alpha-level fuzzy logic system. The proposed method is applied to the Active Media Pioneer robot and the results are discussed and compared with the most accepted methods. This approach provides a formal methodology for representing and implementing the human expert's heuristic knowledge and perception-based action in mobile robot navigation. In this approach, the operational strategies of the human expert driver are transferred via fuzzy logic to the robot navigation in the form of a set of simple conditional statements composed of linguistic variables. Keywords: mobile robot, behavior-based control, fuzzy logic, alpha-level fuzzy logic, obstacle avoidance behavior and goal-seeking behavior
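
    The flavour of such a fuzzy behavior blend can be shown in a few lines. The sketch below fuses an obstacle-avoidance rule with a goal-seeking rule by weighted-average defuzzification; the membership breakpoints, gains and the fixed swerve direction are illustrative assumptions, not the paper's rule base.

    import numpy as np

    def ramp_up(x, a, b):
        # Shoulder membership function: 0 below a, 1 above b, linear in between.
        return float(np.clip((x - a) / (b - a), 0.0, 1.0))

    def fuzzy_steer(obstacle_dist, goal_bearing):
        """Blend two behaviors: avoid obstacles when near, seek the goal when
        the path is clear. Returns a steering rate in rad/s."""
        far = ramp_up(obstacle_dist, 0.5, 2.0)   # membership of "obstacle far"
        near = 1.0 - far                         # membership of "obstacle near"
        avoid_cmd = 0.8                          # swerve command (side fixed here)
        seek_cmd = 0.5 * goal_bearing            # proportional turn toward goal
        # Weighted-average defuzzification over the two rule outputs.
        return near * avoid_cmd + far * seek_cmd

    print(fuzzy_steer(obstacle_dist=0.3, goal_bearing=-0.4))  # avoidance dominates
    print(fuzzy_steer(obstacle_dist=3.0, goal_bearing=-0.4))  # goal seeking dominates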

  14. Towards an automated checked baggage inspection system augmented with robots

    Science.gov (United States)

    DeDonato, Matthew P.; Dimitrov, Velin; Padır, Taskin

    2014-05-01

    We present a novel system for enhancing the efficiency and accuracy of the checked baggage screening process at airports. The system requirements address the identification and retrieval of objects of interest that are prohibited in checked luggage. The automated testbed is comprised of a Baxter research robot designed by Rethink Robotics for luggage and object manipulation, and a down-looking overhead RGB-D sensor for inspection and detection. We discuss an overview of current system implementations, areas of opportunity for improvement, robot system integration challenges, details of the proposed software architecture, and experimental results from a case study for identifying various kinds of lighters in checked bags.

  15. Integrating Robot Task Planning into Off-Line Programming Systems

    DEFF Research Database (Denmark)

    Sun, Hongyan; Kroszynski, Uri

    1988-01-01

    The addition of robot task planning in off-line programming systems aims at improving the capability of current state-of-the-art commercially available off-line programming systems, by integrating modeling, task planning, programming and simulation together under one platform. This article proposes a system architecture for integrated robot task planning. It identifies and describes the components considered necessary for implementation. The focus is on the functionality of these elements as well as on the information flow. A pilot implementation of such an integrated system architecture for a robot assembly task is discussed.

  16. Safety assessment of a robotic system handling nuclear material

    International Nuclear Information System (INIS)

    Atcitty, C.B.; Robinson, D.G.

    1996-01-01

    This paper outlines the use of a Failure Modes and Effects Analysis for the safety assessment of a robotic system being developed at Sandia National Laboratories. The robotic system, the Weigh and Leak Check System, is to replace a manual process at the Department of Energy facility at Pantex by which nuclear material is inspected for weight and leakage. Failure Modes and Effects Analyses were completed for the robotic process to ensure that safety goals for the system had been met. These analyses showed that the risks to people and to the internal and external environment were acceptable.

  17. A Motion System for Social and Animated Robots

    Directory of Open Access Journals (Sweden)

    Jelle Saldien

    2014-05-01

    Full Text Available This paper presents an innovative motion system that is used to control the motions and animations of a social robot. The social robot Probo is used to study Human-Robot Interactions (HRI), with a special focus on Robot Assisted Therapy (RAT). When used for therapy it is important that a social robot is able to create an "illusion of life" so as to become a believable character that can communicate with humans. The design of the motion system in this paper is based on insights from the animation industry. It combines operator-controlled animations with low-level autonomous reactions such as attention and emotional state. The motion system has a Combination Engine, which combines motion commands that are triggered by a human operator with motions that originate from different units of the cognitive control architecture of the robot. This results in an interactive robot that seems alive and has a certain degree of "likeability". The Godspeed Questionnaire Series is used to evaluate the animacy and likeability of the robot in China, Romania and Belgium.

  18. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  19. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
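
    A "window-based operation" of the kind these modules accelerate is just a kernel swept over the image. The pure-Python reference below shows the class of computation (here a Sobel horizontal gradient); it illustrates the workload, not the FPGA implementation.

    import numpy as np

    def window_op(image, kernel):
        """Generic 3x3 window-based operator: at every pixel, a weighted sum
        of the 3x3 neighbourhood. This is the computation pattern that FPGA
        pipelines parallelize."""
        k = np.asarray(kernel, dtype=float)
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for y in range(h - 2):
            for x in range(w - 2):
                out[y, x] = np.sum(image[y:y + 3, x:x + 3] * k)
        return out

    sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    img = np.tile(np.arange(8.0), (8, 1))            # intensity ramp along x
    print(window_op(img, sobel_x)[0, 0])             # constant gradient: 8.0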

  20. Affordance estimation for vision-based object replacement on a humanoid robot

    DEFF Research Database (Denmark)

    Mustafa, Wail; Wächter, Mirko; Szedmak, Sandor

    2016-01-01

    In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation syste...

  1. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, WIlson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach using range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with a wall-following behavior to explore the environment, which enables it to trail object contours, while the fuzzy controller has the responsibility of providing commands for the correct execution of the robot's movements when facing the advers...

  2. The autonomous vision system on TeamSat

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Riis, Troels

    1999-01-01

    The second qualification flight of Ariane 5 blasted off from the European Space Port in French Guiana on October 30, 1997, carrying on board a small technology demonstration satellite called TeamSat. Several experiments were proposed by various universities and research institutions in Europe, and five of them were finally selected and integrated into TeamSat, namely FIPEX, VTS, YES, ODD and the Autonomous Vision System, AVS, a fully autonomous star tracker and vision system. This paper gives a short overview of the TeamSat satellite: design, implementation and mission objectives. AVS is described in more...

  3. An advanced rehabilitation robotic system for augmenting healthcare.

    Science.gov (United States)

    Hu, John; Lim, Yi-Je; Ding, Ye; Paluska, Daniel; Solochek, Aaron; Laffery, David; Bonato, Paolo; Marchessault, Ronald

    2011-01-01

    Emerging technologies such as rehabilitation robots (RehaBot) for retraining upper and lower limb functions have been shown to carry tremendous potential to improve rehabilitation outcomes. Hstar Technologies is developing a revolutionary rehabilitation robot system enhancing healthcare quality for patients with neurological and muscular injuries or functional impairments. RehaBot is designed as a safe and robust system that can be run at a rehabilitation hospital under direct monitoring and interactive supervisory control, and at a remote site via telepresence operation control. RehaBot has a wearable, exoskeleton-like robotic structure, which employs a unique robotic actuation: the Series Elastic Actuator. These electric actuators provide robotic structural compliance, safety, flexibility, and the strength required for upper-extremity dexterous manipulation rehabilitation training. RehaBot also features a novel non-treadmill paddle platform capable of haptic-feedback locomotion rehabilitation training. In this paper, we are concerned mainly with motor-incomplete patients and rehabilitation applications.

  4. Calibration technology in application of robot-laser scanning system

    Science.gov (United States)

    Ren, YongJie; Yin, ShiBin; Zhu, JiGui

    2012-11-01

    A system composed of a laser sensor and a 6-DOF industrial robot is proposed to obtain complete three-dimensional (3-D) information of an object surface. Suitable for different ways of combining the laser sensor and the robot, a new method to calibrate the position and pose between sensor and robot is presented. Using a standard sphere with known radius as a reference tool, the rotation and translation matrices between the laser sensor and robot are computed in two separate steps, so that many unstable factors introduced in conventional optimization methods can be avoided. The experimental results show that the accuracy of the proposed calibration method reaches 0.062 mm. The calibration method is also implemented in an automated robot scanning system to reconstruct a car door panel.
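
    When corresponding reference points (such as measured sphere centres) are available in both the sensor frame and the robot frame, the rotation and then the translation can be recovered in closed form. The SVD-based sketch below illustrates that kind of two-step solution; it is a generic Kabsch alignment with assumed toy data, not the paper's specific procedure.

    import numpy as np

    def fit_rigid_transform(P, Q):
        """Find R, t such that Q ≈ R @ P + t (Kabsch/SVD method).
        P: reference points in the sensor frame (N x 3).
        Q: the same points expressed in the robot frame (N x 3)."""
        P, Q = np.asarray(P, float), np.asarray(Q, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # rotation, reflection-safe
        t = cq - R @ cp                            # translation comes second
        return R, t

    # Toy check: recover a known 90-degree yaw plus an offset.
    P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    Q = P @ R_true.T + np.array([0.1, 0.2, 0.3])
    R, t = fit_rigid_transform(P, Q)
    print(np.allclose(R, R_true), np.round(t, 3))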

  5. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

    Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the walk of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both the set of 3D world points and their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
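
    The orthogonal-diagonalization step amounts to an eigen-decomposition of a point-cloud covariance. The sketch below extracts the principal spread direction of the visible landmarks, one plausible ingredient of choosing a favorable walking direction; it is a simplified reading with random toy data, not the paper's full criterion.

    import numpy as np

    def principal_direction(points_3d):
        """Direction of maximum spread of the landmark cloud, obtained by
        diagonalizing its covariance matrix (eigh returns eigenvalues in
        ascending order, so the last column is the principal axis)."""
        X = np.asarray(points_3d, dtype=float)
        C = np.cov(X.T)                 # 3x3 covariance of landmark positions
        _, V = np.linalg.eigh(C)
        return V[:, -1]                 # eigenvector of the largest eigenvalue

    # Toy cloud stretched along x: the principal direction is ~ the x axis.
    pts = np.random.default_rng(1).normal(size=(200, 3)) * [5.0, 1.0, 0.5]
    print(np.round(principal_direction(pts), 2))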

  6. An approach to software quality assurance for robotic inspection systems

    International Nuclear Information System (INIS)

    Kiebel, G.R.

    1993-10-01

    Software quality assurance (SQA) for robotic systems used in nuclear waste applications is vital to ensure that the systems operate safely and reliably and pose a minimum risk to humans and the environment. This paper describes the SQA approach for the control and data acquisition system for a robotic system being developed for remote surveillance and inspection of underground storage tanks (UST) at the Hanford Site

  7. Vision and Task Assistance using Modular Wireless In Vivo Surgical Robots

    Science.gov (United States)

    Platt, Stephen R.; Hawks, Jeff A.; Rentschler, Mark E.

    2009-01-01

    Minimally invasive abdominal surgery (laparoscopy) results in superior patient outcomes compared to conventional open surgery. However, the difficulty of manipulating traditional laparoscopic tools from outside the body of the patient generally limits these benefits to patients undergoing relatively low complexity procedures. The use of tools that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Our previous work demonstrated that miniature mobile and fixed-based in vivo robots using tethers for power and data transmission can successfully operate within the abdominal cavity. This paper describes the development of a modular wireless mobile platform for in vivo sensing and manipulation applications. Design details and results of ex vivo and in vivo tests of robots with biopsy grasper, staple/clamp, video, and physiological sensor payloads are presented. These types of self-contained surgical devices are significantly more transportable and lower in cost than current robotic surgical assistants. They could ultimately be carried and deployed by non-medical personnel at the site of an injury to allow a remotely located surgeon to provide critical first response medical intervention irrespective of the location of the patient. PMID:19237337

  9. Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Lorino, P; Altwegg, J M

    1985-05-01

    This article, which is aimed at the general reader, examines the latest developments in, and the role of, modern robotics. The 7 main sections are sub-divided into 27 papers presented by 30 authors. The sections are as follows: 1) The role of robotics, 2) Robotics in the business world and what it can offer, 3) Study and development, 4) Utilisation, 5) Wages, 6) Conditions for success, and 7) Technological dynamics.

  10. An architecture for robotic system integration

    International Nuclear Information System (INIS)

    Butler, P.L.; Reister, D.B.; Gourley, C.S.; Thayer, S.M.

    1993-01-01

    An architecture has been developed to provide an object-oriented framework for the integration of multiple robotic subsystems into a single integrated system. By using an object-oriented approach, all subsystems can interface with each other while still being customizable for specific subsystem interface needs. The object-oriented framework allows the communications between subsystems to be hidden from the interface specification itself. Thus, system designers can concentrate on what the subsystems are to do, not how to communicate. This system has been developed for the Environmental Restoration and Waste Management Decontamination and Decommissioning Project at Oak Ridge National Laboratory. In this system, multiple subsystems are defined to separate the functional units of the integrated system. For example, a Human-Machine Interface (HMI) subsystem handles the high-level machine coordination and subsystem status display. The HMI also provides status-logging facilities and safety facilities for use by the remaining subsystems. Other subsystems have been developed to provide specific functionality, and many of these can be reused by other projects

  11. "Excuse me, where's the registration desk?" Report on Integrating Systems for the Robot Challenge AAAI 2002

    National Research Council Canada - National Science Library

    Perzanowski, Dennis; Schultz, Alan C; Adams, William; Bugajska, Magda; Abramson, M; MacMahon, M; Atrash, A; Coblenz, M

    2002-01-01

    ...; register for the conference; and then give a talk. Issues regarding human/robot interaction and interfaces, navigation, mobility, vision, to name but a few relevant technologies to achieve such a task, were put to the test...

  12. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop a concept for a new robot system, i.e., a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with two essential technologies developed for the system. One is an autonomous navigation program with a human intervention function, which is indispensable for human and robot collaboration. The other is a position estimation method using a gyroscope and TV images to make the estimation accuracy much higher for safe navigation. Feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)

  13. An off-line programming system for palletizing robot

    Directory of Open Access Journals (Sweden)

    Youdong Chen

    2016-09-01

    Full Text Available Off-line programming systems are essential tools for the effective use of palletizing robots. This article presents a dedicated off-line programming system for palletizing robots. According to users' practical requirements, there are many user-defined patterns that cannot easily be generated by commercial off-line robot programming systems. This study suggests a pattern generation method with which users can easily define their own patterns. The proposed method has been validated by simulation and experiment. The results attest to the effectiveness of the proposed pattern generation method.

  14. Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System

    Science.gov (United States)

    Moring, I.; Ailisto, H.; Heikkinen, T.; Kilpela, A.; Myllyla, R.; Pietikainen, M.

    1988-02-01

    In our paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields our 3-D vision system was developed for are geometric measurement of large objects and manipulator and robot control tasks. It also seems promising for automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we started a field-test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers, controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system, and serves as the user interface and postprocessing environment. Methods for segmenting the range image into a higher-level description have been developed. The description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are studied, too.
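
    The two computations the abstract mentions, turning a pulse's time-of-flight into range and turning range plus the two mirror angles into Cartesian coordinates, are compact enough to sketch. The axis conventions and example numbers below are illustrative assumptions, not the actual system's geometry.

    import math

    C = 299_792_458.0   # speed of light in m/s

    def tof_to_range(time_of_flight_s):
        # The pulse travels to the target and back, hence the factor of one half.
        return 0.5 * C * time_of_flight_s

    def scan_to_cartesian(r, azimuth_rad, elevation_rad):
        """Convert a range sample and the two mirror angles to x, y, z."""
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return x, y, z

    r = tof_to_range(66.7e-9)        # ~66.7 ns round trip -> about 10 m
    print(round(r, 2), [round(v, 2) for v in scan_to_cartesian(
        r, math.radians(10), math.radians(5))])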

  15. Development of a remote tank inspection robotic system

    International Nuclear Information System (INIS)

    Knape, B.P.; Bares, L.C.

    1990-01-01

    RedZone Robotics is currently developing a remote tank inspection (RTI) robotic system for Westinghouse Idaho Nuclear Company (WINCO). WINCO intends to use the RTI robotic system at the Idaho Chemical Processing Plant, a facility that contains a tank farm of several 1,135,500-L (300,000-gal), 15.2-m (50-ft)-diam, high-level liquid waste storage tanks. The primary purpose of the RTI robotic system is to inspect the interior of these tanks for corrosion that may have been caused by the combined effects of radiation, high temperature, and caustic chemicals present inside the tanks. The RTI robotic system features a vertical deployment unit, a robotic arm, and a remote control console and computer [located up to 30.5 m (100 ft) away from the tank site]. All actuators are high-torque, electric dc brush motors that are servo-controlled with absolute position feedback. The control system uses RedZone's standardized intelligent controller for enhanced telerobotics, which provides a high-speed, multitasking environment on a VME bus. Currently, the robot is controlled in a manual, job-button control mode; however, control capability is available to develop preprogrammed, automated modes of operation

  16. A bio-inspired electrocommunication system for small underwater robots.

    Science.gov (United States)

    Wang, Wei; Liu, Jindong; Xie, Guangming; Wen, Li; Zhang, Jianwei

    2017-03-29

    Weakly electric fishes (Gymnotid and Mormyrid) use an electric field to communicate efficiently (termed electrocommunication) in the turbid waters of confined spaces where other communication modalities fail. Inspired by this biological phenomenon, we design an artificial electrocommunication system for small underwater robots and explore the capabilities of such an underwater robotic communication system. An analytical model for electrocommunication is derived to predict the effect of key parameters, such as electrode distance and emitter current, on the communication performance. According to this model, a low-dissipation, small-sized electrocommunication system is proposed and integrated into a small robotic fish. We characterize the communication performance of the robot in still water, flowing water, water with obstacles and natural water conditions. The results show that the underwater robots are able to communicate electrically at a speed of around 1 kbaud within about 3 m with low power consumption (less than 1 W). In addition, we demonstrate that two leader-follower robots successfully achieve motion synchronization through electrocommunication in three-dimensional underwater space, indicating that this bio-inspired electrocommunication system is a promising setup for the interaction of small underwater robots.
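
    To see how electrode distance and emitter current trade off against range, a textbook current-dipole model is enough for a first estimate. The sketch below uses the on-axis far-field expression |E| = I*d / (2*pi*sigma*r^3) for a current dipole in a conductive medium; the choice of formula and the freshwater conductivity value are assumptions standing in for the paper's analytical model.

    import math

    def dipole_field(current_a, electrode_dist_m, r_m, sigma_s_per_m=0.02):
        """On-axis field magnitude of a current dipole in a conductive medium.
        current_a: emitter current (A); electrode_dist_m: electrode spacing (m);
        r_m: receiver distance (m); sigma: water conductivity (S/m, assumed)."""
        return current_a * electrode_dist_m / (2 * math.pi * sigma_s_per_m * r_m ** 3)

    # Field strength at 3 m for growing electrode spacing: it scales with d.
    for d in (0.05, 0.10, 0.20):
        print(f"d = {d:.2f} m -> |E| = {dipole_field(0.01, d, 3.0):.2e} V/m")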

  17. System and method for seamless task-directed autonomy for robots

    Science.gov (United States)

    Nielsen, Curtis; Bruemmer, David; Few, Douglas; Walton, Miles

    2012-09-18

    Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
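
    The distance-dependent choice of autonomy level that the control intermediary makes can be caricatured in a few lines. The thresholds, level names and 2D poses below are illustrative assumptions, not the patented method's actual logic.

    def autonomy_level(task_pos, robot_pos, near=1.0, far=10.0):
        """Map the task-to-robot distance to a task-oriented autonomy level:
        close tasks get direct operator guidance (low robot initiative),
        distant tasks let the robot plan for itself (high initiative)."""
        dx = task_pos[0] - robot_pos[0]
        dy = task_pos[1] - robot_pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < near:
            return "low"      # operator directly guides the robot
        if dist > far:
            return "high"     # robot determines its own plan to the task
        return "shared"       # blended initiative in between

    print(autonomy_level(task_pos=(12.0, 5.0), robot_pos=(0.0, 0.0)))  # "high"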

  18. Autonomous mobile robotic system for supporting counterterrorist and surveillance operations

    Science.gov (United States)

    Adamczyk, Marek; Bulandra, Kazimierz; Moczulski, Wojciech

    2017-10-01

    Contemporary research on mobile robots concerns applications to counterterrorist and surveillance operations. The goal is to develop systems that are capable of supporting the police and special forces by carrying out such operations. The paper deals with a dedicated robotic system for the surveillance of large facilities such as airports, factories, military bases, and many others. The goal is to trace unauthorised persons who try to enter the guarded area, document the intrusion and report it to the surveillance centre, then warn the intruder with sound messages and, if necessary, subdue him/her by stunning with a high-power acoustic effect. The system consists of several parts. An armoured four-wheeled robot provides the required mobility of the system. The robot is equipped with a set of sensors including a 3D mapping system, IR and video cameras, and microphones. It communicates with the central control station (CCS) by means of a wideband, wireless, encrypted system. The control system of the robot can operate autonomously or under remote control. In the autonomous mode the robot follows the path planned by the CCS. Once an intruder has been detected, the robot can adapt its plan to track him/her. Furthermore, special procedures for treatment of the intruder are applied, including warning about the breach of the border of the protected area and incapacitation by an appropriately selected, very loud sound until a patrol of guards arrives. If it gets stuck, the robot can contact the operator, who can remotely solve the problem the robot is faced with.

  19. Distributed Autonomous Robotic Systems : the 12th International Symposium

    CERN Document Server

    Cho, Young-Jo

    2016-01-01

    This volume of proceedings includes 32 original contributions presented at the 12th International Symposium on Distributed Autonomous Robotic Systems (DARS 2014), held in November 2014. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems.

  20. Design on a Composite Mobile System for Exploration Robot

    OpenAIRE

    Shang, Weiyan; Yang, Canjun; Liu, Yunping; Wang, Junming

    2016-01-01

    In order to accomplish exploration missions in complex environments, a new type of robot has been designed. By analyzing the characteristics of typical moving systems, a new mobile system, named the wheel-tracked moving system (WTMS), is presented. Then, by virtual prototype simulation, the new system's ability to adapt to complex environments has been verified. As the curve of centroid acceleration changes with large amplitude in this simulation, the ride performance of this robot has been st...

  1. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems, comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems, comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  2. Assessment of Vision-Based Target Detection and Classification Solutions Using an Indoor Aerial Robot

    Science.gov (United States)

    2014-09-01

    ... revolutions per minute; SIFT: Scale-Invariant Feature Transform; SURF: Speeded Up Robust Features; SWAP: size, weight and power; TAMD: threat air and missile defense ... domain. The naming convention for all functions within this domain is the prefix "plan_". Logic: Logic acts as a switch to enable and disable certain ... publication, in 2006 [44]. Some other feature detectors/descriptors available in computer vision are Speeded Up Robust Features (SURF) [4] and Scale-Invariant Feature Transform (SIFT).

  3. Visual perception system and method for a humanoid robot

    Science.gov (United States)

    Wells, James W. (Inventor); Mc Kay, Neil David (Inventor); Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints, each moveable using one or more actuators, and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts the exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
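    The adaptation rule itself is not given in the record; a minimal proportional exposure controller in the spirit of what is described could be sketched as below. The target brightness, gain, and exposure limits are assumed values, not from the patent.

```python
import numpy as np

def adapt_exposure(image, exposure_ms, target=110.0, gain=0.02,
                   lo=0.1, hi=50.0):
    """Nudge the exposure time toward a target mean brightness.

    image       : greyscale frame as a uint8 array
    exposure_ms : current exposure time in milliseconds
    Keeping the mean near `target` avoids the clipping that would
    destroy feature data under threshold lighting conditions.
    """
    err = target - float(np.mean(image))
    exposure_ms *= 1.0 + gain * np.sign(err) * min(abs(err) / target, 1.0)
    return float(np.clip(exposure_ms, lo, hi))
```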

  4. System Design and Locomotion of Superball, an Untethered Tensegrity Robot

    Science.gov (United States)

    Sabelhaus, Andrew P.; Bruce, Jonathan; Caluwaerts, Ken; Manovi, Pavlo; Firoozi, Roya Fallah; Dobi, Sarah; Agogino, Alice M.; Sunspiral, Vytas

    2015-01-01

    The Spherical Underactuated Planetary Exploration Robot ball (SUPERball) is an ongoing project within NASA Ames Research Center's Intelligent Robotics Group and the Dynamic Tensegrity Robotics Lab (DTRL). The current SUPERball is the first full prototype of this tensegrity robot platform, eventually destined for space exploration missions. This work, building on prior published discussions of individual components, presents the fully-constructed robot. Various design improvements are discussed, as well as testing results of the sensors and actuators that illustrate system performance. Basic low-level motor position controls are implemented and validated against sensor data, which show SUPERball to be uniquely suited for highly dynamic state trajectory tracking. Finally, SUPERball is shown in a simple example of locomotion. This implementation of a basic motion primitive shows SUPERball in untethered control.

  5. Interactive robot control system and method of use

    Science.gov (United States)

    Sanders, Adam M. (Inventor); Reiland, Matthew J. (Inventor); Abdallah, Muhammad E. (Inventor); Linn, Douglas Martin (Inventor); Platt, Robert (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. Parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within each sequence.

  6. A Novel Docking System for Modular Self-Reconfigurable Robots

    Directory of Open Access Journals (Sweden)

    Tan Zhang

    2017-10-01

    Full Text Available Existing self-reconfigurable robots achieve connections and disconnections by a separate drive of the docking system. In this paper, we present a new docking system with which the connections and disconnections are driven by locomotion actuators, without the need for a separate drive, which reduces the weight and the complexity of the modules. This self-reconfigurable robot consists of two types of fundamental modules, i.e., active and passive modules. By the docking system, two types of connections are formed with the fundamental modules, and the docking and undocking actions are achieved through simple control with less sensory feedback. This paper describes the design of the robotic modules, the docking system, the docking process, and the docking force analysis. An experiment is performed to demonstrate the self-reconfigurable robot with the docking system.

  7. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signatures is described. The vision instruments for food analysis as well as the datasets of the food items ... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm ... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairy, fruits...

  8. Modular robotic system for forensic investigation support

    Science.gov (United States)

    Kowalski, Grzegorz; Główka, Jakub; Maciaś, Mateusz; Puchalski, Sławomir

    2017-10-01

    Forensic investigation of a crime scene is an activity that requires not only knowledge of how to search for, collect, and process evidence. In some cases the area of operation might not be properly secured and can pose a threat to human health or life. Some devices or materials may be left, intentionally or not, to injure potential investigators. Besides conventional explosives, threats can take the form of CBRN materials, which not only have an immediate effect on exposed personnel but can contaminate other people, for example when transferred on clothes or unsecured equipment. In such cases a risk evaluation should be performed, which can lead to the conclusion that it is too dangerous for investigators to work. In that kind of situation, remote devices that are able to examine the crime scene and secure samples can be used. In the course of R&D activities, PIAP developed a system based on a small UGV capable of carrying out inspection of suspicious places and securing evidence when needed. The system consists of a remotely controlled mobile robot, its control console, and a set of various inspection and support tools that enable detection of CBRN threats as well as revelation, documentation and securing of evidence. This paper will present the main features of the system, such as mission adjustment possibilities and communication aspects, as well as examples of the forensic accessories.

  9. Ideas on a system design for end-user robots

    Science.gov (United States)

    Bonasso, R. P.; Slack, Marc G.

    1992-11-01

    Robots are being used successfully in factory automation; however, recently there has been some success in building robots which can operate in field environments, where the domain is less predictable. New perception and control techniques have been developed which allow a robot to accomplish its mission while dealing with natural changes in both land and underwater environments. Unfortunately, efforts in this area have resulted in many one-of-a-kind robots, limited to research laboratories or carefully delimited field task arenas. A user who would like to apply robotic technology to a particular field problem must basically start from scratch. The problem is that the robotic technology (i.e., the hardware and software) which might apply to the user's domain exists in a diverse array of formats and configurations. For end-user robots to become a reality, an effort to standardize some aspects of the robotic technology must be made, in much the same way that personal computer technology is becoming standardized. Presently, a person can buy a computer and then acquire hardware and software extensions which simply 'plug in' and provide the user with the required utility without the user having to understand the inner workings of the pieces of the system. This technology even employs standardized interface specifications so the user is presented with a familiar interaction paradigm. This paper outlines some system requirements (hardware and software) and a preliminary design for end-user robots for field environments, drawing parallels to the trends in the personal computer market. The general conclusion is that the appropriate components as well as an integrating architecture are already available, making development of out-of-the-box, turnkey robots for a certain range of commonly required tasks a potential reality.

  10. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  11. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4...

  12. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision based row detection system. A new approach for row recognition is presented which is based on grey-scale Hough transform on intelligently merged images resulting in a considerable improvement of the speed of image processing.
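    The record gives no implementation detail; the core step, a grey-scale Hough transform on a merged image, can be sketched with OpenCV as below. The merging here is a plain average; the paper's "intelligent" merging strategy is not public in this record, and the Canny and Hough thresholds are assumed values.

```python
import cv2
import numpy as np

def detect_rows(frames):
    """Find dominant line directions (crop rows) in merged greyscale frames."""
    merged = np.mean([f.astype(np.float32) for f in frames], axis=0)
    grey = cv2.normalize(merged, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(grey, 50, 150)
    # Standard Hough transform: 1 px / 1 degree resolution, vote threshold 120
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    return lines  # each entry is (rho, theta)
```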

  13. Development of a medical robot system for minimally invasive surgery.

    Science.gov (United States)

    Feng, Mei; Fu, Yili; Pan, Bo; Liu, Chang

    2012-03-01

    Robot-assisted systems have been widely used in minimally invasive surgery (MIS) practice, and with them the precision and accuracy of surgical procedures can be significantly improved. Promoting the development of robot technology in MIS will improve robot performance and help in tackling problems arising from complex surgical procedures. A medical robot system with a new mechanism for MIS was proposed to achieve a two-dimensional (2D) remote centre of motion (RCM). An improved surgical instrument was designed to enhance manipulability and eliminate the coupling motion between the wrist and the grippers. The control subsystem adopted a master-slave control mode, upon which a new method for the inverse kinematics solution, using error compensation through repetitive feedback, can be based. A unique solution with less computation and satisfactory accuracy was also obtained. Tremor filtration and trajectory planning were also addressed with regard to the smoothness of the surgical instrument's movement. The robot system was tested on pigs weighing 30-45 kg. The experimental results show that the robot can successfully complete a cholecystectomy and meet the demands of MIS. The results of the animal experiments were excellent, indicating a promising clinical application of the robot with high manipulability. Copyright © 2011 John Wiley & Sons, Ltd.
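    The paper's repetitive-feedback solver is described only in outline; a generic damped-Jacobian iteration that feeds the residual error back on each cycle conveys the idea. All symbols, the damping value, and the tolerances below are illustrative, not the authors' formulation.

```python
import numpy as np

def ik_iterate(q, target, fk, jacobian, damping=0.05,
               tol=1e-5, max_iter=200):
    """Iterative inverse kinematics with error feedback.

    q        : initial joint vector
    target   : desired end-effector position
    fk       : forward kinematics, q -> position
    jacobian : q -> Jacobian matrix
    """
    for _ in range(max_iter):
        err = target - fk(q)               # residual fed back each cycle
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # Damped least squares keeps the step stable near singularities
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(len(err)),
                                   err)
        q = q + dq
    return q
```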

  14. Progress in EEG-Based Brain Robot Interaction Systems

    Directory of Open Access Journals (Sweden)

    Xiaoqian Mao

    2017-01-01

    Full Text Available The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram (EEG) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients with daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since control of robot systems via brainwaves must consider surrounding environment feedback in real time, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques.
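    As a deliberately simplified illustration of the preprocessing / feature extraction / classification chain the review surveys, consider the sketch below, assuming epoched motor-imagery EEG in a NumPy array. The band limits, sampling rate, and the choice of log-variance features with LDA are common textbook assumptions, not prescriptions from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpass(epochs, fs, lo=8.0, hi=30.0, order=4):
    """Band-pass each epoch (trials x channels x samples) to the mu/beta band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def log_variance_features(epochs):
    """Log of the channel variance: a classic motor-imagery feature."""
    return np.log(np.var(epochs, axis=-1))

def train_decoder(epochs, labels, fs=250):
    """Preprocess, extract features, and fit a linear classifier."""
    feats = log_variance_features(bandpass(epochs, fs))
    return LinearDiscriminantAnalysis().fit(feats, labels)
```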

  15. Walking Robots Dynamic Control Systems on an Uneven Terrain

    Directory of Open Access Journals (Sweden)

    MUNTEANU, M. S.

    2010-05-01

    Full Text Available The paper presents ZMP (zero moment point) dynamic control of walking robots, developing an open-architecture, real-time, multiprocessor control system, with a view to obtaining new capabilities for walking robots. The complexity of the movement mechanism of a walking robot was taken into account, it being a repetitive tilting process with numerous unstable movements which can lead to the robot overturning on uneven terrain. The control system architecture for dynamic robot walking is presented in correlation with the control strategy, which contains three main real-time control loops: robot balance control using sensorial feedback, walking diagram control with periodic changes depending on the sensorial information during each walk cycle, and predictable movement control based on quick decisions from previous experimental data. The results obtained through simulation and experiments show an increase in mobility and stability in real conditions, the achievement of high performance related to moving walking robots on terrains with a configuration as close as possible to real situations, and the development of new technological capabilities of walking robot control systems for slope movement and for walking by overtaking or going around obstacles.

  16. ROS (Robot Operating System) for Automotive

    OpenAIRE

    Bubeck, Alexander

    2014-01-01

    - Introduction to the Robot Operating System
    - Open source in the automotive industry
    - Application of ROS in the automotive industry
    - ROS navigation
    - ROS with real-time control
    - ROS in the embedded world
    - Outlook: ROS 2.0
    - Summary

  17. Human Robotic Systems (HRS): Robonaut 2 Technologies Element

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the Robonaut 2 (R2) Technology Project Element within Human Robotic Systems (HRS) is to develop advanced technologies for infusion into the Robonaut 2...

  18. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    Science.gov (United States)

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  19. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  20. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    Science.gov (United States)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  1. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    Science.gov (United States)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
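    The tracking code is not included in the record; the 3-D reconstruction step for a tracked hand point, given two calibrated cameras, is standard and can be sketched with OpenCV. The projection matrices are assumed to come from a prior stereo calibration.

```python
import cv2
import numpy as np

def hand_point_3d(P1, P2, pt_cam1, pt_cam2):
    """Triangulate one hand position from two calibrated camera views.

    P1, P2     : 3x4 camera projection matrices from calibration
    pt_cam1/2  : (x, y) pixel coordinates of the tracked hand
    """
    a = np.array(pt_cam1, dtype=np.float64).reshape(2, 1)
    b = np.array(pt_cam2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, a, b)   # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()           # metric 3-D coordinates
```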

  2. The SEP "robot": a valid virtual reality robotic simulator for the Da Vinci Surgical System?

    Science.gov (United States)

    van der Meijden, O A J; Broeders, I A M J; Schijven, M P

    2010-04-01

    The aim of the study was to determine whether the concepts of face and construct validity apply to the SurgicalSim Educational Platform (SEP) "robot" simulator. The SEP robot simulator is a virtual reality (VR) simulator aiming to train users on the Da Vinci Surgical System. To determine the SEP's face validity, two questionnaires were constructed. First, a questionnaire was sent to users of the Da Vinci system (reference group) to determine a focused user-group opinion and their recommendations concerning VR-based training applications for robotic surgery. Next, clinical specialists were requested to complete a pre-tested face validity questionnaire after performing a suturing task on the SEP robot simulator. To determine the SEP's construct validity, outcome parameters of the suturing task were compared, for example, relative to participants' endoscopic experience. Correlations between endoscopic experience and outcome parameters of the performed suturing task were tested for significance. On an ordinal five-point scale, the average score for the quality of the simulator software was 3.4; for its hardware, 3.0. Over 80% agreed that it is important to train surgeons and surgical trainees to use the Da Vinci. There was a significant but marginal difference in tool tip trajectory (p = 0.050) and a nonsignificant difference in total procedure time (p = 0.138) in favor of the experienced group. In conclusion, the results of this study reflect a uniformly positive opinion on using VR training in robotic surgery. Concepts of face and construct validity of the SEP robotic simulator are present; however, these are not strong and need to be improved before implementation of the SEP robotic simulator in its present state in a validated training curriculum can be successful.

  3. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    Full Text Available The new generation of ADAS systems based on cooperation between vehicles can offer serious prospects for road safety. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (the coordinates and speed) in addition to vision data collected from the embedded system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, or what we call a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
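    The record names the method but not its equations; a bare-bones particle filter step that fuses a GPS fix with a vision-based range to a neighbouring vehicle could look like the sketch below. The motion model and all noise levels are assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, gps_xy, gps_sigma=5.0,
                         vision_range=None, other_xy=None, vis_sigma=0.5):
    """One predict/update/resample cycle for a 2-D vehicle position."""
    # Predict: diffuse particles with simple process noise
    particles = particles + rng.normal(0.0, 1.0, particles.shape)
    # Update: weight by the GPS likelihood ...
    d_gps = np.linalg.norm(particles - gps_xy, axis=1)
    weights = weights * np.exp(-0.5 * (d_gps / gps_sigma) ** 2)
    # ... and, if available, by the vision-measured range to the other vehicle
    if vision_range is not None:
        d_vis = np.linalg.norm(particles - other_xy, axis=1)
        weights = weights * np.exp(
            -0.5 * ((d_vis - vision_range) / vis_sigma) ** 2)
    weights = weights / weights.sum()
    # Resample proportionally to the weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```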

  4. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. This system can orient itself to the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system ran well in testing, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.
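    The paper's detection chain is only summarized; a minimal HSV-threshold fire detector of the kind described might look like the sketch below. The threshold values are assumptions, not the authors' calibrated numbers.

```python
import cv2

def find_fire(frame_bgr):
    """Return the centroid of the largest flame-coloured region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed flame hues: roughly red-orange-yellow, bright and saturated
    mask = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"],   # x of the fire centroid
            moments["m01"] / moments["m00"])   # y, used to aim the hydrant
```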

  5. WALS: A sensor-based robotic system for handling nuclear materials

    International Nuclear Information System (INIS)

    Drotning, W.; Kimberly, H.; Wapman, W.

    1997-01-01

    An automated system is being developed for handling large payloads of radioactive nuclear materials in an analytical laboratory. The system uses machine vision and force/torque sensing to provide sensor-based control of the automation system, to enhance system safety, flexibility, and robustness, and to achieve easy remote operation. The automation system also controls the operation of the laboratory measurement systems and their coordination with the robotic system. Particular attention has been given to system design features and analytical methods that provide an enhanced level of operational safety. Independent mechanical gripper interlock and tool release mechanisms were designed to prevent payload mishandling. An extensive failure modes and effects analysis (FMEA) of the automation system was developed as a safety design analysis tool

  6. Dynamic electronic institutions in agent oriented cloud robotic systems.

    Science.gov (United States)

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  7. A Ground-Based Validation System of Teleoperation for a Space Robot

    Directory of Open Access Journals (Sweden)

    Xueqian Wang

    2012-10-01

    Full Text Available Teleoperation of space robots is very important for future on-orbit service. In order to ensure the task is accomplished successfully, ground experiments are required to verify the function and validity of the teleoperation system before a space robot is launched. In this paper, a ground-based validation subsystem is developed as a part of a teleoperation system. The subsystem is mainly composed of four parts: the input verification module, the onboard verification module, the dynamic and image workstation, and the communication simulator. The input verification module, consisting of the hardware and software of the master, is used to verify the input ability. The onboard verification module, consisting of the same hardware and software as the onboard processor, is used to verify the processor's computing ability and execution schedule. In addition, the dynamic and image workstation calculates the dynamic response of the space robot and target, and generates emulated camera images, including the hand-eye cameras, global-vision camera and rendezvous camera. The communication simulator provides realistic communication conditions, i.e., time delays and communication bandwidth. Lastly, we integrated a teleoperation system and conducted many experiments on the system. Experimental results show that the ground system is very useful for verifying teleoperation technology.

  8. SpaceWire-Based Control System Architecture for the Lightweight Advanced Robotic Arm Demonstrator (LARAD)

    Science.gov (United States)

    Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David

    2015-09-01

    The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.

  9. Intelligent vision system for autonomous vehicle operations

    Science.gov (United States)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  10. Robust adaptive optics systems for vision science

    Science.gov (United States)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality of the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror and a MEMS mirror with a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality that includes both photoreceptors and blood vessels.

  11. Motion and operation planning of robotic systems background and practical approaches

    CERN Document Server

    Gomez-Barvo, Fernando

    2015-01-01

    This book addresses the broad multi-disciplinary topic of robotics, and presents the basic techniques for motion and operation planning in robotics systems. Gathering contributions from experts in diverse and wide ranging fields, it offers an overview of the most recent and cutting-edge practical applications of these methodologies. It covers both theoretical and practical approaches, and elucidates the transition from theory to implementation. An extensive analysis is provided, including humanoids, manipulators, aerial robots and ground mobile robots. 'Motion and Operation Planning of Robotic Systems' addresses the following topics:
    - The theoretical background of robotics.
    - Application of motion planning techniques to manipulators, such as serial and parallel manipulators.
    - Mobile robots planning, including robotic applications related to aerial robots, large scale robots and traditional wheeled robots.
    - Motion planning for humanoid robots.
    An invaluable reference text for graduate students and researche...

  12. Mobile robots and remote systems in nuclear applications; Robots moviles y sistemas remotos en aplicaciones nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Segovia de los Rios, J. A.; Benitez R, J. S., E-mail: armando.segovia@inin.gob.m [ININ, Departamento de Automatizacion e Instrumentacion, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2010-07-01

    Traditionally, robots have been used in industry for spray painting, welding, machining, assembly, and the handling of materials. However, these devices have had a deep impact in the nuclear industry, where the first objective has been to reduce personnel exposure to, and contact with, radioactive materials. Aware of the utility of mobile robots and remote systems in nuclear facilities around the world, the Department of Automation and Instrumentation of the Instituto Nacional de Investigaciones Nucleares (ININ) has carried out research and applications that have facilitated the work of the researchers and professionals of the ININ involved in the handling of radioactive materials, such as the monorail system for the introduction of irradiated materials into an Iodine-131 production cell and the robot vehicle for radioactive materials transport TRASMAR (contraction of Transportacion Asistida de Materiales Radiactivos). (Author)

  13. Effective programming of energy consuming industrial robot systems

    International Nuclear Information System (INIS)

    Trnka, K.; Pinter, T.; Knazik, M.; Bozek, P.

    2012-01-01

    This paper discusses the problem of effective motion planning for industrial robots. The first part deals with current methods for off-line motion planning. The second part presents the work done with a simulation system with automatic trajectory generation and off-line programming capability [4]; a spot welding process is involved. The practical application of this step strongly depends on a method for robot path optimization with high accuracy, which transforms the path into a time- and energy-optimal robot program for the real world; this is discussed in the third part. (Authors)

  14. Automation and Robotics for Space-Based Systems, 1991

    Science.gov (United States)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  15. A new CT-aided robotic stereotaxis system

    International Nuclear Information System (INIS)

    Shao, H.M.; Chen, J.Y.; Truong, T.K.; Reed, I.S.

    1985-01-01

    In this paper, it is shown that a robot arm may be programmed to replace the stereotaxic frame for trajectory guidance. Since the robot is driven by a computer, it offers substantial flexibility, speed and accuracy advantages over the frame. It allows a surgeon to conveniently manipulate the probe trajectory in a variety of possible directions. As a consequence, even more sophisticated stereotaxic procedures are now possible. An experimental robotic stereotaxic system is now in operation. It is described in detail in this paper

  16. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians

  17. Laws on Robots, Laws by Robots, Laws in Robots : Regulating Robot Behaviour by Design

    NARCIS (Netherlands)

    Leenes, R.E.; Lucivero, F.

    2015-01-01

    Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s three laws of robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the three laws of robotics and their role in shaping visions of

  18. Development of 6-DOF painting robot control system

    Science.gov (United States)

    Huang, Junbiao; Liu, Jianqun; Gao, Weiqiang

    2017-01-01

    With the development of society, the spraying technology of the manufacturing industry in China has changed from manual operation to automatic spraying by 6-DOF (Degree Of Freedom) robots. Spray-painting robots can not only take over work that is harmful to human beings, but also improve production efficiency and save labor costs. The control system is the most critical part of a 6-DOF robot; however, there is still a lack of relevant technology research in China. It is therefore necessary to study a control system for 6-DOF spray-painting robots that is easy to operate and has high efficiency and stable performance. Using the Googol controller platform, this paper develops programs based on the Windows CE embedded system to control the robot to perform the painting work. Software development is the core of the robot control system, including the direct teaching module, playback module, motion control module, setting module, man-machine interface, alarm module, log module, etc. All the development work of the entire software system has been completed, and it has been verified that the software runs stably and efficiently.

  19. A robotic system for researching social integration in honeybees.

    Directory of Open Access Journals (Sweden)

    Karlo Griparić

    Full Text Available In this paper, we present a novel robotic system developed for researching collective social mechanisms in a biohybrid society of robots and honeybees. The potential for distributed coordination, as observed in nature in many different animal species, has caused an increased interest in collective behaviour research in recent years because of its applicability to a broad spectrum of technical systems requiring robust multi-agent control. One of the main problems is understanding the mechanisms driving the emergence of collective behaviour of social animals. With the aim of deepening the knowledge in this field, we have designed a multi-robot system capable of interacting with honeybees within an experimental arena. The final product, stationary autonomous robot units, designed by specifically considering the physical, sensorimotor and behavioral characteristics of the honeybees (lat. Apis mellifera), are equipped with sensing, actuating, computation, and communication capabilities that enable the measurement of relevant environmental states, such as honeybee presence, and adequate response to the measurements by generating heat, vibration and airflow. The coordination among robots in the developed system is established using distributed controllers. The cooperation between the two different types of collective systems is realized by means of a consensus algorithm, enabling the honeybees and the robots to achieve a common objective. Presented results, obtained within the ASSISIbf project, show successful cooperation, indicating its potential for future applications.

  20. A robotic system for researching social integration in honeybees.

    Science.gov (United States)

    Griparić, Karlo; Haus, Tomislav; Miklić, Damjan; Polić, Marsela; Bogdan, Stjepan

    2017-01-01

    In this paper, we present a novel robotic system developed for researching collective social mechanisms in a biohybrid society of robots and honeybees. The potential for distributed coordination, as observed in nature in many different animal species, has caused an increased interest in collective behaviour research in recent years because of its applicability to a broad spectrum of technical systems requiring robust multi-agent control. One of the main problems is understanding the mechanisms driving the emergence of collective behaviour of social animals. With the aim of deepening the knowledge in this field, we have designed a multi-robot system capable of interacting with honeybees within an experimental arena. The final product, stationary autonomous robot units, designed by specifically considering the physical, sensorimotor and behavioral characteristics of the honeybees (lat. Apis mellifera), are equipped with sensing, actuating, computation, and communication capabilities that enable the measurement of relevant environmental states, such as honeybee presence, and adequate response to the measurements by generating heat, vibration and airflow. The coordination among robots in the developed system is established using distributed controllers. The cooperation between the two different types of collective systems is realized by means of a consensus algorithm, enabling the honeybees and the robots to achieve a common objective. Presented results, obtained within the ASSISIbf project, show successful cooperation, indicating its potential for future applications.

  1. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    The corresponding cost function is taken to be $J(\mathbf{u}) = (\mathbf{x}_d - \mathbf{x})^T Q_x (\mathbf{x}_d - \mathbf{x}) + \mathbf{u}^T R \mathbf{u}$ (20), where $Q_x \in \mathbb{R}^{K n_x \times K n_x}$ is positive semi-definite, $R$ and $\mathbf{u}$ are as in (3), $\mathbf{x}_d$ is a sequence of desired states, $\mathbf{x}_d = (\mathbf{x}_{d,k+1}, \ldots, \mathbf{x}_{d,k+K})$, $\mathbf{x}$ is a sequence of predicted states, $\mathbf{x} = (\mathbf{x}_{k+1}, \ldots, \mathbf{x}_{k+K})$, and $K$ is the given prediction horizon. [Figure 5 of the source defines the robot velocities, $v_k$ and $\omega_k$, and the three pose variables, $x_k$, $y_k$ and $\theta_k$.]
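    As a check on the notation, the quadratic cost in (20) evaluates directly in a few lines; the dimensions and weight matrices below are illustrative, not the paper's values.

```python
import numpy as np

def nmpc_cost(x_pred, x_des, u, Qx, R):
    """J(u) = (x_d - x)^T Qx (x_d - x) + u^T R u over the horizon."""
    e = x_des - x_pred          # stacked state error, length K * n_x
    return float(e @ Qx @ e + u @ R @ u)

K, nx, nu = 10, 3, 2            # horizon, state and input sizes (assumed)
Qx = np.eye(K * nx)             # positive semi-definite state weights
R = 0.1 * np.eye(K * nu)        # input weights
J = nmpc_cost(np.zeros(K * nx), np.ones(K * nx), np.zeros(K * nu), Qx, R)
```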

  2. Integration and coordination in a cognitive vision system

    OpenAIRE

    Wrede, Sebastian; Hanheide, Marc; Wachsmuth, Sven; Sagerer, Gerhard

    2006-01-01

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information tha...

  3. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  4. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, the camera is mounted on and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is needed, and is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented in a real-time experiment.
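    The mapping step, converting PMD depth pixels to Cartesian points and marking grid cells, is described in prose only; a schematic version is sketched below. The pinhole intrinsics, camera height, cell size, and the simplified ground-plane handling are all assumptions, not the paper's pipeline.

```python
import numpy as np

def depth_to_grid(depth, fx, fy, cx, cy, cam_height=0.5,
                  cell=0.25, grid_size=80):
    """Project a ToF depth image into a 2-D occupancy grid (schematic).

    depth is an HxW array of ranges (m); fx, fy, cx, cy are pinhole
    intrinsics. Points clearly above the ground plane mark their cell
    as an obstacle; everything else is left traversable.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth                          # forward range
    x = (u - cx) * z / fx              # lateral offset
    y = (v - cy) * z / fy              # vertical offset, positive down
    obstacle = (z > 0) & (y < cam_height - 0.05)   # above ground level
    gi = (x[obstacle] / cell + grid_size / 2).astype(int)
    gj = (z[obstacle] / cell).astype(int)
    ok = (gi >= 0) & (gi < grid_size) & (gj >= 0) & (gj < grid_size)
    grid[gj[ok], gi[ok]] = 1           # 1 = occupied, 0 = traversable
    return grid
```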

  5. The Systemic Vision of the Educational Learning

    Science.gov (United States)

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. This expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  6. Development of a remote controlled robot system for monitoring nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Hee Gon; Song, Myung Jae; Shin, Hyun Bum; Oh, Gil Hwan; Maeng, Sung Jun; Choi, Byung Jae; Chang, Tae Woo [Korea Electric Power Research Institute, Taejon (Korea, Republic of); Lee, Bum Hee; Yoo, Jun; Choi, Myung Hwan; Go, Nak Yong; Lee, Kee Dong; Lee, Young Dae; Cho, Hae Kyeng; Nam, Yoon Suk [Electric and Science Research Center, (Korea, Republic of)

    1996-12-31

    It's the final report of the development of a remote controlled robot system for monitoring the facilities in a nuclear power plant and contains the following:
    - Studying the technologies in robot development and analysing the requirements and working environments
    - Development of the test mobile robot system
    - Development of the mobile robot
    - Development of the system mounted on the mobile robot
    - Development of the monitoring system
    - Mobile robot applications and future study
    In this study we built the basic technologies and schemes for future robot developments and applications. (author). 20 refs., figs.

  7. Soviet Robots in the Solar System Mission Technologies and Discoveries

    CERN Document Server

    Huntress, JR , Wesley T

    2011-01-01

    The Soviet robotic space exploration program began in a spirit of bold adventure and technical genius. It ended after the fall of the Soviet Union and the failure of its last mission to Mars in 1996. Soviet Robots in the Solar System chronicles the scientific and engineering accomplishments of this enterprise from its infancy to its demise. Each flight campaign is set into the context of national politics and international competition with the United States. Together with its many detailed illustrations and images, Soviet Robots in the Solar System:
    - presents the most detailed technical description of Soviet robotic space flights
    - provides a unique insight into programmatic, engineering, and scientific issues
    - covers mission objectives, spacecraft engineering, flight details, scientific payload and results
    - describes in technical depth Soviet lunar and planetary probes

  8. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. Support vector machine (SVM) is employed to recognize hand gestures, and the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
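    The exact SVM/PSO parameterization is not given in the record; a compact sketch that tunes (C, gamma) of an RBF SVM by particle swarm over cross-validation accuracy could read as below. The swarm size, bounds, and PSO coefficients are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def pso_tune_svm(X, y, n_particles=10, iters=20, w=0.7, c1=1.4, c2=1.4):
    """Tune log10(C) and log10(gamma) of an RBF SVM with a basic PSO."""
    def score(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pos = rng.uniform(-3, 3, (n_particles, 2))     # [log C, log gamma]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([score(p) for p in pos])
    g = pbest[np.argmax(pbest_val)]                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -3, 3)
        vals = np.array([score(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest[np.argmax(pbest_val)]
    return SVC(C=10 ** g[0], gamma=10 ** g[1]).fit(X, y)
```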

  9. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range readings to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the three-dimensional point cloud to a region of interest, which reduces the computational burden of the stereo process. Therefore, we call the proposed system a multi-sensors multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
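
    Two of the ideas in this abstract lend themselves to a compact sketch: inverse-variance fusion of depth estimates from several range sensors, and pruning a point cloud to a region of interest. The snippet below is a toy numpy illustration under the assumption of independent Gaussian depth errors; the variances and the ROI box are invented, not taken from the article.

```python
# Toy sketch of two ideas from the abstract: (1) fusing depth estimates from
# narrow-baseline stereo, wide-baseline stereo, and a laser scanner by
# inverse-variance weighting, and (2) pruning a point cloud to a region of
# interest to cut the stereo workload.
import numpy as np

def fuse_depths(depths, variances):
    """Inverse-variance (maximum-likelihood) fusion of independent estimates."""
    depths, variances = np.asarray(depths, float), np.asarray(variances, float)
    w = 1.0 / variances
    return (w * depths).sum(axis=0) / w.sum(axis=0)

def prune_roi(points, lo, hi):
    """Keep only 3-D points inside the axis-aligned box [lo, hi]."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Three depth estimates (metres) for the same scene point; laser most precise.
z = fuse_depths([2.10, 1.95, 2.02], [0.04, 0.01, 0.0025])
print(f"fused depth: {z:.3f} m")

cloud = np.random.default_rng(1).uniform(-5, 5, size=(1000, 3))
roi = prune_roi(cloud, lo=[-1, -1, 0], hi=[1, 1, 3])
print("points kept:", len(roi))
```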

  10. BellBot - A Hotel Assistant System Using Mobile Robots

    Directory of Open Access Journals (Sweden)

    Joaquín López

    2013-01-01

    Full Text Available There is a growing interest in applying intelligent technologies to assistant robots. These robots should have a number of characteristics such as autonomy, easy reconfiguration, robust perception systems and they should be oriented towards close interaction with humans. In this paper we present an automatic hotel assistant system based on a series of mobile platforms that interact with guests and service personnel to help them in different tasks. These tasks include bringing small items to customers, showing them different points of interest in the hotel, accompanying the guests to their rooms and providing them with general information. Each robot can also autonomously handle some daily scheduled tasks. Apart from user-initiated and scheduled tasks, the robots can also perform tasks based on events triggered by the building's automation system (BAS. The robots and the BAS are connected to a central server via a local area network. The system was developed with the Robotics Integrated Development Environment (RIDE and was tested intensively in different environments.
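
    The task model described here (user-initiated, scheduled, and BAS-event tasks served by each robot) can be sketched as a single priority queue. The snippet below is hypothetical; the class names, priorities, and example tasks are invented and are not from the RIDE system.

```python
# Hypothetical sketch of the task model the abstract describes: a robot serving
# user-initiated tasks, daily scheduled tasks, and tasks triggered by building
# automation system (BAS) events from a single priority queue.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities stay FIFO

class TaskQueue:
    def __init__(self):
        self._heap = []

    def add(self, priority, source, description):
        # Lower priority number means more urgent.
        heapq.heappush(self._heap, (priority, next(_counter), source, description))

    def next_task(self):
        priority, _, source, description = heapq.heappop(self._heap)
        return source, description

q = TaskQueue()
q.add(2, "scheduled", "patrol lobby at 22:00")
q.add(1, "user", "bring towels to room 214")
q.add(0, "BAS-event", "fire-door left open on floor 3: investigate")

while True:
    try:
        print(q.next_task())   # BAS event first, then user task, then patrol
    except IndexError:
        break
```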

  11. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
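
    The core fusion step, Bayesian combination of an imprecise audio cue with a precise visual cue over candidate directions, can be sketched in a few lines. The snippet below assumes Gaussian-shaped likelihoods over discrete azimuths; the sigmas and measurements are illustrative, not the paper's models.

```python
# Minimal sketch of Bayesian audio-visual speaker localization over a discrete
# set of candidate directions. Likelihood shapes and widths are assumptions.
import numpy as np

angles = np.linspace(-90, 90, 181)          # candidate azimuths (degrees)
prior = np.full_like(angles, 1.0 / len(angles))

def gaussian_likelihood(measured, sigma):
    lik = np.exp(-0.5 * ((angles - measured) / sigma) ** 2)
    return lik / lik.sum()

# Audio cue is noisy but wide-range; visual cue is precise but narrow-range.
audio = gaussian_likelihood(measured=22.0, sigma=15.0)
vision = gaussian_likelihood(measured=18.0, sigma=4.0)

# Bayes: posterior ~ prior * p(audio|angle) * p(vision|angle), assuming
# the two cues are conditionally independent given the speaker direction.
posterior = prior * audio * vision
posterior /= posterior.sum()
print("fused direction estimate:", angles[posterior.argmax()], "deg")
```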

  12. System design for safe robotic handling of nuclear materials

    International Nuclear Information System (INIS)

    Drotning, W.; Wapman, W.; Fahrenholtz, J.; Kimberly, H.; Kuhlmann, J.

    1996-01-01

    Robotic systems are being developed by the Intelligent Systems and Robotics Center at Sandia National Laboratories to perform automated handling tasks with radioactive nuclear materials. These systems will reduce the occupational radiation exposure to workers by automating operations which are currently performed manually. Because the robotic systems will handle material that is both hazardous and valuable, the safety of the operations is of utmost importance; assurance must be given that personnel will not be harmed and that the materials and environment will be protected. These safety requirements are met by designing safety features into the system using a layered approach. Several levels of mechanical, electrical and software safety prevent unsafe conditions from generating a hazard, and bring the system to a safe state should an unexpected situation arise. The system safety features include the use of industrial robot standards, commercial robot systems, commercial and custom tooling, mechanical safety interlocks, advanced sensor systems, control and configuration checks, and redundant control schemes. The effectiveness of the safety features in satisfying the safety requirements is verified using a Failure Modes and Effects Analysis. This technique can point out areas of weakness in the safety design as well as areas where unnecessary redundancy may reduce the system reliability.
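
    The layered-safety idea, several independent checks that can each veto an operation and drive the system to a safe state, can be illustrated abstractly. The sketch below is an invented toy, not Sandia's design; the check names and limits are assumptions.

```python
# Illustrative sketch of "layered" safety: every motion request must pass a
# stack of independent checks, and any failure drives the system to a safe
# state instead of generating a hazard. All names and limits are invented.
from dataclasses import dataclass

@dataclass
class MotionRequest:
    speed: float          # m/s
    gripper_load: float   # kg
    interlock_closed: bool

def check_interlock(r):   return r.interlock_closed
def check_speed(r):       return r.speed <= 0.25        # slow near material
def check_payload(r):     return r.gripper_load <= 5.0  # rated capacity

SAFETY_LAYERS = [check_interlock, check_speed, check_payload]

def execute(request: MotionRequest):
    for layer in SAFETY_LAYERS:
        if not layer(request):
            return f"SAFE STATE: halted by {layer.__name__}"
    return "motion permitted"

print(execute(MotionRequest(speed=0.1, gripper_load=2.0, interlock_closed=True)))
print(execute(MotionRequest(speed=0.4, gripper_load=2.0, interlock_closed=True)))
```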

  13. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems, such as night vision, affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as for a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. The systems were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
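
    A minimal version of the histogram approach is easy to sketch with OpenCV: build a reference RGB histogram from a background frame and flag frames whose histograms diverge from it. The bin count, distance metric, and threshold below are assumptions, not the published values.

```python
# Sketch of the RGB-histogram idea from the abstract: compare each frame's
# colour histogram against a background reference and flag large deviations.
import cv2

def rgb_histogram(frame, bins=16):
    hist = cv2.calcHist([frame], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

cap = cv2.VideoCapture(0)                      # static camera (or UAV feed)
ok, background = cap.read()
assert ok, "camera not available"
ref = rgb_histogram(background)                # reference: empty scene

while True:
    ok, frame = cap.read()
    if not ok:
        break
    dist = cv2.compareHist(ref, rgb_histogram(frame), cv2.HISTCMP_BHATTACHARYYA)
    if dist > 0.3:                             # assumed decision threshold
        print("possible intruder, histogram distance =", round(float(dist), 3))
cap.release()
```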

  14. A vision fusion treatment system based on ATtiny26L

    Science.gov (United States)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle that the eyeballs follow a moving visual survey pole is put forward. In this system, the visual survey pole starts about 35 centimeters from the patient's face and then moves toward the middle position between the two eyeballs. The patient's eyeballs follow the movement of the pole; when they can no longer follow, one or both eyeballs turn away from the pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in the system; its PWM output signal drives the visual survey pole at a continuously variable speed. The motion profile of the pole is matched to the law by which the eyeballs are able to follow it.
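
    The measurement protocol can be sketched abstractly: the pole approaches from about 35 cm at a continuously varying speed (realized on the ATtiny26L as a PWM duty cycle), and the distance at which the eyes stop following is recorded. The Python sketch below only illustrates that logic; the speed profile and break model are invented, and the real device would be programmed in AVR C.

```python
# Illustrative simulation of the trial logic described above. The linear
# speed ramp and the "fusion break" threshold are invented for illustration.
def run_trial(follow_limit_cm, start_cm=35.0, dt=0.05):
    """Return the pole distance at which the patient's eyes stop following."""
    d = start_cm
    while d > 0:
        speed = 1.0 + 0.1 * (start_cm - d)   # cm/s; speeds up as it approaches
        d -= speed * dt
        if d <= follow_limit_cm:             # eyes can no longer converge
            return d
    return 0.0

breaks = [run_trial(limit) for limit in (12.0, 9.5, 8.0)]  # three sessions
print("recorded fusion-break distances (cm):", [round(b, 1) for b in breaks])
```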

  15. An Intelligent Inference System for Robot Hand Optimal Grasp Preshaping

    Directory of Open Access Journals (Sweden)

    Cabbar Veysel Baysal

    2010-11-01

    Full Text Available This paper presents a novel Intelligent Inference System (IIS) for determining an optimum preshape for multifingered robot hand grasping of a given object under a manipulation task. The IIS is formed as a hybrid agent architecture, by the synthesis of object properties, manipulation task characteristics, grasp space partitioning, low-level kinematic analysis, evaluation of contact wrench patterns via fuzzy approximate reasoning, and an ANN structure for incremental learning. The IIS is implemented in software with a robot hand simulation.
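
    The fuzzy approximate reasoning stage can be illustrated with a toy rule base: fuzzify object width and task precision, then score candidate preshapes with min/max rules. The membership functions, rules, and preshape names below are invented, in the spirit of, but not identical to, the paper's IIS.

```python
# Toy sketch of fuzzy approximate reasoning for grasp preshape selection.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def infer_preshape(width_cm, precision):  # task precision in [0, 1]
    small  = tri(width_cm, 0, 2, 6)
    medium = tri(width_cm, 4, 8, 12)
    large  = tri(width_cm, 10, 15, 25)
    fine, coarse = precision, 1.0 - precision
    # Each rule: min = AND of antecedents; max = OR across rules.
    scores = {
        "pinch":  min(small, fine),     # small object, fine task -> pinch
        "tripod": min(medium, fine),
        "power":  max(min(large, coarse), min(medium, coarse)),
    }
    return max(scores, key=scores.get), scores

best, scores = infer_preshape(width_cm=5.0, precision=0.8)
print("chosen preshape:", best, scores)
```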

  16. A vision system for a Mars rover

    Science.gov (United States)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system are described.
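
    Two stages of the described pipeline are easy to sketch: converting stereo disparity to range via Z = f·B/d, and testing a patch of the elevation map for traversability from its fitted slope and residual roughness. The parameter values below are assumptions, not JPL's.

```python
# Sketch of two stages from the abstract: stereo disparity to range, and a
# crude slope/roughness traversability test on an elevation-map patch.
import numpy as np

def disparity_to_range(disparity_px, focal_px=500.0, baseline_m=0.25):
    """Depth Z = f*B/d; zero disparity maps to infinite range."""
    d = np.asarray(disparity_px, float)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

def traversable(elev_patch, max_slope=0.3, max_roughness=0.05):
    """Fit a plane to an elevation patch; check its tilt and residual spread."""
    h, w = elev_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, elev_patch.ravel(), rcond=None)
    slope = np.hypot(coeffs[0], coeffs[1])           # plane gradient magnitude
    roughness = np.std(elev_patch.ravel() - A @ coeffs)
    return slope <= max_slope and roughness <= max_roughness

print(disparity_to_range([50, 25, 10]))              # ranges in metres
patch = np.random.default_rng(2).normal(0, 0.01, (8, 8))
print("traversable:", traversable(patch))
```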

  17. A robotic system to characterize soft tailings deposits

    Energy Technology Data Exchange (ETDEWEB)

    Lipsett, M.G.; Dwyer, S.C. [Alberta Univ., Edmonton, AB (Canada). Dept. of Mechanical Engineering

    2009-07-01

    A robotic system for characterizing soft tailings deposits was discussed in this presentation. The system was developed to reduce variability in feedstocks and process performance as well as to improve the trafficability of composite tailings (CT). The method was designed to reliably sample different locations of a soft deposit. Sensors were used to determine water content, clay content, organic matter, and strength. The system included an autonomous rover with a sensor package and teleoperation capability. The system was also designed to be used without automatic controls. The wheeled mobile robot was used to conduct ground contact and soil measurements. The gas-powered robot included on-board microcontrollers and a host computer. The system also featured traction control and fault recovery sub-systems. Wheel contact was used to estimate soil parameters. It was concluded that further research is needed to improve traction control and soil parameter estimation testing capabilities. Overall system block diagrams were included. tabs., figs.

  18. Task Analysis and Descriptions of Required Job Competencies of Robotics/Automated Systems Technicians. Outlines for New Courses and Modules.

    Science.gov (United States)

    Hull, Daniel M.; Lovett, James E.

    The six new robotics and automated systems specialty courses developed by the Robotics/Automated Systems Technician (RAST) project are described in this publication. Course titles are Fundamentals of Robotics and Automated Systems, Automated Systems and Support Components, Controllers for Robots and Automated Systems, Robotics and Automated…

  19. A Multimodal Emotion Detection System during Human-Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so that it can adapt its strategy in order to achieve a greater degree of satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
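
    The final decision rule, combining per-channel emotion scores weighted by each channel's confidence, can be sketched as follows. The emotion labels, weights, and scores below are hypothetical; the paper sets its actual rule from experiments with real users.

```python
# Hypothetical sketch of a confidence-weighted decision rule combining scores
# from a voice analyzer (GEVA-like) and a face analyzer (GEFA-like).
EMOTIONS = ("neutral", "happy", "sad", "surprise")

def fuse(voice_scores, voice_conf, face_scores, face_conf):
    fused = {e: voice_conf * voice_scores.get(e, 0.0) +
                face_conf * face_scores.get(e, 0.0) for e in EMOTIONS}
    total = sum(fused.values()) or 1.0           # normalize to a distribution
    return max(fused, key=fused.get), {e: s / total for e, s in fused.items()}

voice = {"neutral": 0.2, "happy": 0.6, "sad": 0.1, "surprise": 0.1}
face  = {"neutral": 0.1, "happy": 0.7, "sad": 0.1, "surprise": 0.1}
label, posterior = fuse(voice, 0.4, face, 0.6)   # face channel trusted more
print("detected emotion:", label, posterior)
```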

  20. The development of robot system for pressurizer maintenance in NPPs

    International Nuclear Information System (INIS)

    Kim, Seung Ho; Kim, Chang Hoi; Jung, Seung Ho; Seo, Yong Chil; Lee, Young Kwang; Go, Byung Yung; Lee, Kwang Won; Lee, Sang Ill; Yun, Jong Yeon; Lee, Hyung Soon; Park, Mig Non; Park, Chang Woo; Cheol, Kwon

    1999-12-01

    The pressurizer, which controls the pressure variation of the primary coolant system and consists of a vessel, electric heaters and a spray, is one of the safety-related pieces of equipment in nuclear power plants. It is therefore required to be inspected and maintained regularly. Because the inside of the pressurizer is contaminated by radioactivity, radiation exposure of workers during inspection and repair is inevitable. In this research, two robot systems have been developed for inspection and maintenance of the pressurizer: one for the water-filled case and one for the water-sunken case. The robot system for the water-filled case consists of two links, a movable gripper actuated by a wire string, and a support frame for attaching the robot. The other robot is equipped with a propeller in order to navigate on the water; it also carries a high-performance water-resistant camera to make inspection possible. The developed robots are designed under several constraints, such as their weight and collision with the pressurizer wall. To verify the collision-free robot link length and accessibility to any desired rod heater, the robots were simulated with 3-dimensional graphic simulation software (RobCard). To evaluate the stress of the support frame, finite element analysis was performed using the ANSYS code. (author)
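
    The collision-free link-length check that the graphic simulation performed can be illustrated with a crude planar model: scan the elbow angle of a two-link arm and ask whether some pose reaches a heater rod at a given radius while staying inside the vessel wall. The dimensions and tolerance below are invented.

```python
# Illustrative planar sketch (dimensions invented) of the kind of check the
# 3-D simulation performed: can a two-link arm reach a heater rod at radius
# r_target inside a cylindrical vessel of radius r_wall without collision?
import numpy as np

def reaches_without_collision(l1, l2, r_target, r_wall, steps=360):
    """Scan the interior elbow angle; succeed if some pose touches the target
    radius while both the elbow and the tip stay inside the vessel wall."""
    for theta in np.linspace(0.0, np.pi, steps):
        # Law of cosines: tip distance from the shoulder for elbow angle theta
        r_tip = np.sqrt(l1**2 + l2**2 - 2 * l1 * l2 * np.cos(theta))
        if abs(r_tip - r_target) < 1e-2 and l1 < r_wall and r_tip < r_wall:
            return True
    return False

# Vessel radius 1.2 m; heater rod at 0.9 m; candidate link lengths 0.6 m each.
print(reaches_without_collision(0.6, 0.6, r_target=0.9, r_wall=1.2))
```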