WorldWideScience

Sample records for visual robot navigation

  1. Autonomous Robot Navigation based on Visual Landmarks

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

    The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment, then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on naturally … The method can automatically learn and store visual landmarks, and later recognize these landmarks from arbitrary positions and thus estimate robot position and heading.

  2. Model-based visual navigation of a mobile robot

    International Nuclear Information System (INIS)

    Roening, J.

    1992-08-01

    The thesis considers the problems of visual guidance of a mobile robot. A visual navigation system is formalized, consisting of four basic components: world modelling, navigation sensing, navigation and action. According to this formalization, an experimental system is designed and realized, enabling real-world navigation experiments. A priori knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements. Two world models were developed. The first approach was a map-based model especially designed for low-level description of indoor environments. The other was a higher-level, more symbolic representation of the surroundings utilizing the spatial graph concept. Two passive vision approaches were developed to extract navigation information. With passive three-camera stereo vision, a sparse depth map of the scene was produced. Another approach employed a fish-eye lens to map the entire scene of the surroundings without camera scanning. The local path planning of the system is supported by a three-dimensional scene interpreter providing a partial understanding of scene contents. The interpreter consists of data-driven low-level stages and a model-driven high-level stage. Experiments were carried out in a simulator and on a test vehicle constructed in the laboratory. The test vehicle successfully navigated indoors.

  3. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    Science.gov (United States)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  4. Practical indoor mobile robot navigation using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2011-01-01

    This paper presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as 2D occupancy grids by a range sensor to represent local information about partial areas. The global topological map is used to indicate the connectivity of the 'places-of-interest' in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization. … The scheme is implemented on a physical robot and evaluated in a hospital environment.
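
    The record gives no implementation details, but global planning over a topological map of this kind is commonly a shortest-path search on the graph of places-of-interest. Below is a minimal sketch under that assumption; the place names and edge costs are hypothetical, and local metric planning would still run inside each partial map along the returned route.

        import heapq

        def plan_global_route(graph, start, goal):
            # Dijkstra over the topological map: nodes are places-of-interest,
            # edge weights approximate travel cost between adjacent local maps.
            dist, prev = {start: 0.0}, {}
            queue = [(0.0, start)]
            while queue:
                d, node = heapq.heappop(queue)
                if node == goal:
                    break
                if d > dist.get(node, float("inf")):
                    continue
                for neighbor, cost in graph.get(node, []):
                    nd = d + cost
                    if nd < dist.get(neighbor, float("inf")):
                        dist[neighbor], prev[neighbor] = nd, node
                        heapq.heappush(queue, (nd, neighbor))
            path, node = [goal], goal
            while node != start:          # walk the back-pointers
                node = prev[node]
                path.append(node)
            return list(reversed(path))

        # Hypothetical hospital layout: wards and corridors as graph nodes.
        topo_map = {
            "ward_A": [("corridor_1", 5.0)],
            "corridor_1": [("ward_A", 5.0), ("corridor_2", 12.0)],
            "corridor_2": [("corridor_1", 12.0), ("pharmacy", 4.0)],
            "pharmacy": [("corridor_2", 4.0)],
        }
        print(plan_global_route(topo_map, "ward_A", "pharmacy"))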

  5. Navigation Strategy by Contact Sensing Interaction for a Biped Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2008-11-01

    This report presents a basic contact interaction-based navigation strategy for a biped humanoid robot to support current visual-based navigation. The robot's arms were equipped with force sensors to detect physical contact with objects. We proposed a motion algorithm consisting of searching tasks, self-localization tasks, locomotion-direction correction tasks and obstacle avoidance tasks. Priority was given to the right-side direction when navigating the robot's locomotion. Analysis of trajectory generation, biped gait pattern, and biped walking characteristics was performed to define an efficient navigation strategy for a biped walking humanoid robot. The proposed algorithm is evaluated in an experiment with a 21-DOF humanoid robot operating in a room with walls and obstacles. The experimental results reveal good robot performance when recognizing objects by touching, grasping, and continuously generating suitable trajectories to correct direction and avoid collisions.

  6. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines grating projection profilometry using plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching required in stereo vision and without the phase unwrapping required in grating projection profilometry. First, we study the new vision sensor theoretically and build geometric and mathematical models of the grating projection stereo vision system. Second, the computation of the 3D coordinates of obstacles in the robot's visual field is studied, so that obstacles in the field of view can be located accurately. The results of simulation experiments and analysis show that this research is useful for addressing the problem of autonomous navigation of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space-exploring robots in dark, GPS-denied environments.
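
    The abstract does not reproduce the sensor's equations. As rough intuition for the depth-recovery step that any calibrated, rectified stereo rig performs, here is standard disparity-based triangulation; the parameter values are illustrative, and this is not the authors' grating-projection formulation.

        def triangulate(u_left, v_left, disparity, f, baseline, cx, cy):
            # Rectified-stereo triangulation: recover a 3D point in the left
            # camera frame from pixel coordinates and disparity.
            # f: focal length in pixels, baseline: camera separation in metres.
            if disparity <= 0:
                raise ValueError("disparity must be positive for finite depth")
            z = f * baseline / disparity      # depth from disparity
            x = (u_left - cx) * z / f         # back-project the pixel ray
            y = (v_left - cy) * z / f
            return x, y, z

        # Illustrative numbers only: 640x480 camera, 8 px disparity.
        print(triangulate(400, 260, 8.0, f=525.0, baseline=0.12,
                          cx=320.0, cy=240.0))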

  7. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the biases of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.
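
    The paper's filter estimates the full 6-DOF state plus IMU biases; as a greatly reduced illustration of the same cycle (predict with inertial data, correct with an intermittent position fix), here is a 1-D linear Kalman filter with invented noise values.

        import numpy as np

        def kf_predict(x, P, a_meas, dt, q):
            # Propagate a 1-D [position, velocity] state with an accelerometer
            # reading as the control input (IMU-driven prediction).
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([0.5 * dt**2, dt])
            x = F @ x + B * a_meas
            P = F @ P @ F.T + q * np.eye(2)
            return x, P

        def kf_update(x, P, z_pos, r):
            # Correct with a position fix (e.g., GPS) when one is available.
            H = np.array([[1.0, 0.0]])
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z_pos - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros(2), np.eye(2)
        for step in range(100):
            x, P = kf_predict(x, P, a_meas=0.1, dt=0.01, q=1e-4)
            if step % 20 == 0:            # intermittent position sensor
                x, P = kf_update(x, P, z_pos=0.0005 * step**2, r=0.05)
        print(x)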

  8. Building a grid-semantic map for the navigation of service robots through human–robot interaction

    Directory of Open Access Journals (Sweden)

    Cheng Zhao

    2015-11-01

    This paper presents an interactive approach to the construction of a grid-semantic map for the navigation of service robots in an indoor environment. It is based on the Robot Operating System (ROS) framework and contains four modules, namely the Interactive Module, Control Module, Navigation Module and Mapping Module. Three challenging issues have been the focus during its development: (i) how human voice and robot visual information could be effectively deployed in the mapping and navigation process; (ii) how semantic names could be combined with coordinate data in an online grid-semantic map; and (iii) how a localization–evaluate–relocalization method could be used in global localization based on the modified maximum particle weight of the particle swarm. A number of experiments are carried out in both simulated and real environments such as corridors and offices to verify its feasibility and performance.

  9. Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Li Wang

    2018-01-01

    In order to improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, including a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model is used to recognize different regions in rooms and corridors, the second one is used to determine where the robot should rotate, and the third one is used to decide the walking side of corridors or aisles in the room. Furthermore, the “side” recognition model can also correct the motion of the robot in real time, ensuring accurate arrival at the specified target. Moreover, semantic navigation is accomplished using only one sensor (a camera). Several experiments are conducted in a real indoor environment, demonstrating the effectiveness and robustness of the proposed perception framework.

  10. Control algorithms for autonomous robot navigation

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1985-01-01

    This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and the relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.

  11. Empirical evaluation of a practical indoor mobile robot navigation method using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2010-01-01

    This video presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as occupancy grids by a laser range finder to represent local information about partial areas. The global topological map is used to indicate the connectivity of the ‘places-of-interest’ in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization. … The video shows that the method is implemented successfully on a physical robot in a hospital environment, which provides a practical solution for indoor navigation.

  12. Mobile Robot Designed with Autonomous Navigation System

    Science.gov (United States)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place ever more requirements on them; one is that robots be capable of autonomous navigation and of recognizing the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor and automatically find its charging station; another example is the AGV tracking cart, which can follow a route and reach its destination successfully. This paper introduces a new type of robot navigation scheme: SLAM, which can build a map of a completely unknown environment while simultaneously locating the robot's own position within it, so as to achieve autonomous navigation.

  13. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    … while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...

  14. A fuzzy logic based navigation for mobile robot

    International Nuclear Information System (INIS)

    Adel Ali S Al-Jumaily; Shamsudin M Amin; Mohamed Khalil

    1998-01-01

    The main issue for an intelligent robot is how to reach its goal safely in real time when it moves in an unknown environment. Navigational planning is becoming the central issue in the development of real-time autonomous mobile robots. Behaviour-based robots have been successful in reacting to dynamic environments, but some complex and challenging problems remain. Fuzzy-based behaviours present a powerful method for solving real-time reactive navigation problems in unknown environments. We shall classify the navigation generation methods, give some characteristics of these methods, explain why fuzzy logic is suitable for the navigation of mobile robots and automated guided vehicles, and describe a reactive navigation approach that is flexible enough to react, through its behaviours, to changes in the environment. Some simulation results will be presented to show the navigation of the robot. (Author)
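
    The abstract does not list the rule base. A minimal sketch of the general idea of fuzzy-blended reactive behaviours follows; the membership thresholds and the evasive command are invented for illustration.

        def near(d):
            # Membership of "obstacle near": 1 below 0.5 m, fading to 0 at 1.5 m.
            return max(0.0, min(1.0, (1.5 - d) / 1.0))

        def fuzzy_steer(obstacle_dist, goal_bearing):
            # Two-rule reactive blend: IF obstacle near THEN turn away hard;
            # IF way clear THEN steer toward the goal.  A weighted average
            # (defuzzification) yields a single steering angle in radians.
            w_avoid = near(obstacle_dist)
            w_seek = 1.0 - w_avoid
            avoid_cmd = 0.8                        # fixed evasive turn (invented)
            return w_avoid * avoid_cmd + w_seek * goal_bearing

        print(fuzzy_steer(0.6, goal_bearing=-0.3))   # mostly avoidance
        print(fuzzy_steer(3.0, goal_bearing=-0.3))   # mostly goal seeking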

  15. Neurosurgical robotic arm drilling navigation system.

    Science.gov (United States)

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify the lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was about to be drilled through. Three kinds of experiments were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Bio-robots automatic navigation with electrical reward stimulation.

    Science.gov (United States)

    Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    Bio-robots that are controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, with the animals' intelligence ignored. This paper proposes a new method to realize automatic navigation for bio-robots, with electrical micro-stimulation as real-time rewards. Due to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route by rewards, and it corrects its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the control method in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.

  17. Robotics_MobileRobot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots and rovers exploring planets need to autonomously navigate to specified locations. Advanced Scientific Concepts, Inc. (ASC) and the University of Minnesota...

  18. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    Science.gov (United States)

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In previous work by X. Grossman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they are moving anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability of each robot to detect other moving objects in the same free navigation space and determine the other objects' perceived size, their velocity and their directions. Based on these assumptions, a traffic priority language is needed for each robot, making it able to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic-priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.

  19. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled under wheel slip are presented. It is shown that the use of computer vision allows one to correct the erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation using this control system are presented.

  20. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features.

    Science.gov (United States)

    Ye, Cang; Hong, Soonhac; Tamjidi, Amirhossein

    2015-10-01

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers among the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method results in accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the RNA user in the home environment and announces the points of interest and navigational commands to the user through a speech interface. This work was motivated by the limitations of existing navigation technology for the visually impaired. Most existing methods use a point/line measurement sensor for indoor object detection; therefore, they lack the capability to detect 3D objects and position a blind traveler. Stereovision has been used in recent research. However, it cannot provide reliable depth data for object detection, and it tends to produce lower localization accuracy because its depth measurement error increases quadratically with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time-of-flight camera for both 6-DOF PE and 3D object …
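
    The RANSAC gating step described above can be sketched as follows. For brevity this toy version hypothesizes a translation-only motion model from a single match, whereas the paper's egomotion is a full 6-DOF transform; the data below are simulated.

        import numpy as np

        def ransac_inliers(p_prev, p_curr, iters=200, tol=0.05, seed=0):
            # Keep the largest consensus set of feature correspondences under
            # a translation-only motion hypothesis drawn from one random match.
            rng = np.random.default_rng(seed)
            best = np.zeros(len(p_prev), dtype=bool)
            for _ in range(iters):
                i = rng.integers(len(p_prev))
                t = p_curr[i] - p_prev[i]                  # candidate motion
                residual = np.linalg.norm(p_curr - (p_prev + t), axis=1)
                inliers = residual < tol
                if inliers.sum() > best.sum():
                    best = inliers
            return best

        rng = np.random.default_rng(1)
        prev = rng.uniform(0.0, 1.0, (50, 2))
        curr = prev + np.array([0.2, -0.1])                # true shift
        curr[:10] += rng.uniform(-0.5, 0.5, (10, 2))       # corrupt 10 matches
        print(ransac_inliers(prev, curr).sum(), "inliers kept for the update")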

  1. Laboratory experiments in mobile robot navigation

    International Nuclear Information System (INIS)

    Kar, Asim; Pal, Prabir K.

    1997-01-01

    Mobile robots have potential applications in remote surveillance and operation in hazardous areas. To be effective, they must have the ability to navigate on their own to desired locations. Several experimental navigation runs of the developed mobile robot have been conducted. The robot has three wheels, of which the front wheel is steered and the rear wheels are driven. The robot is equipped with an ultrasonic range sensor, which is rotated to obtain range data in all directions. The range data are fed to the input of a neural net, whose output steers the robot towards the goal. The robot is powered by batteries (12 V, 10 Ah). It has an onboard stepper motor controller for driving the wheels and the ultrasonic setup. It also has an onboard computer which runs the navigation program NAV. This program sends the range data and configuration parameters to the operator's console program OCP, running on a stationary PC, through radio communication on a serial line. Through OCP, an operator can monitor the progress of the robot from a distant control room and intervene if necessary. In this paper, the control modules of the mobile robot, its modes of operation and the results of some of the recorded experimental runs are reported. It is seen that the trained net guides the mobile robot through gaps of 1 m and above to its destination with about 84% success, measured over a small sample of 38 runs.
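
    The report does not specify the network architecture. A minimal sketch of the sensing-to-steering mapping it describes, with an invented layer size and untrained placeholder weights (a real system would learn them from recorded runs):

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder weights only, to show the data flow; trained values
        # would come from supervised runs like those described above.
        W1 = rng.normal(scale=0.5, size=(16, 25))   # 24 ranges + goal bearing
        b1 = np.zeros(16)
        W2 = rng.normal(scale=0.5, size=(1, 16))
        b2 = np.zeros(1)

        def steer(ranges24, goal_bearing):
            # Map one sweep of ultrasonic ranges plus the bearing to the goal
            # onto a single steering command in [-1, 1].
            x = np.concatenate([ranges24, [goal_bearing]])
            h = np.tanh(W1 @ x + b1)
            return float(np.tanh(W2 @ h + b2)[0])

        print(steer(np.full(24, 2.0), goal_bearing=0.3))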

  2. Solar-based navigation for robotic explorers

    Science.gov (United States)

    Shillcutt, Kimberly Jo

    2000-12-01

    This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency.
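
    The thesis' ephemeris computations are not reproduced in the record; a standard textbook approximation of solar altitude and azimuth from latitude, day of year, and local solar time conveys the idea. Accuracy is coarse, and the azimuth convention below is simplistic for southern latitudes.

        import math

        def solar_altitude_azimuth(lat_deg, day_of_year, solar_hour):
            # Approximate solar altitude/azimuth (degrees) via declination
            # and hour angle; good enough to illustrate shadow prediction.
            decl = math.radians(23.45) * math.sin(
                2 * math.pi * (284 + day_of_year) / 365.0)
            hour_angle = math.radians(15.0 * (solar_hour - 12.0))
            lat = math.radians(lat_deg)
            sin_alt = (math.sin(lat) * math.sin(decl)
                       + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
            alt = math.asin(sin_alt)
            cos_az = ((math.sin(decl) - math.sin(alt) * math.sin(lat))
                      / (math.cos(alt) * math.cos(lat)))
            az = math.acos(max(-1.0, min(1.0, cos_az)))
            if hour_angle > 0:            # afternoon: sun west of meridian
                az = 2 * math.pi - az
            return math.degrees(alt), math.degrees(az)

        # Antarctic summer example (Nomad-like setting): ~75 deg S, mid-December.
        print(solar_altitude_azimuth(-75.0, day_of_year=350, solar_hour=15.0))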

  3. Benchmark Framework for Mobile Robots Navigation Algorithms

    Directory of Open Access Journals (Sweden)

    Nelson David Muñoz-Ceballos

    2014-01-01

    Despite the wide variety of studies and research on mobile robot systems, performance metrics are not often examined. This makes it difficult to establish an objective comparison of achievements. In this paper, the navigation of an autonomous mobile robot is evaluated. Several metrics are described. These metrics, collectively, provide an indication of navigation quality, useful for comparing and analyzing navigation algorithms of mobile robots. This method is suggested as an educational tool, which allows the student to optimize the quality of the algorithms with respect to important aspects of science, technology and engineering teaching, such as energy consumption, optimization and design.
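
    The specific metrics are not enumerated in the record. As a plausible instance of such a benchmark, the sketch below computes three commonly used ones (path length, cumulative turning as a smoothness proxy, and minimum obstacle clearance as a safety proxy) from a logged trajectory.

        import math

        def path_metrics(xy, clearances):
            # Quality metrics for a logged robot trajectory.
            length = sum(math.dist(xy[i], xy[i + 1])
                         for i in range(len(xy) - 1))
            headings = [math.atan2(y2 - y1, x2 - x1)
                        for (x1, y1), (x2, y2) in zip(xy, xy[1:])]
            turning = sum(abs(math.atan2(math.sin(b - a), math.cos(b - a)))
                          for a, b in zip(headings, headings[1:]))
            return {"length_m": length,
                    "turning_rad": turning,
                    "min_clearance_m": min(clearances)}

        trajectory = [(0, 0), (1, 0.1), (2, 0.4), (3, 0.4)]
        print(path_metrics(trajectory, clearances=[0.9, 0.7, 0.8, 1.2]))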

  4. An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas

    Directory of Open Access Journals (Sweden)

    David Zapata

    2013-01-01

    There are many outdoor robotic applications where a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (like path planning) require the use of known maps or previous information about the environment. This work presents a system composed of a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real time by putting together the information from the camera and the positioning system of the ground robot. A set of experiments was carried out with the purpose of verifying the system's applicability. The experiments were performed in a simulation environment and outdoors with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments.

  5. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversability

    Science.gov (United States)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.

  6. A Qualitative Approach to Mobile Robot Navigation Using RFID

    International Nuclear Information System (INIS)

    Hossain, M; Rashid, M M; Bhuiyan, M M I; Ahmed, S; Akhtaruzzaman, M

    2013-01-01

    A Radio Frequency Identification (RFID) system allows automatic identification of items with RFID tags using radio waves. As each RFID tag has a unique identification number, it is also possible to detect the specific region where an RFID tag lies. Recently, RFID has been widely used in mobile robot navigation, localization, and mapping in both indoor and outdoor environments. This paper presents a navigation strategy for an autonomous mobile robot using a passive RFID system. Conventional approaches, such as landmarks or dead reckoning with an excessive number of sensors, entail complexities in establishing the navigation and localization process. The proposed method offers lower complexity in the navigation strategy as well as estimation of not only the position but also the orientation of the autonomous robot. In this research, a polar coordinate system is adopted on the navigation surface, where RFID tags are placed in a grid with constant displacements. This paper also presents performance comparisons among various grid architectures through simulation to establish a better solution for the navigation system. In addition, some stationary obstacles are introduced in the navigation environment to verify the viability of the navigation process of the autonomous mobile robot.

  7. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, laser sensors, etc., suffer several drawbacks related either to the physical limitations of the sensors or to their high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  8. Navigation strategies for multiple autonomous mobile robots moving in formation

    Science.gov (United States)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.

  9. Tandem-robot assisted laparoscopic radical prostatectomy to improve the neurovascular bundle visualization: a feasibility study.

    Science.gov (United States)

    Han, Misop; Kim, Chunwoo; Mozer, Pierre; Schäfer, Felix; Badaan, Shadie; Vigaru, Bogdan; Tseng, Kenneth; Petrisor, Doru; Trock, Bruce; Stoianovici, Dan

    2011-02-01

    To examine the feasibility of image-guided navigation using transrectal ultrasound (TRUS) to visualize the neurovascular bundle (NVB) during robot-assisted laparoscopic radical prostatectomy (RALP). The preservation of the NVB during radical prostatectomy improves the postoperative recovery of sexual potency. The accompanying blood vessels in the NVB can serve as a macroscopic landmark to localize the microscopic cavernous nerves in the NVB. A novel, robotic transrectal ultrasound probe manipulator (TRUS Robot) and three-dimensional (3-D) reconstruction software were developed and used concurrently with the daVinci surgical robot (Intuitive Surgical, Inc., Sunnyvale, CA) in a tandem-robot assisted laparoscopic radical prostatectomy (T-RALP). After appropriate approval and informed consent were obtained, 3 subjects underwent T-RALP without associated complications. The TRUS Robot allowed a steady handling and remote manipulation of the TRUS probe during T-RALP. It also tracked the TRUS probe position accurately and allowed 3-D image reconstruction of the prostate and surrounding structures. Image navigation was performed by observing the tips of the daVinci surgical instruments in the live TRUS image. Blood vessels in the NVB were visualized using Doppler ultrasound. Intraoperative 3-D image-guided navigation in T-RALP is feasible. The use of TRUS during radical prostatectomy can potentially improve the visualization and preservation of the NVB. Further studies are needed to assess the clinical benefit of T-RALP. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Human-robot collaborative navigation for autonomous maintenance management of nuclear installation

    International Nuclear Information System (INIS)

    Nugroho, Djoko Hari

    2002-01-01

    Development of human-robot collaborative navigation for autonomous maintenance management of a nuclear installation has been conducted. The human-robot collaborative system operates by switching commands between autonomous navigation and manual navigation that incorporates human intervention. The autonomous navigation path is computed using a novel algorithm, the MLG method, based on Lozano-Pérez's visibility graph. The MLG optimizes the shortest distance under safety constraints, while manual navigation is performed using robot teleoperation tools. Experiments with the MLG autonomous navigation system were conducted six times, with variation of the 3-D coordinates of the starting and destination points. The experiments show good performance of the autonomous robot in maneuvering to avoid collisions with obstacles. The switching of navigation modes is interpreted using open/close commands over RS-232C, implemented in LabVIEW.

  11. Visual and tactile interfaces for bi-directional human robot communication

    Science.gov (United States)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates them having the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure the classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.

  12. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Science.gov (United States)

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  13. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
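
    The entropy measure itself is standard Shannon entropy over the image's gray-level histogram; a minimal sketch (8-bit grayscale assumed, synthetic test patches):

        import numpy as np

        def image_entropy(gray):
            # Shannon entropy (bits) of an 8-bit grayscale histogram.
            # Low entropy suggests a single dominant object/scene; high
            # entropy suggests visual clutter (several objects).
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        flat_patch = np.full((64, 64), 128, dtype=np.uint8)          # one "object"
        noisy_patch = np.random.randint(0, 256, (64, 64), np.uint8)  # clutter
        print(image_entropy(flat_patch), image_entropy(noisy_patch))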

  14. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  15. Integrated navigation and control software system for MRI-guided robotic prostate interventions.

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S; DiMaio, Simon P; Gobbi, David G; Csoma, Csaba; Mewes, Philip W; Fichtinger, Gabor; Tempany, Clare M; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called "workphases" that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location is specified. Copyright 2009 Elsevier Ltd. All rights reserved.

  16. Integrated navigation and control software system for MRI-guided robotic prostate interventions

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; DiMaio, Simon P.; Gobbi, David G.; Csoma, Csaba; Mewes, Philip W.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called “workphases” that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location is specified. PMID:19699057

  17. Stereo-Based Visual Odometry for Autonomous Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ioannis Kostavelis

    2016-02-01

    Mobile robots should possess accurate self-localization capabilities in order to be successfully deployed in their environment. A solution to this challenge may be derived from visual odometry (VO), which is responsible for estimating the robot's pose by analysing a sequence of images. The present paper proposes an accurate, computationally-efficient VO algorithm relying solely on stereo vision images as inputs. The contribution of this work is twofold. Firstly, it suggests a non-iterative outlier detection technique capable of efficiently discarding the outliers of matched features. Secondly, it introduces a hierarchical motion estimation approach that produces refinements to the global position and orientation for each successive step. Moreover, for each subordinate module of the proposed VO algorithm, custom non-iterative solutions have been adopted. The accuracy of the proposed system has been evaluated and compared with competent VO methods along DGPS-assessed benchmark routes. Experimental results of relevance to rough terrain routes, including both simulated and real outdoors data, exhibit remarkable accuracy, with positioning errors lower than 2%.

  18. Outdoor navigation of an inspection robot by means of global guidance feedback

    International Nuclear Information System (INIS)

    Segovia de los R, A.; Bucio V, F.; Garduno G, M.

    2008-01-01

    The objective of this article is to present an inspection system for a mobile robot navigating outdoors by means of feedback of its instantaneous heading with respect to a global reference throughout the displacement. The robot moves obeying commands from a teleoperator, who indicates the desired directions through an operation console; the robot uses information provided by an electronic compass. The mobile robot employed in the experiments is a Pioneer 3-AT, which has the sensor suite required for more autonomous operation. The electronic compass provides geographic heading information encoded in an SPI format, so an economical general-purpose microcontroller (μC) has been employed to convert the information to the RS-232 format used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program is used to generate the commands for robot navigation control and to display a graphical user interface that receives the operator's orders. This research is part of a more ambitious project that aims at an inspection and monitoring system for sites where high radiation levels could exist, for which an outdoor navigation system could be very useful. Besides the robot's own sensors, the complete system will include a number of sensors appropriate to the variables to be monitored. The resulting measurement values will be visualized in real time in the graphical user interface, thanks to bidirectional wireless communication between the operating station and the mobile robot. (Author)

  19. Navigation control of a multi-functional eye robot

    International Nuclear Information System (INIS)

    Ali, F.A.M.; Hashmi, B.; Younas, A.; Abid, B.

    2016-01-01

    The field of robotics has advanced rigorously over the past few decades. Robots are being used in different fields of science as well as warfare. Research shows that in the near future, robots will be able to serve in fighting wars. Different countries and their armies have already deployed several military robots. However, there exist some drawbacks of robots, such as their inefficiency and inability to work under abnormal conditions. The ascent of artificial intelligence may resolve this issue in the coming future. The main focus of this paper is to provide a low-cost, long-range, and efficient mechanical as well as software design for an Eye Robot. Using a blend of robotics and image processing with the addition of artificial-intelligence path navigation techniques, this project is designed and implemented so that the robot (including a robotic arm and camera) is controlled manually through a 2.4 GHz RF module. The autonomous function of the robot includes navigation based on the path assigned to it. The path is drawn in a VB-based application and then transferred to the robot wirelessly or through a serial port. Wi-Fi-based video streaming with Optical Character Recognition (OCR) can also be observed on remote devices like laptops. (author)

  20. Interaction dynamics of multiple mobile robots with simple navigation strategies

    Science.gov (United States)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.

  1. Behaviour based Mobile Robot Navigation Technique using AI System: Experimental Investigation on Active Media Pioneer Robot

    Directory of Open Access Journals (Sweden)

    S. Parasuraman, V. Ganapathy

    2012-10-01

    A key issue in the research of an autonomous robot is the design and development of the navigation technique that enables the robot to navigate in a real-world environment. In this research, the issues investigated and methodologies established include (a) design of the individual behaviors and behavior rule selection using an alpha-level fuzzy logic system, (b) design of the controller, which maps the sensor inputs to the motor outputs through a model-based fuzzy logic inference system, and (c) formulation of the decision-making process using an alpha-level fuzzy logic system. The proposed method is applied to an Active Media Pioneer robot, and the results are discussed and compared with the most accepted methods. This approach provides a formal methodology for representing and implementing the human expert's heuristic knowledge and perception-based action in mobile robot navigation. In this approach, the operational strategies of the human expert driver are transferred via fuzzy logic to the robot navigation in the form of a set of simple conditional statements composed of linguistic variables. Keywords: mobile robot, behavior-based control, fuzzy logic, alpha-level fuzzy logic, obstacle avoidance behavior, goal-seek behavior

  2. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    Science.gov (United States)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method, where neural networks control the wheelchair robot to reach the goal location safely by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller with the Conjugate Gradient Backpropagation training algorithm gives a robust response to guide the mobile robot accurately to the goal position.

  3. Memristive device based learning for navigation in robots.

    Science.gov (United States)

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities are difficult to achieve in robots, especially if done in real time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities to a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  4. Real Time Mapping and Dynamic Navigation for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Maki K. Habib

    2008-11-01

    This paper discusses the importance, the complexity and the challenges of mapping a mobile robot's unknown and dynamic environment, besides the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within unknown environments, and then introduces a solution to the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally, using the concept of occupancy grids and the fusion of information from multiple ultrasonic sensors, while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.
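
    A common concrete form of such incremental grid mapping, shown here only as a plausible sketch with invented increment values, is a per-cell log-odds update: repeated ultrasonic readings accumulate evidence, so a single spurious echo gets outvoted over time.

        import math

        L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (invented values)

        def update_cell(logodds, hit):
            # Accumulate evidence from repeated sonar readings; clamping keeps
            # the cell able to change its mind if the environment changes.
            return max(-4.0, min(4.0, logodds + (L_OCC if hit else L_FREE)))

        def occupancy_prob(logodds):
            return 1.0 - 1.0 / (1.0 + math.exp(logodds))

        l = 0.0
        for hit in [True, True, False, True]:   # fused readings for one cell
            l = update_cell(l, hit)
        print(round(occupancy_prob(l), 3))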

  5. Exploration and Navigation for Mobile Robots With Perceptual Limitations

    Directory of Open Access Journals (Sweden)

    Leonardo Romero

    2006-09-01

    To learn a map of an environment, a mobile robot has to explore its workspace using its sensors. Sensors are noisy and have perceptual limitations that must be considered while learning a map. This paper considers a mobile robot with sensor perceptual limitations and introduces a new method for exploring and navigating autonomously in indoor environments. To minimize the risk of collisions, as well as to not exceed the range of the sensors, we introduce the concept of a travel space as a way to associate costs with grid cells of the map, based on distances to obstacles. During exploration the mobile robot minimizes its movements, including rotations, to reach the nearest unexplored region of the environment, using a dynamic programming algorithm. Once the exploration ends, the travel space is used to form a roadmap, a net of safe roads that the mobile robot can use for navigation. These exploration and navigation methods are tested using a simulated and a real mobile robot with promising results.
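
    The record does not give the cost function. One plausible reading of the travel-space idea is a clearance-based cost per grid cell, sketched below with invented parameters; a brute-force distance computation stands in for a proper distance transform.

        import numpy as np

        def travel_space(grid, r_safe=3.0):
            # Assign each free cell a cost that grows as it nears an obstacle,
            # so planners prefer "roads" that keep sensor-safe clearance.
            # grid: 2-D array, 1 = obstacle, 0 = free.
            obs = np.argwhere(grid == 1)
            cost = np.zeros(grid.shape)
            for (i, j), _ in np.ndenumerate(grid):
                if grid[i, j] == 1:
                    cost[i, j] = np.inf          # never traverse obstacles
                    continue
                d = np.sqrt(((obs - (i, j)) ** 2).sum(axis=1)).min()
                cost[i, j] = max(0.0, (r_safe - d) / r_safe)  # 0 when clear
            return cost

        demo = np.zeros((6, 6), dtype=int)
        demo[2, 3] = 1
        print(np.round(travel_space(demo), 2))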

  6. Exploration and Navigation for Mobile Robots With Perceptual Limitations

    Directory of Open Access Journals (Sweden)

    Eduardo F. Morales

    2008-11-01

    Full Text Available To learn a map of an environment, a mobile robot has to explore its workspace using its sensors. Sensors are noisy and have perceptual limitations that must be considered while learning a map. This paper considers a mobile robot with sensor perceptual limitations and introduces a new method for exploring and navigating autonomously in indoor environments. To minimize the risk of collisions, as well as not to exceed the range of the sensors, we introduce the concept of a travel space as a way to associate costs to grid cells of the map, based on distances to obstacles. During exploration the mobile robot minimizes its movements, including rotations, to reach the nearest unexplored region of the environment, using a dynamic programming algorithm. Once the exploration ends, the travel space is used to form a roadmap, a net of safe roads that the mobile robot can use for navigation. These exploration and navigation methods are tested using a simulated and a real mobile robot, with promising results.

  7. Navigation of robotic system using cricket motes

    Science.gov (United States)

    Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.

    2011-06-01

    This paper presents a novel algorithm for self-mapping of cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that can provide an indoor localization service to its users via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in an indoor location.
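
    Trilateration from beacon ranges, which the abstract uses to build the rigid coordinate system, reduces to a linear least-squares problem once the quadratic range equations are differenced. A minimal sketch, with hypothetical beacon positions:

    ```python
    import numpy as np

    def trilaterate(beacons, ranges):
        """Least-squares position from >= 3 beacon positions and range readings.

        Linearizes the range equations by subtracting the first one:
          (x - xi)^2 + (y - yi)^2 = ri^2   ->   A p = b
        """
        P = np.asarray(beacons, float)
        r = np.asarray(ranges, float)
        A = 2.0 * (P[1:] - P[0])
        b = (r[0] ** 2 - r[1:] ** 2
             + np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Hypothetical ceiling beacon positions and noisy ranges to an unknown point
    beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (4.0, 3.0)]
    true = np.array([1.5, 2.0])
    ranges = [np.linalg.norm(true - b) + np.random.normal(0, 0.02) for b in beacons]
    print(trilaterate(beacons, ranges))     # close to (1.5, 2.0)
    ```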

  8. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    Science.gov (United States)

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

    Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have expanded as well. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority compared to traditional techniques prior to assimilation of their use amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors and utilizing exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  9. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Directory of Open Access Journals (Sweden)

    Lobo Pereira Fernando

    2010-02-01

    Full Text Available Abstract Background The combination of robotic tools with assistance technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested in a population of seven volunteers: three elder, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed low dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how

  10. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

    As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for position determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) a high-level system for management of lower-level functions to achieve specific functional goals; (2) an intermediate level of goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) other supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication

  11. Illumination Tolerance for Visual Navigation with the Holistic Min-Warping Method

    Directory of Open Access Journals (Sweden)

    Ralf Möller

    2014-02-01

    Full Text Available Holistic visual navigation methods are an emerging alternative to the ubiquitous feature-based methods. Holistic methods match entire images pixel-wise instead of extracting and comparing local feature descriptors. In this paper we investigate which pixel-wise distance measures are most suitable for the holistic min-warping method with respect to illumination invariance. Two novel approaches are presented: tunable distance measures—weighted combinations of illumination-invariant and illumination-sensitive terms—and two novel forms of “sequential” correlation, which are only invariant against intensity shifts but not against multiplicative changes. Navigation experiments on indoor image databases collected at the same locations but under different conditions of illumination demonstrate that tunable distance measures perform optimally when mixing their two portions instead of using the illumination-invariant term alone. Sequential correlation performs best among all tested methods; an approximated form performs equally well but much faster. Mixing with an additional illumination-sensitive term is not necessary for sequential correlation. We show that min-warping with approximated sequential correlation can successfully be applied to visual navigation of cleaning robots.
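
    A hedged sketch of the tunable-measure idea: an illumination-invariant term (here 1 minus normalized cross-correlation) mixed with an illumination-sensitive term (here a normalized SSD). The concrete terms and the weight are placeholders; the paper defines its own measures.

    ```python
    import numpy as np

    def ssd(a, b):
        """Illumination-sensitive term: sum of squared differences."""
        return np.sum((a - b) ** 2)

    def ncc_distance(a, b):
        """Illumination-invariant term: 1 - normalized cross-correlation."""
        a0, b0 = a - a.mean(), b - b.mean()
        denom = np.linalg.norm(a0) * np.linalg.norm(b0)
        return 1.0 - (a0.ravel() @ b0.ravel()) / denom if denom else 1.0

    def tunable_distance(a, b, w=0.3):
        """Weighted mix of invariant and sensitive terms, w in [0, 1]."""
        return (1.0 - w) * ncc_distance(a, b) + w * ssd(a, b) / a.size

    img1 = np.random.rand(16, 64)                   # stand-ins for panoramic images
    img2 = 1.3 * img1 + 0.1                         # same view, changed illumination
    print(tunable_distance(img1, img2, w=0.0))      # invariant part alone: near 0
    print(tunable_distance(img1, img2, w=0.5))      # mixed measure: penalized
    ```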

  12. Integrated navigation of aerial robot for GPS and GPS-denied environment

    International Nuclear Information System (INIS)

    Suzuki, Satoshi; Min, Hongkyu; Nonami, Kenzo; Wada, Tetsuya

    2016-01-01

    In this study, a novel robust navigation system for an aerial robot in GPS and GPS-denied environments is proposed. Generally, an aerial robot uses position and velocity information from the Global Positioning System (GPS) for guidance and control. However, GPS cannot be used in several environments: for example, it suffers large errors near buildings and trees, indoors, and so on. In such GPS-denied environments, Light Detection and Ranging (LIDAR) sensor based navigation systems have generally been used. However, the LIDAR sensor also has a weakness: it cannot be used in open outdoor environments where GPS is available. Therefore, it is desirable to develop an integrated navigation system which applies seamlessly to both GPS and GPS-denied environments. In this paper, an integrated navigation system for an aerial robot using GPS and LIDAR is developed. The navigation system is designed based on an Extended Kalman Filter, and the effectiveness of the developed system is verified by numerical simulation and experiment. (paper)
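
    The switching idea can be sketched with a constant-velocity Kalman filter whose position fix comes from GPS when available and from LIDAR-based localization otherwise; only the measurement covariance changes. All models and noise levels below are assumptions, not the paper's EKF design.

    ```python
    import numpy as np

    # Constant-velocity model over the state [x, y, vx, vy]
    dt = 0.1
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])          # both sensors give (x, y)
    Q = 0.01 * np.eye(4)
    R_GPS, R_LIDAR = 2.0 * np.eye(2), 0.05 * np.eye(2)  # assumed noise levels

    def step(x, P, z, gps_ok):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with whichever sensor is currently usable
        R = R_GPS if gps_ok else R_LIDAR
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.zeros(4), np.eye(4)
    for k in range(100):
        gps_ok = k < 50                     # e.g. the robot flies behind a building
        z = np.array([0.5 * k * dt, 0.2 * k * dt]) + np.random.normal(0, 0.1, 2)
        x, P = step(x, P, z, gps_ok)
    print(x[:2])                            # fused position estimate
    ```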

  13. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Directory of Open Access Journals (Sweden)

    Emmanuele eTidoni

    2014-06-01

    Full Text Available Advancements in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and foster the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot through the combination of multisensory feedback to a BCI user.

  14. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  15. Mobile robot navigation in unknown static environments using ANFIS controller

    Directory of Open Access Journals (Sweden)

    Anish Pandey

    2016-09-01

    Full Text Available Navigation and obstacle avoidance are the most important tasks for any mobile robot. This article presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) controller for mobile robot navigation and obstacle avoidance in unknown static environments. Different sensors, such as an ultrasonic range finder and a Sharp infrared range sensor, are used to detect forward obstacles in the environment. The inputs of the ANFIS controller are the obstacle distances obtained from the sensors, and the controller output is a robot steering angle. The primary objective of the present work is to use the ANFIS controller to guide the mobile robot in the given environments. Computer simulations are conducted in MATLAB and implemented in real time in C/C++ on an Arduino microcontroller-based mobile robot. Moreover, successful experimental results on the actual mobile robot demonstrate the effectiveness and efficiency of the proposed controller.
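
    A compact sketch of a first-order Sugeno inference of the kind ANFIS tunes: rule firing strengths from triangular memberships over the two obstacle distances, and a weighted average of linear consequents as the steering angle. Membership ranges and consequent coefficients are invented placeholders standing in for trained parameters.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def steer(d_left, d_right):
        """Two inputs (obstacle distances, m) -> steering angle (deg)."""
        near = lambda d: tri(d, -0.5, 0.0, 1.0)
        far = lambda d: tri(d, 0.5, 2.0, 3.5)
        # (firing strength, linear consequent) per rule; coefficients are
        # placeholders for what an ANFIS would learn from data
        rules = [
            (near(d_left) * far(d_right),  30.0 - 10.0 * d_left),   # veer right
            (far(d_left) * near(d_right), -30.0 + 10.0 * d_right),  # veer left
            (far(d_left) * far(d_right),    0.0),                   # go straight
            (near(d_left) * near(d_right), 45.0 * np.sign(d_right - d_left)),
        ]
        w = np.array([r[0] for r in rules])
        f = np.array([r[1] for r in rules])
        return float(w @ f / (w.sum() + 1e-9))   # weighted-average defuzzification

    print(steer(0.4, 2.5))   # obstacle close on the left -> positive (right) turn
    ```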

  16. Robot navigation in unknown terrains: Introductory survey of non-heuristic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (US); Kareti, S.; Shi, Weimin [Old Dominion Univ., Norfolk, VA (US). Dept. of Computer Science; Iyengar, S.S. [Louisiana State Univ., Baton Rouge, LA (US). Dept. of Computer Science

    1993-07-01

    A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly, without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed if the terrain model is known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.
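
    A classic example of such a provably correct algorithm is the Bug2 strategy: head for the goal along the start-goal line (the m-line), follow an obstacle's boundary on contact, and leave it when back on the m-line closer to the goal. The toy sketch below assumes a single disc obstacle so that boundary following can be parameterized by angle:

    ```python
    import numpy as np

    def dist_to_mline(p, s, g):
        """Perpendicular distance from p to the line through start s and goal g."""
        v, w = g - s, p - s
        return abs(v[0] * w[1] - v[1] * w[0]) / np.linalg.norm(v)

    def bug2(start, goal, center, radius, step=0.02, tol=0.05):
        s, g, c = (np.asarray(v, float) for v in (start, goal, center))
        p, path = s.copy(), [s.copy()]
        for _ in range(100000):
            if np.linalg.norm(g - p) < tol:
                return np.array(path)                    # goal reached
            ahead = p + step * (g - p) / np.linalg.norm(g - p)
            if np.linalg.norm(ahead - c) > radius:       # free: move toward goal
                p = ahead
            else:                                        # hit: follow the boundary
                d_hit = np.linalg.norm(g - p)
                ang = np.arctan2(p[1] - c[1], p[0] - c[0])
                for _ in range(100000):
                    ang += step / radius                 # circle the obstacle
                    p = c + radius * np.array([np.cos(ang), np.sin(ang)])
                    path.append(p.copy())
                    if (dist_to_mline(p, s, g) < step    # back on the m-line
                            and np.linalg.norm(g - p) < d_hit - step):
                        break                            # and closer: leave
            path.append(p.copy())
        raise RuntimeError("goal unreachable under Bug2 assumptions")

    path = bug2(start=(0, 0), goal=(4, 0), center=(2, 0), radius=0.8)
    print(len(path), path[-1])                           # ends near (4, 0)
    ```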

  17. Multi-Sensor SLAM Approach for Robot Navigation

    Directory of Open Access Journals (Sweden)

    Sid Ahmed BERRABAH

    2010-12-01

    Full Text Available To be able to operate and act successfully, a robot needs to know at any time where it is. This means the robot has to find out its location relative to the environment. This contribution addresses increasing the accuracy of mobile robot positioning in large outdoor environments based on data fusion from different sensors: camera, GPS, inertial navigation system (INS), and wheel encoders. The fusion is done in a Simultaneous Localization and Mapping (SLAM) approach. The paper gives an overview of the proposed algorithm and discusses the obtained results.

  18. Localization from Visual Landmarks on a Free-Flying Robot

    Science.gov (United States)

    Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert

    2016-01-01

    We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.

  19. Markovian robots: Minimal navigation strategies for active particles

    Science.gov (United States)

    Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando

    2018-04-01

    We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain—ensuring the absence of fixed points in the dynamics—with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite the strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas into practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that it is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
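
    The simulation backbone of such a one-bit NCS can be sketched as a run-and-tumble process: a two-state controller whose switching rate depends only on the instantaneous local field value, with no stored history. Whether and how a given motif biases the robot toward, say, field maxima is exactly what the paper analyzes; the motif and rates below are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def field(p):
        """Static scalar field with a maximum at the origin."""
        return np.exp(-np.dot(p, p) / 16.0)

    def simulate(steps=50000, v=1.0, dt=0.1):
        """Two-state Boolean NCS: the 0 -> 1 transition triggers a random
        reorientation (a tumble); state 1 relaxes back to 0 at a constant rate.
        The 0 -> 1 rate depends only on the instantaneous local field value."""
        p = np.array([3.0, -3.0])
        theta, state = 0.0, 0
        sampled = 0.0
        for _ in range(steps):
            if state == 0 and rng.random() < 2.0 * (1.0 - field(p)) * dt:
                state = 1
                theta = rng.uniform(0.0, 2.0 * np.pi)    # tumble
            elif state == 1 and rng.random() < 5.0 * dt:
                state = 0                                # recover
            p = p + v * dt * np.array([np.cos(theta), np.sin(theta)])
            sampled += field(p)
        return sampled / steps

    print(simulate())   # time-averaged field value along the trajectory
    ```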

  20. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    Science.gov (United States)

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the animal's characteristics in navigation. This paper proposes a new method for bio-robots' automatic navigation, combining a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animals. Given a graded electrical reward, the animal, e.g. the rat, intends to seek the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route through co-learning. This work provides significant inspiration for the practical development of bio-robots' navigation with hybrid intelligence.

  1. Neurobiologically inspired mobile robot navigation and planning

    Directory of Open Access Journals (Sweden)

    Mathias Quoy

    2007-11-01

    Full Text Available After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type “transition cells” that encompasses traditional “place cells”.

  2. Navigation Algorithm Using Fuzzy Control Method in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Cviklovič Vladimír

    2016-03-01

    Full Text Available Navigation methods are being continuously developed globally. The aim of this article is to test a fuzzy control algorithm for track finding in mobile robotics. An autonomous mobile robot, EN20, was designed to test its behaviour. The odometry navigation method was used. The benefits of fuzzy control are evident in the mobile robot's behaviour; they are obtained when several physical variables are controlled at the same time on the basis of several input variables. In our case, there are two input variables (heading angle and distance) and two output variables (the angular velocities of the left and right wheels). The autonomous mobile robot thus moves with human-like logic.

  3. Efficient Reactive Navigation with Exact Collision Determination for 3D Robot Shapes

    Directory of Open Access Journals (Sweden)

    Mariano Jaimez

    2015-05-01

    Full Text Available This paper presents a reactive navigator for wheeled mobile robots moving on a flat surface which takes into account both the actual 3D shape of the robot and the 3D surrounding obstacles. The robot volume is modelled by a number of prisms consecutive in height, and the detected obstacles, which can be provided by different kinds of range sensor, are segmented into these heights. Then, the reactive navigation problem is tackled by a number of concurrent 2D navigators, one for each prism, which are consistently and efficiently combined to yield an overall solution. Our proposal for each 2D navigator is based on the concept of the “Parameterized Trajectory Generator” which models the robot shape as a polygon and embeds its kinematic constraints into different motion models. Extensive testing has been conducted in office-like and real house environments, covering a total distance of 18.5 km, to demonstrate the reliability and effectiveness of the proposed method. Moreover, additional experiments are performed to highlight the advantages of a 3D-aware reactive navigator. The implemented code is available under an open-source licence.

  4. Outdoor navigation of an inspection robot by means of global-orientation feedback; Navegacion exterior de un robot de inspeccion mediante retroalimentacion de la orientacion global

    Energy Technology Data Exchange (ETDEWEB)

    Segovia de los R, A.; Bucio V, F. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico); Garduno G, M. [Instituto Tecnologico de Toluca, Av. Instituto Tecnologico s/n, Metepec, Estado de Mexico 52140 (Mexico)]. e-mail: asegovia@nuclear.inin.mx

    2008-07-01

    The objective of this article is to present an inspection system in which a mobile robot navigates outdoors by means of feedback of its instantaneous orientation with respect to a global reference throughout the displacement. The robot evolves obeying commands coming from a tele-operator, who indicates the desired headings through the operation console; the robot follows them using information provided by an electronic compass. The mobile robot employed in the experiments is a Pioneer 3-AT, which has the suite of sensors required for more autonomous operation. The electronic compass offers geographic information coded in the SPI format, for which reason an inexpensive general-purpose microcontroller (μC) was employed to translate the information to the RS-232 format natively used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program generates the commands for the robot's navigation control and displays a graphical user interface used to receive the operator's orders. This research is part of a more ambitious project that aims at an inspection and monitoring system for sites where high radiation levels could exist, for which an outdoor navigation system could be very useful. Besides the robot's own sensors, the complete system will include a number of sensors appropriate to the variables to be monitored. The resulting values of such measurements will be visualized in real time in the graphical user interface, thanks to bidirectional wireless communication between the operating station and the mobile robot. (Author)

  5. Evolutionary programming-based univector field navigation method for fast mobile robots.

    Science.gov (United States)

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot's orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out for a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.

  6. Autonomous navigation system for mobile robots of inspection

    International Nuclear Information System (INIS)

    Angulo S, P.; Segovia de los Rios, A.

    2005-01-01

    One of the goals in robotics is the protection of human personnel who work in dangerous areas or areas of difficult access, as is the case in the nuclear industry, where there exist areas that, by their very nature, are inaccessible to human personnel, such as areas with high radiation levels or high temperatures. It is in these cases that an inspection system is indispensable, one able to carry out a sampling of the area in order to determine whether the area is accessible to human personnel. In this situation it is possible to use an inspection system based on a mobile robot, preferably with autonomous navigation, thereby avoiding the exposure of human personnel. The present work proposes an autonomous navigation model for a Pioneer 2-DXe mobile robot based on a wall-following algorithm using the paradigm of fuzzy logic. (Author)

  7. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  8. Mobile Robot Navigation Based on Q-Learning Technique

    Directory of Open Access Journals (Sweden)

    Lazhar Khriji

    2011-03-01

    Full Text Available This paper shows how the Q-learning approach can be used successfully to deal with the problem of mobile robot navigation. In real situations where a large number of obstacles are involved, the normal Q-learning approach encounters two major problems due to the excessively large state space. First, learning the Q-values in tabular form may be infeasible because of the excessive amount of memory needed to store the table. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. In this paper, we propose a navigation approach for mobile robots in which prior knowledge is used within Q-learning. We address the issue of individual behavior design using fuzzy logic. The strategy of behavior-based navigation reduces the complexity of the navigation problem by dividing it into small actions that are easier to design and implement. The Q-learning algorithm is applied to coordinate between these behaviors, which greatly reduces learning convergence times. Simulation and experimental results confirm convergence to the desired results in terms of saved time and computational resources.
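
    The tabular Q-learning backbone referred to above fits in a few lines; behaviors and fuzzy coordination aside, each step updates Q(s, a) toward the reward plus the discounted best value of the next state. Grid layout, rewards, and hyperparameters are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    H, W = 6, 8
    obstacles = {(1, 3), (2, 3), (3, 3), (4, 5)}
    goal = (5, 7)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
    Q = np.zeros((H, W, 4))
    alpha, gamma, eps = 0.5, 0.95, 0.2

    def step(s, a):
        ni, nj = s[0] + actions[a][0], s[1] + actions[a][1]
        if not (0 <= ni < H and 0 <= nj < W) or (ni, nj) in obstacles:
            return s, -1.0                          # bump: stay put, penalty
        if (ni, nj) == goal:
            return (ni, nj), 10.0                   # reached the goal
        return (ni, nj), -0.1                       # small step cost

    for episode in range(500):
        s = (0, 0)
        for _ in range(400):                        # cap episode length
            a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r = step(s, a)
            # Tabular Q-learning update
            Q[s][a] += alpha * (r + gamma * Q[s2].max() - Q[s][a])
            s = s2
            if s == goal:
                break

    print(np.argmax(Q[0, 0]))   # greedy first move from the start cell
    ```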

  9. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    Science.gov (United States)

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other familiar multi-link robots.

  10. Mapping, Navigation, and Learning for Off-Road Traversal

    DEFF Research Database (Denmark)

    Konolige, Kurt; Agrawal, Motilal; Blas, Morten Rufus

    2009-01-01

    The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments, using stereo vision, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.

  11. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    Science.gov (United States)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.

  12. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  13. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panoramas as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
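
    A minimal sketch of the navigation-via-classification idea: a small CNN maps one image to a handful of discrete heading classes. Layer sizes and the number of classes below are assumptions, not the architecture the paper trains on the Spherical-Navi dataset.

    ```python
    import torch
    import torch.nn as nn

    K = 5   # e.g. hard left, left, straight, right, hard right (assumed)

    class HeadingNet(nn.Module):
        def __init__(self, k=K):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, k)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    net = HeadingNet()
    img = torch.rand(1, 3, 128, 128)        # stand-in for a fisheye panorama
    probs = net(img).softmax(dim=1)
    heading = probs.argmax(dim=1)           # steer toward the best-scoring class
    print(probs, heading)
    ```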

  14. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Science.gov (United States)

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other familiar multi-link robots. PMID:29547515

  15. Navigation system for robot-assisted intra-articular lower-limb fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja

    2016-10-01

    In the surgical treatment of lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of bone fragments. However, optimal correction in 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction of the fractures is then performed by our robotic system based on the virtual reduction. 3D models of bone fragments are segmented from the CT scan. Fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction of the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker enabling real-time 3D guidance. The motion commands for the robot connected to the bone fragment are generated, and the fracture is physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each one of them was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). The corresponding physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text] when the robot reduced the fracture. Experimental outcomes demonstrate the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.
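
    The core geometric step, mapping the surgeon's virtual reduction onto robot motion, can be sketched with homogeneous transforms: the corrective motion is the planned target pose composed with the inverse of the tracked current pose. The poses below are illustrative values, not clinical data.

    ```python
    import numpy as np

    def pose(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    def rot_z(deg):
        a = np.radians(deg)
        return np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1]])

    # Tracked pose of the fragment and the surgeon's virtually planned pose,
    # both expressed in the optical tracker frame (hypothetical values, mm)
    T_current = pose(rot_z(12.0), np.array([10.0, -4.0, 2.0]))
    T_target = pose(rot_z(0.0), np.array([8.5, -3.0, 2.0]))

    # Corrective motion to command to the robot holding the fragment:
    # first undo the current pose, then apply the planned one
    T_corr = T_target @ np.linalg.inv(T_current)

    residual = np.linalg.inv(T_target) @ (T_corr @ T_current)
    print(np.allclose(residual, np.eye(4)))   # True: fragment lands on target
    ```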

  16. Neural Network Based Reactive Navigation for Mobile Robot in Dynamic Environment

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.; Ripel, T.

    2013-01-01

    Roč. 198, č. 2013 (2013), s. 108-113 ISSN 1012-0394 Institutional research plan: CEZ:AV0Z20760514 Institutional support: RVO:61388998 Keywords : mobile robot * reactive navigation * artificial neural networks Subject RIV: JD - Computer Applications, Robotics

  17. Visual Guided Navigation

    National Research Council Canada - National Science Library

    Banks, Martin

    1999-01-01

    .... Similarly, the problem of visual navigation is the recovery of an observer's self-motion with respect to the environment from the moving pattern of light reaching the eyes and the complex of extra...

  18. FroboMind, proposing a conceptual architecture for agricultural field robot navigation

    DEFF Research Database (Denmark)

    Jensen, Kjeld; Bøgild, Anders; Nielsen, Søren Hundevadt

    2011-01-01

    The aim of this work is to propose a conceptual system architecture, the Field Robot Cognitive System Architecture (FroboMind), which can provide the flexibility and extensibility required for further research and development within cognition-based navigation of plant nursing robots.

  19. Image-based navigation for a robotized flexible endoscope

    NARCIS (Netherlands)

    van der Stap, N.; Slump, Cornelis H.; Broeders, Ivo Adriaan Maria Johannes; van der Heijden, Ferdinand; Luo, Xiongbiao; Reichl, Tobias; Mirota, Daniel; Soper, Timothy

    2014-01-01

    Robotizing flexible endoscopy enables image-based control of endoscopes. Especially during high-throughput procedures, such as a colonoscopy, navigation support algorithms could improve procedure turnaround and ergonomics for the endoscopist. In this study, we have developed and implemented a

  20. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Directory of Open Access Journals (Sweden)

    Xu Zhao

    2018-03-01

    Full Text Available A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and makes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other familiar multi-link robots.

  1. Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments.

    Science.gov (United States)

    Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting

    2015-09-01

    This paper presents a method that allows two wheeled mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot performing OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.

  2. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401

  3. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides a new elicitation for mobile robot navigation since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
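
    The time-delay-estimation step used by the hearing robots can be sketched with plain cross-correlation: the lag maximizing the correlation between the two microphone signals gives the arrival-time difference, which maps to a bearing through the speed of sound and the microphone spacing. Sample rate and geometry below are assumptions.

    ```python
    import numpy as np

    FS = 44100.0     # sample rate (Hz), assumed
    C = 343.0        # speed of sound (m/s)
    D = 0.30         # microphone spacing (m), assumed

    def tde_bearing(sig_left, sig_right):
        """Bearing estimate from the inter-microphone time delay.

        The lag maximizing the cross-correlation is the arrival-time
        difference; positive bearings point toward the right microphone.
        """
        corr = np.correlate(sig_left, sig_right, mode="full")
        lag = np.argmax(corr) - (len(sig_right) - 1)    # in samples
        tau = lag / FS                                  # in seconds
        sin_theta = np.clip(tau * C / D, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_theta))

    # Synthetic test: a noise burst arriving 5 samples later at the right mic
    rng = np.random.default_rng(0)
    src = rng.standard_normal(2048)
    delay = 5
    left = src
    right = np.concatenate([np.zeros(delay), src[:-delay]])
    print(tde_bearing(left, right))   # about -7.4 deg: source on the left side
    ```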

  4. Particle Filter for Fault Diagnosis and Robust Navigation of Underwater Robot

    DEFF Research Database (Denmark)

    Zhao, Bo; Skjetne, Roger; Blanke, Mogens

    2014-01-01

    A particle-filter-based robust navigation with fault diagnosis is designed for an underwater robot, where 10 failure modes of sensors and thrusters are considered. The nominal underwater robot and its anomaly are described by a switching-mode hidden Markov model. By extensively running a particle filter on the model, fault diagnosis and robust navigation are achieved. Closed-loop full-scale experimental results show that the proposed method is robust, can diagnose faults effectively, and can provide good state estimation even in cases where multiple faults occur. Comparing with other methods...
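
    The filtering backbone of such an approach, leaving the switching failure modes aside, is the standard bootstrap particle filter: propagate particles through the motion model, weight them by the measurement likelihood, and resample when the effective sample size collapses. A 1D toy version with an assumed range measurement:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000
    particles = rng.normal(0.0, 1.0, N)         # initial position guesses
    weights = np.full(N, 1.0 / N)

    def pf_step(particles, weights, u, z, q=0.1, r=0.2):
        # Predict: propagate each particle through the motion model
        particles = particles + u + rng.normal(0.0, q, len(particles))
        # Weight: likelihood of the range-to-origin measurement per particle
        weights = weights * np.exp(-0.5 * ((z - np.abs(particles)) / r) ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
            idx = rng.choice(len(particles), len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    true_x = 0.0
    for _ in range(50):
        true_x += 0.3                           # vehicle moves at known speed
        z = abs(true_x) + rng.normal(0.0, 0.2)  # noisy range to a beacon at 0
        particles, weights = pf_step(particles, weights, 0.3, z)

    print(np.sum(particles * weights))          # estimate near true_x = 15.0
    ```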

  5. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research effort between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems. The inner structure of the latter, complete with the functionalities, is also given in detail

  6. Dynamic Parameter Update for Robot Navigation Systems through Unsupervised Environmental Situational Analysis

    OpenAIRE

    Shantia, Amirhossein; Bidoia, Francesco; Schomaker, Lambert; Wiering, Marco

    2017-01-01

    A robot’s local navigation is often done through forward simulation of robot velocities and measuring the possible trajectories against safety, distance to the final goal and the generated path of a global path planner. Then, the computed velocities vector for the winning trajectory is executed on the robot. This process is done continuously through the whole navigation process and requires an extensive amount of processing. This only allows for a very limited sampling space. In this paper, w...

  7. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique

    Directory of Open Access Journals (Sweden)

    Jun-Qiang Wang

    2017-01-01

    Conclusions: Accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible.

  8. A biologically inspired meta-control navigation system for the Psikharpax rat robot

    International Nuclear Information System (INIS)

    Caluwaerts, K; Staffa, M; N’Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M

    2012-01-01

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment—recognized as new contexts—and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics. (paper)

  9. Real-Time Motion Planning and Safe Navigation in Dynamic Multi-Robot Environments

    National Research Council Canada - National Science Library

    Bruce, James R

    2006-01-01

    .... While motion planning has been used for high level robot navigation, or limited to semi-static or single-robot domains, it has often been dismissed for the real-time low-level control of agents due...

  10. Maps managing interface design for a mobile robot navigation governed by a BCI

    International Nuclear Information System (INIS)

    Auat Cheein, Fernando A; Carelli, Ricardo; Celeste, Wanderley Cardoso; Freire Bastos, Teodiano; Di Sciascio, Fernando

    2007-01-01

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs the mobile robot's movements. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is executed in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While mapping, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.

  11. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. The novel concept is implemented in a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and their impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  12. Image-Based Particle Filtering For Robot Navigation In A Maize Field

    NARCIS (Netherlands)

    Hiremath, S.; Evert, van F.K.; Heijden, van der G.W.A.M.; Braak, ter C.J.F.; Stein, A.

    2012-01-01

    Autonomous navigation of a robot in an agricultural field is a challenge as the robot is in an environment with many sources of noise. This includes noise due to uneven terrain, varying shapes, sizes and colors of the plants, imprecise sensor measurements and effects due to wheel-slippage. The

  13. Reactive, Safe Navigation for Lunar and Planetary Robots

    Science.gov (United States)

    Utz, Hans; Ruland, Thomas

    2008-01-01

    When humans return to the moon, astronauts will be accompanied by robotic helpers. Enabling robots to safely operate near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow the system to run terrain analysis and obstacle avoidance at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module, enforcing the safety of the chosen actions. The key components of the system are discussed and test results are presented.

  14. Maps managing interface design for a mobile robot navigation governed by a BCI

    Energy Technology Data Exchange (ETDEWEB)

    Auat Cheein, Fernando A [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Carelli, Ricardo [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Celeste, Wanderley Cardoso [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Freire Bastos, Teodiano [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Di Sciascio, Fernando [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina)

    2007-11-15

    In this paper, a map-managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs a mobile robot's movements. If the robot is inside a known environment, the user can load a map from the map-managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is executed to obtain a probabilistic grid map of that environment. That map is then stored in the map database for future navigation. While the SLAM algorithm runs, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.
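
    As a rough illustration of the load-or-SLAM decision described above, the following Python sketch models a map database that returns a stored grid when the environment is known and falls back to running SLAM otherwise. The class and function names (MapDatabase, run_grid_slam) are hypothetical stand-ins, not the authors' actual interfaces.

        class MapDatabase:
            """Stores one occupancy grid per known environment."""

            def __init__(self):
                self.maps = {}                    # environment id -> grid

            def load(self, env_id):
                return self.maps.get(env_id)      # None if unknown

            def store(self, env_id, grid):
                self.maps[env_id] = grid


        def get_navigation_map(db, env_id, run_grid_slam):
            """Return a map for env_id, running SLAM only when none is stored."""
            grid = db.load(env_id)
            if grid is None:
                # Unknown environment: build a probabilistic grid map while
                # the user teleoperates the robot through the BCI (assumed
                # to happen inside run_grid_slam), then keep it for reuse.
                grid = run_grid_slam()
                db.store(env_id, grid)
            return grid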

  15. Volunteers Oriented Interface Design for the Remote Navigation of Rescue Robots at Large-Scale Disaster Sites

    Science.gov (United States)

    Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi

    This paper aims at constructing an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN)-based mouse-screen interface. The former consists of a force-feedback steering control and a wall of six monitors; it provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Experimental results show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of practice. The steering wheel interface offers high navigation speed in open areas, regardless of the terrain and surface conditions of the disaster site. The mouse-screen interface is good at precise navigation in complex structures, while placing little strain on operators. The two interfaces are designed to switch into each other at any time to provide a combined, efficient navigation method.

  16. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    Science.gov (United States)

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

    Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback, using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movement, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implements the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway.

  17. Posture estimation for autonomous weeding robots navigation in nursery tree plantations

    DEFF Research Database (Denmark)

    Khot, Law Ramchandra; Tang, Lie; Blackmore, Simon

    2005-01-01

    errors of the system, in the x and y directions, for all four lines. Further, the errors were observed mostly in the robot's direction of travel. When the robot was navigated through the poles, the positioning accuracy of the system increased after filtering. The accuracy...

  18. Path Planning and Navigation for Mobile Robots in a Hybrid Sensor Network without Prior Location Information

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-03-01

    Full Text Available In a hybrid wireless sensor network with mobile and static nodes, which have no prior geographical knowledge, successful navigation for mobile robots is one of the main challenges. In this paper, we propose two novel navigation algorithms for outdoor environments, namely the RAC and the IMAP algorithms, which permit robots to travel from one static node to another along a planned path in the sensor field. With these, the robot can navigate without the help of a map, GPS or extra sensor modules, using only the received signal strength indication (RSSI) and odometry; our algorithms therefore have the advantage of being cost-effective. In addition, a path planning algorithm to schedule mobile robots' travelling paths is presented, which favours shorter and more robust paths by considering the RSSI-distance characteristics. The simulations and experiments conducted with an autonomous mobile robot show the effectiveness of the proposed algorithms in an outdoor environment.
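
    The abstract does not give the internals of RAC and IMAP, but RSSI-based ranging of this kind typically rests on the log-distance path-loss model. The sketch below (Python) inverts that model to estimate distance from averaged RSSI readings; the reference RSSI and path-loss exponent are illustrative values that would need calibration on the actual radios.

        import numpy as np

        RSSI_D0 = -40.0   # assumed RSSI at reference distance d0 = 1 m (dBm)
        N_EXP = 2.7       # assumed outdoor path-loss exponent (typically 2-4)

        def distance_from_rssi(rssi_dbm, rssi_d0=RSSI_D0, n=N_EXP, d0=1.0):
            """Invert rssi = rssi_d0 - 10*n*log10(d/d0) to get d in metres."""
            return d0 * 10.0 ** ((rssi_d0 - rssi_dbm) / (10.0 * n))

        # Averaging several noisy readings before inversion damps the
        # fluctuations that make raw RSSI unreliable.
        readings = np.array([-63.0, -61.5, -64.2, -62.8])
        print(distance_from_rssi(readings.mean()))   # ~7 m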

  19. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. It is a device that can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movements can be monitored against a blueprint, so that the path the robot chooses can be tracked. This data is sent over a wireless network. For vision, the robot uses a high-resolution camera, making it easier for the operator to control the robot and observe the surroundings.

  20. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    Full Text Available In machine vision, occlusion is a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view, vision-based algorithm for the development of an occlusion-free map of an indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for a longer period.

  1. Mobile Robot Navigation in a Corridor Using Visual Odometry

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad

    2009-01-01

    Incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor...

  2. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised mode. The images of the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines; the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During actual tests, the robot can follow the runway centerline outdoors and accurately avoid obstacles in the room. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
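
    The two augmentations named above are simple to reproduce. A minimal NumPy sketch, with illustrative noise levels rather than the authors' training settings:

        import numpy as np

        def add_gaussian_noise(img, sigma=10.0):
            """img: HxWxC uint8 image; returns a copy with Gaussian noise."""
            noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
            return np.clip(noisy, 0, 255).astype(np.uint8)

        def add_salt_and_pepper(img, ratio=0.02):
            """Set a random fraction of pixels to pure black or white."""
            noisy = img.copy()
            mask = np.random.rand(*img.shape[:2])
            noisy[mask < ratio / 2] = 0          # pepper
            noisy[mask > 1 - ratio / 2] = 255    # salt
            return noisy

    Applying such transforms to copies of the training images enlarges the data set without new collection effort, which is what makes them a cheap guard against overfitting.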

  3. Autonomous navigation system for mobile robots of inspection; Sistema de navegacion autonoma para robots moviles de inspeccion

    Energy Technology Data Exchange (ETDEWEB)

    Angulo S, P. [ITT, Metepec, Estado de Mexico (Mexico); Segovia de los Rios, A. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: pedrynteam@hotmail.com

    2005-07-01

    One of the goals in robotics is the protection of human personnel who work in dangerous areas or areas of difficult access. This is the case in the nuclear industry, where there are areas that are by their very nature inaccessible to human personnel, such as areas with high radiation levels or high temperatures. In these cases it is indispensable to use an inspection system able to carry out a sampling of the area in order to determine whether it can be made accessible to human personnel. In this situation it is possible to use an inspection system based on a mobile robot, preferably with autonomous navigation, thus avoiding the exposure of human personnel. The present work proposes an autonomous navigation model for a Pioneer 2-DXe mobile robot based on a wall-following algorithm using the paradigm of fuzzy logic. (Author)
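
    As a flavour of what a fuzzy wall-following rule base looks like, here is a minimal Python sketch: triangular membership functions classify the lateral wall distance and a centre-of-gravity defuzzification yields a steering command. Breakpoints and gains are illustrative, not taken from the paper.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def steer(side_dist, target=0.5):
            """Keep the wall at `target` metres on the right.
            Positive output turns left, away from the wall."""
            too_close = tri(side_dist, 0.0, 0.2, target)
            ok        = tri(side_dist, 0.2, target, 0.8)
            too_far   = tri(side_dist, target, 0.8, 1.5)
            # Centre-of-gravity defuzzification over three rules.
            num = too_close * 0.6 + ok * 0.0 + too_far * (-0.6)
            den = too_close + ok + too_far
            return num / den if den > 0 else 0.0

        print(steer(0.3))   # close to the wall -> positive (turn left)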

  4. Visual dataflow language for educational robots programming

    OpenAIRE

    ZIMIN G.A.; MORDVINOV D.A.

    2016-01-01

    Visual domain-specific languages usually have a low entry barrier. Sometimes even children can program in such languages by working with visual representations. This is widely used in the educational robotics domain, where the most commonly used programming environments are visual. The paper describes a novel dataflow visual programming environment for embedded robotic platforms. Obviously, complex dataflow languages are not simple to understand. The purpose of our tool is to "bridge" between light...

  5. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation

    Directory of Open Access Journals (Sweden)

    Xin Yuan

    2017-05-01

    Full Text Available In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Hereby, the prediction and update stages (which exist as well in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, avoiding the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map updating on loop closure.
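
    The distinctive stage named above is the augmentation of the joint state when a new landmark is first observed. The following NumPy sketch shows that step for a generic range-bearing EKF-SLAM filter; it illustrates the mechanism, not the paper's exact AEKF formulation.

        import numpy as np

        def augment(x, P, r, b, R):
            """Append a landmark observed at range r, bearing b.
            x: [px, py, theta, l1x, l1y, ...]; P: covariance; R: 2x2 noise."""
            px, py, th = x[0], x[1], x[2]
            lx = px + r * np.cos(th + b)
            ly = py + r * np.sin(th + b)
            # Jacobians w.r.t. robot pose (Gx) and measurement (Gz).
            Gx = np.array([[1, 0, -r * np.sin(th + b)],
                           [0, 1,  r * np.cos(th + b)]])
            Gz = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                           [np.sin(th + b),  r * np.cos(th + b)]])
            n = len(x)
            x_new = np.append(x, [lx, ly])
            P_new = np.zeros((n + 2, n + 2))
            P_new[:n, :n] = P
            P_new[n:, :n] = Gx @ P[:3, :n]       # cross-covariance rows
            P_new[:n, n:] = P_new[n:, :n].T
            P_new[n:, n:] = Gx @ P[:3, :3] @ Gx.T + Gz @ R @ Gz.T
            return x_new, P_new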

  6. Visual Navigation of Complex Information Spaces

    Directory of Open Access Journals (Sweden)

    Sarah North

    1995-11-01

    Full Text Available The authors lay the foundation for the introduction of a visual navigation aid to assist computer users in the direct manipulation of complex information spaces. By exploring present research on scientific data visualisation and making a case for improved information visualisation tools, they introduce the design of an improved information visualisation interface utilizing a dynamic slider, called Visual-X, incorporating icons with bindable attributes (glyphs). Exploring the improvement that these data visualisations make to a computing environment, the authors conduct an experiment to compare the performance of subjects who use traditional interfaces and Visual-X. The methodology is presented, and conclusions reveal that Visual-X appears to be a promising approach to providing users with a navigation tool that does not overload their cognitive processes.

  7. Navigating nuclear science: Enhancing analysis through visualization

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H.; Berkel, J. van; Johnson, D.K.; Wylie, B.N.

    1997-09-01

    Data visualization is an emerging technology with high potential for addressing the information overload problem. This project extends the data visualization work of the Navigating Science project by coupling it with more traditional information retrieval methods. A citation-derived landscape was augmented with documents using a text-based similarity measure to show the viability of extension into datasets where citation lists do not exist. Landscapes, showing hills where clusters of similar documents occur, can be navigated, manipulated and queried in this environment. The capabilities of this tool provide users with an intuitive explore-by-navigation method not currently available in today's retrieval systems.

  8. A New Classification Technique in Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Bambang Tutuko

    2011-12-01

    Full Text Available This paper presents a novel pattern recognition algorithm that uses the weightless neural network (WNN) technique. The technique plays the role of a situation classifier, judging the situation around the mobile robot's environment and making control decisions in mobile robot navigation. The WNN technique is chosen due to significant advantages over conventional neural networks: it can be easily implemented in hardware using standard RAM, is faster in the training phase and works with small resources. Using a simple classification algorithm, similar data are grouped with each other, making it possible to attach similar data classes to specific local areas in the mobile robot environment. This strategy is demonstrated on a simple mobile robot powered by low-cost microcontrollers with 512 bytes of RAM and low-cost sensors. Experimental results show that, as the number of neurons increases, the average environmental recognition rate rises from 87.6% to 98.5%. The WNN technique allows the mobile robot to recognize many different environmental patterns and avoid obstacles in real time. Moreover, using the proposed WNN technique the mobile robot successfully reached the goal in a dynamic environment, compared to a fuzzy logic technique and logic functions, coping with uncertainty in sensor readings and achieving good performance in control actions with a 0.56% error rate in mobile robot speed.
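
    RAM-based weightless classifiers of this kind are usually WiSARD-style discriminators: the binary input is split into small tuples, each tuple addresses one RAM, and the class response is the number of RAMs that recognise their address. A compact Python sketch of that idea (illustrative, not the authors' exact network):

        import random

        class Discriminator:
            def __init__(self, input_bits, tuple_bits=4, seed=0):
                rng = random.Random(seed)
                idx = list(range(input_bits))
                rng.shuffle(idx)                  # fixed random input mapping
                self.tuples = [idx[i:i + tuple_bits]
                               for i in range(0, input_bits, tuple_bits)]
                self.rams = [set() for _ in self.tuples]

            def _addresses(self, bits):
                for t, ram in zip(self.tuples, self.rams):
                    yield ram, tuple(bits[i] for i in t)

            def train(self, bits):
                for ram, addr in self._addresses(bits):
                    ram.add(addr)

            def response(self, bits):
                return sum(addr in ram for ram, addr in self._addresses(bits))

        # One discriminator per situation class; the highest response wins.
        d = Discriminator(input_bits=16)
        d.train([1, 0] * 8)
        print(d.response([1, 0] * 8), d.response([0, 1] * 8))   # 4 0

    Because training only writes bits into RAM, learning is a single pass over the data, which is why such networks fit microcontrollers with a few hundred bytes of memory.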

  9. Optical Flow based Robot Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Kahlouche Souhila

    2008-11-01

    Full Text Available In this paper we develop an algorithm for visual obstacle avoidance for an autonomous mobile robot. The input of the algorithm is an image sequence grabbed by a camera embedded on the B21r robot in motion. Optical flow information is then extracted from the image sequence and used in the navigation algorithm. The optical flow provides very important information about the robot's environment, such as the arrangement of obstacles, the robot heading, the time to collision and depth. The strategy consists of balancing the amount of left-side and right-side flow to avoid obstacles; this technique allows robot navigation without any collision with obstacles. The robustness of the algorithm is shown through several examples.
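
    The balance strategy reduces to comparing summed flow magnitudes in the two image halves. A sketch using OpenCV's dense Farneback flow (the paper does not state which flow estimator was used, and the gain k is an illustrative tuning parameter):

        import cv2
        import numpy as np

        def balance_steering(prev_gray, gray, k=1.0):
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)      # per-pixel flow magnitude
            half = mag.shape[1] // 2
            left, right = mag[:, :half].sum(), mag[:, half:].sum()
            # Nearer obstacles generate more flow; steer towards the side
            # with less flow. Positive values here command a right turn.
            return k * (left - right) / (left + right + 1e-9)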

  10. Implementation of a Mobile Robot Platform Navigating in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Belaidi Hadjira

    2017-01-01

    Full Text Available Currently, the problems of autonomous wheeled mobile robots in unknown environments are a great challenge. Obstacle avoidance and path planning are the backbone of autonomous control, as they make the robot able to reach its destination without collision. Dodging obstacles in dynamic and uncertain environments is the most complex part of the obstacle avoidance and path planning tasks. This work deals with the implementation of a simple approach to static and dynamic obstacle avoidance. The robot starts by executing a collision-free optimal path loaded into its controller; it then uses its sensors to avoid the unexpected obstacles which may appear along that path during navigation.

  11. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  12. Behavior-Based and Fuzzy Logic Navigation in an Autonomous Mobile Robot Simulation

    Directory of Open Access Journals (Sweden)

    Rendyansyah

    2016-03-01

    Full Text Available A mobile robot is a robotic mechanism that is able to move automatically. Automatic movement of the robot requires a navigation system. Navigation is a method for determining the robot's motion. In this study, robot navigation behaviors were developed using fuzzy logic. The behavior of the robot is divided into several modules, such as moving straight, avoiding obstacles, following walls, following corridors and handling U-shaped conditions. A mobile robot simulation was designed in a visual programming environment. The robot is equipped with seven distance sensors, divided into several groups to test the designed behaviors, so that the robot's behavior generates speed and steering control. Experiments that have been conducted show that the simulated mobile robot runs smoothly under many conditions. This proves that the implemented behaviors and fuzzy logic techniques on the robot work well.

  13. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded-Up Robust Features (SURF) is used to position a robot with respect to an environment and aid in vision-based robotic navigation. During the course of navigation, irregularities in the terrain, especially in an outdoor environment, may cause the robot to deviate from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, with subsequent restoration by corrective operations. This algorithm is executed in parallel to the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
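
    The deviation check itself amounts to matching features between a reference view and the current view and measuring the horizontal shift. The sketch below uses ORB as a freely available stand-in for SURF (SURF requires the non-free opencv-contrib build); the minimum-match count is an illustrative threshold.

        import cv2
        import numpy as np

        orb = cv2.ORB_create(nfeatures=500)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def horizontal_deviation(ref_img, cur_img):
            """Median horizontal shift (pixels) between reference and current view."""
            k1, d1 = orb.detectAndCompute(ref_img, None)
            k2, d2 = orb.detectAndCompute(cur_img, None)
            if d1 is None or d2 is None:
                return None                       # not enough texture to decide
            matches = bf.match(d1, d2)
            if len(matches) < 10:
                return None
            shifts = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
            return float(np.median(shifts))

    A correction routine would then issue a compensating turn whenever the returned shift exceeds a tolerance, which is the "dynamic auto correction" role described in the title.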

  14. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of grey-level concepts. An open-loop fuzzy control system is developed for classifying the traversability of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system able to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  15. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of grey-level concepts. An open-loop fuzzy control system is developed for classifying the traversability of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system able to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  16. ANFIS -Based Navigation for HVAC Service Robot with Image Processing

    International Nuclear Information System (INIS)

    Salleh, Mohd Zoolfadli Md; Rashid, Nahrul Khair Alang Md; Mustafah, Yasir Mohd

    2013-01-01

    In this paper, we present ongoing work on the autonomous navigation of a mobile service robot for Heating, Ventilation and Air Conditioning (HVAC) ducting. A CCD camera mounted on the front end of our robot is used to analyze duct openings (blob analysis) in order to differentiate them from other landmarks (blower fans, air outlets, etc.). The distance between the robot and duct openings is measured using an ultrasonic sensor. The chosen controller is ANFIS, whose architecture accepts three inputs (recognition of duct openings, robot position and distance) and outputs the maneuver direction (left or right). Forty-five membership functions are created, producing 46 training epochs. In order to demonstrate the functionality of the system, a working prototype was developed and tested inside HVAC ducting in the ROBOCON Lab, IIUM.

  17. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
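
    The coupling point of the two filters is easy to picture: the position fixed by the photogrammetric resection enters the Kalman filter as an external measurement that corrects the inertial prediction. A position-only sketch of that update step (simplified relative to the full navigation state used in the paper):

        import numpy as np

        def kf_position_update(x, P, z, R):
            """x: predicted 3D position, P: its covariance,
            z: LSA resection position fix, R: covariance of the fix."""
            H = np.eye(3)                    # the fix observes position directly
            y = z - H @ x                    # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(3) - K @ H) @ P
            return x_new, P_new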

  18. Satellite Imagery Assisted Road-Based Visual Navigation System

    Science.gov (United States)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integrated approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery, and another provider can be used.

  19. Sensor fusion for mobile robot navigation

    International Nuclear Information System (INIS)

    Kam, M.; Zhu, X.; Kalata, P.

    1997-01-01

    The authors review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. Integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. The review describes integration techniques in two categories: low-level fusion is used for direct integration of sensory data, resulting in parameter and state estimates; high-level fusion is used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules. The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior based algorithms and approaches that borrow from information theory, Dempster-Shafer reasoning, fuzzy logic and neural networks. It points to several further-research needs, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion

  20. 3-D world modeling based on combinatorial geometry for autonomous robot navigation

    International Nuclear Information System (INIS)

    Goldstein, M.; Pin, F.G.; De Saussure, G.; Weisbin, C.R.

    1987-01-01

    In applications of robotics to surveillance and mapping at nuclear facilities, the scene to be described is three-dimensional. Using range data, a 3-D model of the environment can be built. First, each measured point on the object surface is surrounded by a solid sphere with a radius determined by the range to that point. The 3-D shapes of the visible surfaces are then obtained by taking the (Boolean) union of the spheres. Using this representation, distances to boundary surfaces can be calculated efficiently. This feature is particularly useful for navigation purposes. The efficiency of the proposed approach is illustrated by a simulation of a spherical robot navigating in a 3-D room with static obstacles
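
    The distance query that makes this representation attractive for navigation is a one-liner: for a point outside every sphere, the distance to the union's boundary is the minimum over spheres of (distance to centre minus radius). A small NumPy sketch:

        import numpy as np

        def distance_to_union(p, centers, radii):
            """p: (3,) query point; centers: (N,3); radii: (N,)."""
            d = np.linalg.norm(centers - p, axis=1) - radii
            return d.min()       # negative means p lies inside some sphere

        centers = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
        radii = np.array([0.5, 0.8])
        print(distance_to_union(np.array([1.0, 1.0, 0.0]), centers, radii))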

  1. Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network

    Directory of Open Access Journals (Sweden)

    Guan-Chun Luh

    2007-04-01

    Full Text Available This paper proposes a potential field immune network (PFIN) for dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collisions for a robot moving in a time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.

  2. A PSO-Optimized Reciprocal Velocity Obstacles Algorithm for Navigation of Multiple Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ziyad Allawi

    2015-03-01

    Full Text Available In this paper, a new optimization method for the Reciprocal Velocity Obstacles (RVO) algorithm is proposed. It uses the well-known Particle Swarm Optimization (PSO) for navigation control of multiple mobile robots with kinematic constraints. The RVO is used for collision avoidance between the robots, while PSO is used to choose the best path for the robot maneuver, avoiding collisions with other robots and reaching the goal faster. The method was applied to 24 mobile robots facing each other. Simulation results show that this method outperforms the ordinary RVO, in which the path is chosen heuristically.

  3. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  4. People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Tasaki

    2011-01-01

    Full Text Available Navigation robots must single out partners requiring navigation and move in cluttered environments where people walk around. Developing such robots requires two different kinds of people detection: detecting partners and detecting all moving people around the robot. For detecting partners, we design divided spaces based on spatial relationships and sensing ranges. By mapping the friendliness of each divided space based on stimuli from multiple sensors, so as to detect people actively calling the robot, robots detect partners in the space with the highest friendliness. For detecting moving people, we regard objects' floor boundary points in an omnidirectional image as obstacles. We classify obstacles as moving people by comparing the movement of each point with the robot movement using odometry data, dynamically changing the detection thresholds. Our robot detected 95.0% of partners while standing by and interacting with people, and detected 85.0% of moving people while moving, four times higher than previous methods.

  5. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  6. Learning probabilistic features for robotic navigation using laser sensors.

    Directory of Open Access Journals (Sweden)

    Fidel Aznar

    Full Text Available SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  7. Learning probabilistic features for robotic navigation using laser sensors.

    Science.gov (United States)

    Aznar, Fidel; Pujol, Francisco A; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N^2), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  8. Experiments in teleoperator and autonomous control of space robotic vehicles

    Science.gov (United States)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  9. Neuro-fuzzy navigation of a mobile robot in an unknown environment with reinforcement learning

    African Journals Online (AJOL)

    Neuro-fuzzy navigation of a mobile robot in an unknown environment with reinforcement learning (original title in French: Navigation neuro-floue d'un robot mobile dans un environnement inconnu avec un apprentissage par renforcement). W Nouibat, Z A Foitih, F A Haouari. No abstract available. Technologies Avancées Vol. 16 (2003): pp. 19-30.

  10. Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics.

    Science.gov (United States)

    Srinivasan, Mandyam V

    2011-04-01

    Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.

  11. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.

    Science.gov (United States)

    Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin

    2018-02-14

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach to dynamic target localisation using a single RF emitter, which is used as the basis for allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
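
    A minimal particle filter for the emitter position, in the spirit described above: particles are weighted by how well their distance to the robot explains an RSS-derived range estimate, and are resampled when the effective sample size collapses. Noise levels and the field size are illustrative, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        particles = rng.uniform(-10, 10, size=(1000, 2))   # candidate emitter xy
        weights = np.full(len(particles), 1.0 / len(particles))

        def pf_update(robot_xy, range_meas, sigma=1.5):
            global particles, weights
            d = np.linalg.norm(particles - robot_xy, axis=1)
            weights *= np.exp(-0.5 * ((d - range_meas) / sigma) ** 2)
            weights /= weights.sum()
            # Multinomial resampling when the effective sample size drops.
            if 1.0 / (weights ** 2).sum() < len(particles) / 2:
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx] + rng.normal(0, 0.2, particles.shape)
                weights[:] = 1.0 / len(particles)
            return (weights[:, None] * particles).sum(axis=0)   # posterior mean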

  12. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots

    Directory of Open Access Journals (Sweden)

    Tyrone Sherwin

    2018-02-01

    Full Text Available Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach to dynamic target localisation using a single RF emitter, which is used as the basis for allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.

  13. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which is strongly independent, highly precise and not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes and underwater robots, further stimulating research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and, after two lunar exploration missions, the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper surveys the development of computer vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: among the many ways to achieve obstacle avoidance in UAV navigation, methods based on computer vision, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning: using the acquired images, the UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also expounds three kinds of mainstream visual systems. (1) High-speed visual systems: these use a parallel structure, with which image detection and processing are

  14. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    Directory of Open Access Journals (Sweden)

    Eduardo Perdices

    2013-01-01

    Full Text Available Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people’s homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios.

  15. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    Science.gov (United States)

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333

  16. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

    Full Text Available This paper surveys developments over the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations and applications of using vision-based target tracking for autonomous vehicle navigation are presented in the introduction section; it can be concluded that developing robust, visual-target-tracking-based navigation algorithms is necessary for the broad application of autonomous vehicles. The paper then reviews recent techniques in three different categories: vision-based target tracking for the navigation of land, underwater and aerial vehicles. Next, the increasing trend of using data fusion for visual-target-tracking-based autonomous vehicle navigation is discussed; through data fusion, tracking performance is improved and becomes more robust. Based on the review, the remaining research challenges are summarized and future research directions are investigated.

  17. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages and the real data recorded using a mobile robotic platform carrying a dynamic and active-pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  18. Visual perception system and method for a humanoid robot

    Science.gov (United States)

    Wells, James W. (Inventor); Mc Kay, Neil David (Inventor); Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  19. Visual servo simulation of EAST articulated maintenance arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-03-15

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), is developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem of accurate positioning. This article presents an autonomous robot control approach to cope with the positioning problem: a visual servo approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper proved suitable for the development of robotics and computer vision applications.
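
    The excerpt does not give the EAMA-specific control law, but classic image-based visual servoing drives the camera with v = -lambda * pinv(L) * e, where e is the image-feature error and L the interaction matrix. A generic NumPy sketch of that law:

        import numpy as np

        def ibvs_velocity(features, desired, L, lam=0.5):
            """features/desired: stacked image coordinates (2N,);
            L: 2N x 6 interaction matrix mapping camera twist to feature
            velocities. Returns a 6-DoF camera velocity command."""
            e = features - desired                 # image-space error
            return -lam * np.linalg.pinv(L) @ e    # drive the error to zero

        def point_interaction(x, y, Z):
            """Interaction-matrix rows for one normalised point at depth Z."""
            return np.array([[-1/Z, 0, x/Z, x*y, -(1 + x**2), y],
                             [0, -1/Z, y/Z, 1 + y**2, -x*y, -x]])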

  20. Visual servo simulation of EAST articulated maintenance arm robot

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng; Wu, Huapeng

    2016-01-01

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), is developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem of accurate positioning. This article presents an autonomous robot control to cope with the positioning problem: a visual servo approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper is shown to be suitable for the development of robotic and computer vision applications.

  1. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have scarcely been evaluated in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
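
    The Bayes fusion at the heart of such a system can be illustrated on a discretised azimuth grid; the grid, the conditional independence of the two modalities, and the recursive reuse of the posterior are our assumptions, not details from the paper:

        import numpy as np

        def fuse_azimuth(prior, p_audio, p_vision):
            # Posterior over azimuth bins, assuming audio and visual
            # likelihoods are conditionally independent given the source.
            posterior = prior * p_audio * p_vision
            return posterior / posterior.sum()

    Feeding the returned posterior back as the prior for the next frame yields a simple recursive tracker of the speaker's direction.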

  2. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    Directory of Open Access Journals (Sweden)

    Xingguang Duan

    2018-01-01

    Full Text Available In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established based on the quaternion and iterative closest point registration algorithms. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop control method, the “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.

  3. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    Science.gov (United States)

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established based on the quaternion and iterative closest point registration algorithms. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop control method, the “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  4. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  5. A Sensor Based Navigation Algorithm for a Mobile Robot using the DVFF Approach

    Directory of Open Access Journals (Sweden)

    A. OUALID DJEKOUNE

    2009-06-01

    Full Text Available Autonomous mobile robots often operate in environments for which prior maps are incomplete or inaccurate. They require the safe execution of collision-free motion to a goal position. This paper addresses a complete navigation method for a mobile robot that moves in an unknown environment. A novel method called DVFF is proposed, combining the Virtual Force Field (VFF) obstacle avoidance approach and global path planning based on the D* algorithm. While D* generates global path information towards a goal position, the VFF local controller generates the admissible trajectories that ensure safe robot motion. Results and analysis from a battery of experiments with this new method implemented on an ATRV2 mobile robot are shown.
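
    The DVFF combination can be conveyed with a VFF-style sketch in which the next D* waypoint attracts the robot while sensed obstacle points repel it; the force law and gains below are generic choices, not the paper's exact formulation:

        import numpy as np

        def vff_heading(robot_xy, waypoint_xy, obstacles_xy,
                        k_att=1.0, k_rep=0.3):
            # Unit attraction towards the next waypoint of the D* path.
            att = waypoint_xy - robot_xy
            att = k_att * att / (np.linalg.norm(att) + 1e-9)
            # Repulsion from each sensed obstacle point, falling off
            # with the square of the distance.
            rep = np.zeros(2)
            for obs in obstacles_xy:
                d = robot_xy - obs
                dist = np.linalg.norm(d) + 1e-9
                rep += k_rep * d / dist**3
            force = att + rep
            return np.arctan2(force[1], force[0])   # heading command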

  6. Robotic Oncological Surgery: Technology That's Here to Stay?

    Directory of Open Access Journals (Sweden)

    HRH Patel

    2009-09-01

    Full Text Available A robot functioning in an environment may exhibit various forms of behaviour that emerge from the interaction with its environment through sense, control and plan activities. Hence, this paper introduces a behaviour-selection-based navigation and obstacle avoidance algorithm with an effective method for adapting robotic behaviour according to the environment conditions and the navigated terrain. The developed algorithm enables the robot to select the suitable behaviour in real time to avoid obstacles based on sensory information from visual and ultrasonic sensors, utilizing the robot's ability to step over obstacles and move between surfaces of different heights. In addition, it allows the robot to react in an appropriate manner to changing conditions, either by fine-tuning of behaviours or by selecting a different set of behaviours to increase the efficiency of the robot over time. The presented approach has been demonstrated on a quadruped robot in several different experimental environments and the paper provides an analysis of its performance.

  7. Box jellyfish use terrestrial visual cues for navigation

    DEFF Research Database (Denmark)

    Garm, Anders; Oskarsson, Magnus; Nilsson, Dan-Eric

    2011-01-01

    been a puzzle why they need such a complex set of eyes. Here we report that medusae of the box jellyfish Tripedalia cystophora are capable of visually guided navigation in mangrove swamps using terrestrial structures seen through the water surface. They detect the mangrove canopy by an eye type...... that is specialized to peer up through the water surface and that is suspended such that it is constantly looking straight up, irrespective of the orientation of the jellyfish. The visual information is used to navigate to the preferred habitat at the edge of mangrove lagoons....

  8. 2D navigation and pilotage of an autonomous mobile robot

    International Nuclear Information System (INIS)

    Favre, Patrick

    1989-01-01

    This thesis deals with the navigation and piloting of an autonomous robot in a known or weakly known two-dimensional environment without constraints. This involves generating an optimal path to a given goal and then computing the commands to follow this path. Several constraints are taken into account (obstacles, geometry and kinematics of the robot, dynamic effects). The first part defines the problem and presents the state of the art. The three following parts present a set of complementary solutions according to the level of knowledge of the environment and to the space constraints: - Case of a known environment: generation and following of a trajectory with respect to given path points. - Case of a weakly known environment: coupling of a command module interacting with the environment perception and a path planner, which allows fast motion of the robot. - Case of a constrained environment: a planner taking into account many constraints, such as the robot's shape, turning-radius limitation, backward motion and orientation. (author) [fr]

  9. Visual-perceptual mismatch in robotic surgery.

    Science.gov (United States)

    Abiri, Ahmad; Tao, Anna; LaRocca, Meg; Guan, Xingmin; Askari, Syed J; Bisley, James W; Dutson, Erik P; Grundfest, Warren S

    2017-08-01

    The principal objective of the experiment was to analyze the effects of the clutch operation of robotic surgical systems on the performance of the operator. The relative coordinate system introduced by the clutch operation can create a visual-perceptual mismatch, which can have a negative impact on a surgeon's performance. We also assess whether additional tactile sensory information reduces the effect of this mismatch on the performance of the operator. We asked 45 novice subjects to complete peg transfers using the da Vinci IS 1200 system with grasper-mounted normal force sensors. The task involves picking up a peg with one of the robotic arms, passing it to the other arm, and then placing it on the opposite side of the view. Subjects were divided into three groups: the aligned group (no mismatch), the misaligned group (10 cm z-axis mismatch), and the haptics-misaligned group (haptic feedback and z-axis mismatch). Each subject performed the task five times, during which the grip force, time to completion, and number of faults were recorded. Compared to the subjects that performed the tasks using a properly aligned controller/arm configuration, subjects with a single-axis misalignment showed significantly more peg drops (p = 0.011) and longer times to completion; the grip force data recorded by the sensors showed no difference between the groups. The visual-perceptual mismatch created by the misalignment of the robotic controls relative to the robotic arms has a negative impact on the operator of a robotic surgical system. The introduction of other sensory information and haptic feedback systems can potentially help reduce this effect.

  10. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot.

    Science.gov (United States)

    Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-02-24

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.

  11. Visual navigation using edge curve matching for pinpoint planetary landing

    Science.gov (United States)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to counter the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database for continuous estimation of absolute states. In addition, expressions for the bias of the relative state estimate are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance; these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  12. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors, used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  13. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

    Full Text Available Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors, used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
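
    The flavour of the fuzzy fusion controller can be conveyed with a deliberately reduced sketch: two distance inputs, three rules and weighted-average defuzzification instead of the paper's nine inputs and 24 rules (memberships and rule consequents are illustrative):

        import numpy as np

        def near(dist, full=0.1, zero=0.5):
            # 'Near' membership: 1 below `full` metres, falling linearly
            # to 0 at `zero` metres.
            return float(np.clip((zero - dist) / (zero - full), 0.0, 1.0))

        def fuzzy_avoid(left_dist, right_dist):
            near_l, near_r = near(left_dist), near(right_dist)
            clear = max(1.0 - max(near_l, near_r), 0.0)
            # Firing strengths paired with consequent wheel speeds.
            rules = [
                (near_l, (0.6, 0.1)),   # obstacle left  -> steer right
                (near_r, (0.1, 0.6)),   # obstacle right -> steer left
                (clear,  (0.6, 0.6)),   # path clear     -> go straight
            ]
            w = sum(s for s, _ in rules) + 1e-9
            v_left = sum(s * v[0] for s, v in rules) / w
            v_right = sum(s * v[1] for s, v in rules) / w
            return v_left, v_right      # weighted-average defuzzification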

  14. Dynamic Parameter Update for Robot Navigation Systems through Unsupervised Environmental Situational Analysis

    NARCIS (Netherlands)

    Shantia, Amirhossein; Bidoia, Francesco; Schomaker, Lambert; Wiering, Marco

    2017-01-01

    A robot’s local navigation is often done through forward simulation of robot velocities, measuring the possible trajectories against safety, distance to the final goal and the path generated by a global path planner. The computed velocity vector of the winning trajectory is then executed on the robot.

  15. An online visual loop closure detection method for indoor robotic navigation

    Science.gov (United States)

    Erhan, Can; Sariyanidi, Evangelos; Sencan, Onur; Temeltas, Hakan

    2015-01-01

    In this paper, we present an enhanced loop closure method based on image-to-image matching that relies on quantized local Zernike moments. In contrast to previous methods, our approach uses additional depth information to extract Zernike moments in a local manner. These moments are used to represent holistic shape information inside the image. The moments in complex space, extracted from both grayscale and depth images, are coarsely quantized. In order to find the similarity between two locations, a nearest neighbour (NN) classification algorithm is performed. Exemplary results and a practical implementation of the method are also given, with data gathered on a testbed using a Kinect. The method is evaluated on three datasets with different lighting conditions. The additional depth information increases the detection rate, especially in dark environments. The results show a successful, high-fidelity online method for visual place recognition as well as for closing navigation loops, which is crucial information for the well-known simultaneous localization and mapping (SLAM) problem. The technique is also practically applicable because of its low computational complexity and its ability to run in real time with high loop-closing accuracy.
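
    The nearest-neighbour matching step reduces to finding the stored descriptor closest to the query and accepting it under a distance threshold. A minimal sketch that treats the quantized Zernike descriptors simply as fixed-length vectors; the threshold is an assumed tuning parameter:

        import numpy as np

        def best_match(query_desc, stored_descs, threshold=0.2):
            # Distances from the query to every stored location descriptor.
            dists = np.linalg.norm(stored_descs - query_desc, axis=1)
            best = int(np.argmin(dists))
            # Accept the closest location only if it is similar enough;
            # otherwise report no loop closure.
            return best if dists[best] < threshold else None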

  16. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by using intelligent systems. Intelligent systems are to cell units what intelligent robots are to body components, and the two technologies have progressed in synchrony. Leveraging robotics and intelligent systems, applications cover a boundless range from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners some methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  17. Towards automated visual flexible endoscope navigation.

    Science.gov (United States)

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.

  18. Self-motivated visual scanning predicts flexible navigation in a virtual environment

    Directory of Open Access Journals (Sweden)

    Elisabeth Jeannette Ploran

    2014-01-01

    Full Text Available The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined whether visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position, as long as the landmarks within the environment remained consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  19. Visual navigation in adolescents with early periventricular lesions: knowing where, but not getting there.

    Science.gov (United States)

    Pavlova, Marina; Sokolov, Alexander; Krägeloh-Mann, Ingeborg

    2007-02-01

    Visual navigation in familiar and unfamiliar surroundings is an essential ingredient of adaptive daily life behavior. Recent brain imaging work helps to recognize that establishing connectivity between brain regions is of importance for successful navigation. Here, we ask whether the ability to navigate is impaired in adolescents who were born premature and suffer congenital bilateral periventricular brain damage that might affect the pathways interconnecting subcortical structures with cortex. Performance on a set of visual labyrinth tasks was significantly worse in patients with periventricular leukomalacia (PVL) as compared with premature-born controls without lesions and term-born adolescents. The ability for visual navigation relates inversely to the severity of motor disability, leg-dominated bilateral spastic cerebral palsy. This agrees with the view that navigation ability substantially improves with practice and might be compromised in individuals with restrictions in active spatial exploration. Visual navigation is negatively linked to the volumetric extent of lesions over the right parietal and frontal periventricular regions. Whereas impairments in the visual processing of point-light biological motion in patients with PVL are associated with bilateral parietal periventricular lesions, navigation ability is specifically linked to frontal lesions in the right hemisphere. We suggest that more anterior periventricular lesions impair the interrelations between the right hippocampus and cortical areas, leading to disintegration of the neural networks engaged in visual navigation. For the first time, we show that the severity of right frontal periventricular damage and leg-dominated motor disorders can serve as independent predictors of visual navigation disability.

  20. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
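
    The ITD estimate can be sketched as a cross-correlation between the two channels followed by a far-field conversion of the lag to azimuth; the sampling rate, microphone spacing and far-field geometry below are illustrative, and the adaptive calibration the robot learns through self-motion is omitted:

        import numpy as np

        def itd_azimuth(left, right, fs=48_000, spacing=0.15, c=343.0):
            # Lag (in samples) that maximises the cross-correlation of
            # the two channels gives the interaural time difference.
            corr = np.correlate(left, right, mode="full")
            lag = np.argmax(corr) - (len(right) - 1)
            tau = lag / fs
            # Far-field model: tau = spacing * sin(azimuth) / c.
            s = np.clip(tau * c / spacing, -1.0, 1.0)
            return np.degrees(np.arcsin(s))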

  1. An Integrated Assessment of Progress in Robotic Perception and Semantic Navigation

    Science.gov (United States)

    2015-09-01

    Craig Lennon, Barry Bodt, and Marshal Childers (ARL); Jean Oh, Arne Suppe, and Luis Navarro-Serment (National Robotics Engineering Center, Pittsburgh, PA); Robert Dean, Terrence Keegan, and Chip Diberardino

  2. Image processing and applications based on visualizing navigation service

    Science.gov (United States)

    Hwang, Chyi-Wen

    2015-07-01

    When facing the "overabundance" of semantic web information, this paper proposes a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables the user to retrieve and construct personal knowledge. Furthermore, based on segment-market theory in marketing, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Irrespective of whether the simple declarative knowledge or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognition effect than the "Non CM + SS + KOD navigation system"; however, for users with no web design experience, the navigation system does not have an obvious cognition effect. (2) Classification is essential in semantic web communication design: different groups of users have a diversity of preference needs and different cognitive styles in the CM + SS + KOD navigation system.

  3. An Indoor Navigation System for the Visually Impaired

    Directory of Open Access Journals (Sweden)

    Luis A. Guerrero

    2012-06-01

    Full Text Available Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have proven useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. The solution enables one to identify the position of a person and to calculate the velocity and direction of his movements. Using this information, the system determines the user’s trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  4. An indoor navigation system for the visually impaired.

    Science.gov (United States)

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have proven useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. The solution enables one to identify the position of a person and to calculate the velocity and direction of his movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  5. Hierarchical HMM based learning of navigation primitives for cooperative robotic endovascular catheterization.

    Science.gov (United States)

    Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong

    2014-01-01

    Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.
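
    At the lower level, recognising which primitive explains an observed motion segment amounts to scoring the segment against each primitive's HMM. A log-domain forward algorithm for discrete observations, with placeholder model parameters; the paper's hierarchical structure and continuous features are not reproduced here:

        import numpy as np

        def log_forward(obs, pi, A, B):
            # log P(obs | model) for an HMM with initial distribution pi,
            # transition matrix A and emission matrix B.
            logA, logB = np.log(A), np.log(B)
            alpha = np.log(pi) + logB[:, obs[0]]
            for o in obs[1:]:
                alpha = logB[:, o] + np.logaddexp.reduce(
                    alpha[:, None] + logA, axis=0)
            return np.logaddexp.reduce(alpha)

        def classify_primitive(obs, models):
            # models: name -> (pi, A, B); pick the most likely primitive.
            return max(models, key=lambda m: log_forward(obs, *models[m]))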

  6. An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2018-03-01

    Full Text Available There are many tasks in the field of underwater robotics and marine science that require clear and easily recognizable images, such as underwater target detection and identification, robot navigation and obstacle avoidance. However, water turbidity makes underwater image quality too low for reliable recognition. This paper proposes the use of the dark channel prior model for underwater environment recognition, in which underwater reflection models are used to obtain enhanced images. The proposed approach achieves very good performance and multi-scene robustness by combining the dark channel prior model with the underwater diffuse model. Experimental results are given to show the effectiveness of the dark channel prior model in underwater scenarios.
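
    The dark channel itself is simple to compute: a per-pixel minimum across colour channels followed by a local minimum filter, with the ambient back-scattered light estimated from the brightest dark-channel pixels. A sketch of these two steps with an illustrative patch size; the underwater diffuse model and the final restoration step are omitted:

        import numpy as np
        import cv2

        def dark_channel(image_bgr, patch=15):
            # Per-pixel minimum over the colour channels, then a minimum
            # filter (morphological erosion) over a patch x patch window.
            min_rgb = image_bgr.min(axis=2)
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
            return cv2.erode(min_rgb, kernel)

        def estimate_backlight(image_bgr, dark, top=0.001):
            # Average the image colour over the brightest fraction of
            # dark-channel pixels to estimate the ambient light.
            n = max(1, int(dark.size * top))
            idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
            return image_bgr[idx].mean(axis=0)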

  7. Preliminary study on magnetic tracking-based planar shape sensing and navigation for flexible surgical robots in transoral surgery: methods and phantom experiments.

    Science.gov (United States)

    Song, Shuang; Zhang, Changchun; Liu, Li; Meng, Max Q-H

    2018-02-01

    Flexible surgical robots can work in confined and complex environments, which makes them a good option for minimally invasive surgery. In order to utilize flexible manipulators in complicated and constrained surgical environments, it is of great significance to monitor the position and shape of the curvilinear manipulator in real time during the procedures. In this paper, we propose a magnetic tracking-based planar shape sensing and navigation system for flexible surgical robots in transoral surgery. The system can provide the real-time tip position and shape information of the robot during the operation. We use a wire-driven flexible robot with three degrees of freedom as the manipulator. A permanent magnet is mounted at the distal end of the robot, and its magnetic field is sensed with a magnetic sensor array, so that the position and orientation of the tip can be estimated with a tracking method. A shape sensing algorithm then estimates the real-time shape based on the tip pose. With the tip pose and shape displayed in the 3D reconstructed CT model, navigation is achieved. Using the proposed system, we carried out planar navigation experiments on a skull phantom, touching three different target positions under the guidance of the skull display interface. During the experiments, the real-time shape was monitored and the distance errors between the robot tip and the targets in the skull were recorded. The mean navigation error is [Formula: see text] mm, while the maximum error is 3.2 mm. The proposed method has the advantages that no sensors need to be mounted on the robot and there is no line-of-sight problem. Experimental results verified the feasibility of the proposed method.
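
    Magnet-based tracking of this kind typically inverts a point-dipole field model: the array measurements are compared with the field predicted for a candidate magnet pose, and the pose is refined to minimise the residual. A sketch of the standard dipole model only; the optimiser and the paper's specific formulation are omitted:

        import numpy as np

        def dipole_field(r, m):
            # Flux density at offset r (metres) from a dipole with moment
            # m (A*m^2): B = mu0/(4*pi) * (3(m.r_hat)r_hat - m) / |r|^3.
            mu0 = 4e-7 * np.pi
            rn = np.linalg.norm(r)
            r_hat = r / rn
            return mu0 / (4 * np.pi) * (
                3.0 * np.dot(m, r_hat) * r_hat - m) / rn**3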

  8. Visualization of Robotic Sensor Data with Augmented Reality

    OpenAIRE

    Thorstensen, Mathias Ciarlo

    2017-01-01

    To understand a robot's intent and behavior, a robot engineer must analyze data not only at the input and output but also at all intermediary steps. This might require looking at a specific subset of the system, or at a single data node in isolation. A range of different data formats can be used in such systems, and they require visualization in different mediums; some are text based and best visualized in a terminal, while other types must be presented graphically, in 2D or 3D. This often makes understandin...

  9. Visual Odometry for Autonomous Deep-Space Navigation Project

    Science.gov (United States)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
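
    The core two-view step of a feature-based visual odometry pipeline can be written in a few lines with OpenCV, assuming matched keypoints and calibrated intrinsics; this is a generic formulation, not Draper's algorithm:

        import cv2

        def relative_pose(pts1, pts2, K):
            # Essential matrix from matched pixel coordinates (Nx2 float
            # arrays), robust to outliers via RANSAC.
            E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                           method=cv2.RANSAC,
                                           prob=0.999, threshold=1.0)
            # Decompose into rotation and unit-norm translation; monocular
            # two-view geometry cannot observe metric scale.
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
            return R, t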

  10. Visual Measurement of Suture Strain for Robotic Surgery

    Directory of Open Access Journals (Sweden)

    John Martell

    2011-01-01

    Full Text Available Minimally invasive surgical procedures offer advantages of smaller incisions, decreased hospital length of stay, and rapid postoperative recovery to the patient. Surgical robots improve access and visualization intraoperatively and have expanded the indications for minimally invasive procedures. A limitation of the da Vinci surgical robot is a lack of sensory feedback to the operative surgeon. Experienced robotic surgeons use visual interpretation of tissue and suture deformation as a surrogate for tactile feedback. A difficulty encountered during robotic surgery is maintaining adequate suture tension while tying knots or following a running anastomotic suture. Displaying suture strain in real time has potential to decrease the learning curve and improve the performance and safety of robotic surgical procedures. Conventional strain measurement methods involve installation of complex sensors on the robotic instruments. This paper presents a noninvasive video processing-based method to determine strain in surgical sutures. The method accurately calculates strain in suture by processing video from the existing surgical camera, making implementation uncomplicated. The video analysis method was developed and validated using video of suture strain standards on a servohydraulic testing system. The video-based suture strain algorithm is shown capable of measuring suture strains of 0.2% with subpixel resolution and proven reliability under various conditions.
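
    The underlying measurement reduces to engineering strain computed from tracked marker positions along the suture. A minimal sketch, assuming the markers have already been localised in each frame; subpixel tracking and camera calibration are omitted:

        import numpy as np

        def suture_strain(markers_now, markers_rest):
            # Arc length of the polyline through the markers (Nx2 arrays
            # of image coordinates ordered along the suture).
            def arc_length(pts):
                return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
            rest = arc_length(markers_rest)
            # Engineering strain: relative elongation versus rest length.
            return (arc_length(markers_now) - rest) / rest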

  11. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the current era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme of viewing the robotic laboratory has been introduced in addition to remote control of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot remotely operated and visualized in 3D. This approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. The 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  12. Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new kind of uncertain chained model of a nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. Then, a novel time-varying feedback controller is proposed for exponentially stabilizing the position and orientation of the robot using visual feedback and a switching strategy when the camera parameters are not known. The exponential s...

  13. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Gesture and speech semantically based spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue based on the high classification accuracy and minimal training required to perform gesture commands.

  14. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  15. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  16. Middleware Interoperability for Robotics: A ROS-YARP Framework

    Directory of Open Access Journals (Sweden)

    Plinio Moreno

    2016-10-01

    Full Text Available Middlewares are fundamental tools for progress in research and applications in robotics. They enable the integration of multiple heterogeneous sensing and actuation devices, as well as providing general purpose modules for key robotics functions (kinematics, navigation, planning). However, no existing middleware yet provides a complete set of functionalities for all robotics applications, and many robots may need to rely on more than one framework. This paper focuses on the interoperability between two of the most prevalent middlewares in robotics: YARP and ROS. Interoperability between middlewares should ideally allow users to execute existing software without the necessity of: (i) changing the existing code, and (ii) writing hand-coded "bridges" for each use-case. We propose a framework enabling the communication between existing YARP modules and ROS nodes for robotics applications in an automated way. Our approach generates the "bridging gap" code from a configuration file, connecting YARP ports and ROS topics through code-generated YARP Bottles. The configuration file must describe: (i) the sender entities, (ii) the way to group and convert the information read from the sender, (iii) the structure of the output message and (iv) the receiving entity. Our choice of many-inputs-to-one-output covers the most common use-case in robotics applications, where examples include filtering, decision making and visualization. We support YARP/ROS and ROS/YARP sender/receiver configurations, which are demonstrated on a humanoid-on-wheels robot that uses YARP for upper-body motor control and visual perception, and ROS for mobile base control and navigation algorithms.

  17. Highly dexterous 2-module soft robot for intra-organ navigation in minimally invasive surgery.

    Science.gov (United States)

    Abidi, Haider; Gerboni, Giada; Brancadoro, Margherita; Fras, Jan; Diodato, Alessandro; Cianchetti, Matteo; Wurdemann, Helge; Althoefer, Kaspar; Menciassi, Arianna

    2018-02-01

    For some surgical interventions, like Total Mesorectal Excision (TME), traditional laparoscopes lack the flexibility to safely maneuver and reach difficult surgical targets. This paper answers this need through the design, fabrication and modelling of a highly dexterous 2-module soft robot for minimally invasive surgery (MIS). A soft robotic approach is proposed that uses flexible fluidic actuators (FFAs), allowing highly dexterous and inherently safe navigation. Dexterity is provided by an optimized design of the fluid chambers within the robot modules. Safe physical interaction is ensured by fabricating the entire structure from soft and compliant elastomers, resulting in a squeezable 2-module robot. An inner free lumen/chamber along the central axis serves as a guide for flexible endoscopic tools. A constant-curvature-based inverse kinematics model is also proposed, providing insight into the robot's capabilities. Experimental tests in a surgical scenario using a cadaver model are reported, demonstrating the robot's advantages over standard systems in a realistic MIS environment. Simulations and experiments show the efficacy of the proposed soft robot. Copyright © 2017 John Wiley & Sons, Ltd.
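
    The constant-curvature assumption gives closed-form kinematics for each section. For a single section bending in a plane, the inverse problem has a well-known closed form; the sketch below assumes the target tip position lies on the section's reachable arc:

        import numpy as np

        def planar_cc_ik(x, z, length):
            # A constant-curvature section of fixed arc length, tangent to
            # the z-axis at its base, traces a circular arc; the circle
            # through the origin and tip (x, z) has curvature
            # kappa = 2x / (x^2 + z^2).
            kappa = 2.0 * x / (x * x + z * z)
            theta = kappa * length      # total bending angle of the section
            return kappa, theta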

  18. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    Full Text Available The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.

  19. Robotic Software Integration Using MARIE

    Directory of Open Access Journals (Sweden)

    Carle Côté

    2006-03-01

    Full Text Available This paper presents MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems. By using a generic communication framework, MARIE aims to create a flexible distributed component system that allows robotics developers to share software programs and algorithms, and design prototypes rapidly based on their own integration needs. The use of MARIE is illustrated with the design of a socially interactive autonomous mobile robot platform capable of map building, localization, navigation, tasks scheduling, sound source localization, tracking and separation, speech recognition and generation, visual tracking, message reading and graphical interaction using a touch screen interface.

  20. Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.

    Science.gov (United States)

    de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie

    2017-09-01

    Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, but only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. In particular, a dedicated turn-by-turn guidance device was favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.

  1. A Study of Visual Descriptors for Outdoor Navigation Using Google Street View Images

    Directory of Open Access Journals (Sweden)

    L. Fernández

    2016-01-01

    Full Text Available A comparative analysis of several methods to describe outdoor panoramic images is presented. The main objective is to study the performance of these methods in the localization process of a mobile robot (vehicle) in an outdoor environment, when a visual map containing images acquired from different positions in the environment is available. To this end, we make use of the database provided by Google Street View, which contains spherical panoramic images captured in urban environments together with their GPS positions. The main benefit of using these images is that they permit testing any novel localization algorithm in countless outdoor environments anywhere in the world, under realistic capture conditions. The main contribution of this work is a comparative evaluation of different methods to describe images in order to solve the localization problem in an outdoor dense map using only visual information. We have tested our algorithms using several sets of panoramic images captured in different outdoor environments. The results can be useful for selecting an appropriate description method for visual navigation tasks in outdoor environments using the Google Street View database, taking into consideration both localization accuracy and the computational efficiency of the algorithm.

  2. Development of a self-navigating mobile interior robot application as a security guard/sentry

    International Nuclear Information System (INIS)

    Klarer, P.R.; Harrington, J.J.

    1986-07-01

    This paper describes a mobile robot system designed to function as part of an overall security system at a high security facility. The features of this robot system include specialized software and sensors for navigation without the need for external locator beacons or signposts, sensors for remote imaging and intruder detection, and the ability to communicate information either directly to the electronic portion of the security system or to a manned central control center. Other desirable features of the robot system include low weight, compact size, and low power consumption. The robot system can be operated by remote manual control, or it can operate autonomously, where direct human control can be limited to the global command level. The robot can act as a mobile remote sensing platform for alarm assessment or roving patrol, as a point sensor (sentry) in routine security applications, or as an exploratory device in situations potentially hazardous to humans. This robot system may also be used to "walk-test" intrusion detection sensors as part of a routine test and maintenance program for an interior intrusion detection system. The hardware, software, and operation of this robot system are briefly described herein.

  3. A spatial registration method for navigation system combining O-arm with spinal surgery robot

    Science.gov (United States)

    Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.

    2018-05-01

    Minimally invasive spinal surgery has become increasingly popular in recent years as it reduces the chance of complications after the operation. However, the procedure is complicated and the surgical view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasing. One of the major problems in a surgical navigation system is associating the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, in which the O-arm scans a calibration phantom containing metal calibration spheres. First, the metal artifacts are reduced in the CT slices, and the circles in the images are identified based on moment invariants, yielding the positions of the calibration spheres in image space. The registration matrix is then obtained with the ICP algorithm. Finally, the position error is calculated to verify the feasibility and accuracy of the registration method.
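
    The core of an ICP-style registration, once sphere correspondences are available, is a rigid alignment between the two point sets. The sketch below shows the standard SVD-based (Kabsch) solution for that step; treating correspondences as known is a simplifying assumption relative to the full pipeline.

    ```python
    # Rigid-alignment step at the core of ICP-style registration: given matched
    # sphere centres in image space (P) and patient space (Q), recover the
    # rotation R and translation t minimizing ||R @ P + t - Q||.
    import numpy as np

    def rigid_transform(P, Q):
        """P, Q: (N, 3) arrays of corresponding 3-D points. Returns R (3x3), t (3,)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t
    ```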

  4. Topological visual mapping in robotics.

    Science.gov (United States)

    Romero, Anna; Cazorla, Miguel

    2012-08-01

    A key problem in robotics is the construction of a map of the environment. This map can be used in different tasks, such as localization, recognition, and obstacle avoidance. The simultaneous localization and mapping (SLAM) problem has also attracted a lot of interest in the robotics community. This paper presents a new method for visual mapping, using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features into a graph, so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it also defines a matching criterion for loop closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and remove the outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. In the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found when building the graph.

  5. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload

    National Research Council Canada - National Science Library

    Davis, Bradley M

    2007-01-01

    .... Results from both experiments indicate that augmented visual displays reduced time to complete navigation, maintained situation awareness, and drastically reduced mental workload in comparison...

  6. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation with a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free planar surface to track the specified objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. In addition, an improved control method based on canonical piecewise-linear functions (PLF) is proposed. In order to restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper bandwidth control value.
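
    As a rough illustration of steering from a desired direction field, the sketch below evaluates a goal-directed field and applies a saturated, piecewise-linear heading law. The gains, saturation limit and field definition are assumptions for illustration, not the paper's exact SPM/PLF formulation.

    ```python
    # Toy direction-field follower: turn the heading toward the field direction
    # at the current position with a saturated (piecewise-linear) control law.
    import numpy as np

    def desired_direction(pos, goal):
        """Shortest-path-style field: point straight at the goal."""
        v = np.asarray(goal, float) - np.asarray(pos, float)
        return np.arctan2(v[1], v[0])

    def heading_command(theta, theta_des, k=1.5, u_max=1.0):
        """Saturated proportional turn-rate command on the wrapped heading error."""
        err = np.arctan2(np.sin(theta_des - theta), np.cos(theta_des - theta))
        return np.clip(k * err, -u_max, u_max)
    ```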

  7. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  8. Environmental mobile robot based on artificial intelligence and visual perception for weed elimination

    Directory of Open Access Journals (Sweden)

    Nabeel Kadim Abid AL-SAHIB

    2012-12-01

    Full Text Available This research presents a modified design for the Pioneer P3-DX mobile robot, adding a mechanical gripper for eliminating weeds and a digital camera for capturing images of the field. A wireless kit that controls the gripper's motors is also envisaged. This work consists of two parts. The theoretical part contains a program that reads the image and discovers the weed coordinates, which are sent to the path-planning software to locate the weeds, green plants and sick plants. These positions are then sent to the mobile robot navigation software, and the wireless signal is sent to the gripper. In the experimental part, a digital camera takes an image of the agricultural field and sends it to the computer for processing. The weed coordinates are then sent to the mobile robot by the navigation software, and the wireless signal controlling the gripper motor is sent to the wireless kit by the computer interface program. The first trial in the agricultural field shows that the mobile robot can discriminate between green plants, weeds and sick plants, and can take the right decision with respect to treatment or elimination. The experimental work shows that the environmental mobile robot can successfully detect weeds, sick plants and healthy plants, and that it travels from its base to the target point, represented by the weeds and sick plants, along the optimum path. The experiments also show that the robot can eliminate the weeds and treat the sick plants correctly.
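
    A plausible, simplified stand-in for the image-processing step described above is colour-based segmentation followed by centroid extraction; the HSV thresholds and area filter below are assumptions, not the paper's actual parameters.

    ```python
    # Hedged sketch: colour-based segmentation to obtain plant/weed coordinates
    # from a field image (simplified stand-in for the paper's processing step).
    import cv2
    import numpy as np

    def plant_centroids(img_bgr, lo=(35, 60, 60), hi=(85, 255, 255), min_area=200):
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))    # green-ish pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        pts = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return pts  # image coordinates to hand to the path planner
    ```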

  9. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  10. A Coordinated Control Architecture for Disaster Response Robots

    Science.gov (United States)

    2016-01-01

    ...to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. Visual odometry ... We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed.

  11. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when the visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
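
    The control and wireframe renderings described above are easy to sketch: resize the camera image onto the electrode grid by area averaging, or run an edge detector first. The code below assumes simple block averaging and Canny edges; the thresholds are illustrative, not the study's parameters.

    ```python
    # Sketch of two SPV renderings for a 15 x 18 electrode array:
    # (a) average-brightness resize (the control rendering described above),
    # (b) edge-only "wireframe" variant via Canny before downsampling.
    import cv2

    def control_rendering(img_bgr, rows=15, cols=18):
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, (cols, rows), interpolation=cv2.INTER_AREA)

    def wireframe_rendering(img_bgr, rows=15, cols=18, t1=50, t2=150):
        edges = cv2.Canny(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), t1, t2)
        return cv2.resize(edges, (cols, rows), interpolation=cv2.INTER_AREA)
    ```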

  12. Deliverable D.8.4. Social data visualization and navigation services -3rd Year Update-

    NARCIS (Netherlands)

    Bitter-Rijpkema, Marlies; Brouns, Francis; Drachsler, Hendrik; Fazeli, Soude; Sanchez-Alonso, Salvador; Rajabi, Enayat; Kolovou, Lamprini

    2015-01-01

    Within the Open Discovery Space our study (T.8.4) focused on “Enhanced Social Data Visualization & Navigation Services”. This deliverable provides the prototype report regarding the deployment of adapted visualization and navigation services to be integrated in the ODS Social Data Management Layer.

  13. Street navigation using visual information on mobile phones

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen; Høilund, Carsten

    2010-01-01

    Applications for street navigation have recently been introduced on mobile phones. Most existing systems use the integrated GPS as input for indicating the location. However, these systems often fail or make abrupt shifts in urban environments due to occlusion of satellites. Furthermore, they only give the position of a person and not the object of his attention, which is just as important for localization-based services. In this paper we introduce a system using mobile phones' built-in cameras for navigation and localization using visual information in accordance with the way we...

  14. Mapping of unknown industrial plant using ROS-based navigation mobile robot

    Science.gov (United States)

    Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.

    2017-10-01

    This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, resulting in a 2D/3D map for further critical evaluation. The experiment focuses on two parts: how the human and robot perform remote interactions using a robust method, and how the robot perceives the surrounding environment as a 2D/3D perspective map. ROS (Robot Operating System) was utilized in the development and implementation, providing a robust data communication method in the form of messages and topics. RGB-D SLAM performs the visual mapping function, constructing the 2D/3D map from a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends human perception for remote surveillance over a large industrial plant area. It was concluded that the proposed work is a robust solution for mapping large, unknown building interiors.
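
    The message/topic pattern the record refers to can be illustrated with a minimal rospy node; the topic names (/map, /cmd_vel) and the fixed forward command are placeholder assumptions, not details from the paper.

    ```python
    # Minimal rospy sketch of the messages-and-topics pattern: a teleoperation
    # node publishing velocity commands while listening to map updates.
    import rospy
    from geometry_msgs.msg import Twist
    from nav_msgs.msg import OccupancyGrid

    def on_map(msg):
        rospy.loginfo("map update: %d x %d cells", msg.info.width, msg.info.height)

    rospy.init_node("teleop_sketch")
    rospy.Subscriber("/map", OccupancyGrid, on_map)
    cmd = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        t = Twist()
        t.linear.x = 0.2     # creep forward; a real teleop maps operator input
        cmd.publish(t)
        rate.sleep()
    ```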

  15. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    Science.gov (United States)

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robot to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  16. A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2009-09-01

    Full Text Available This paper presents a novel two-step autonomous navigation method for a search and rescue robot. A vision-based algorithm is proposed for terrain identification to predict the safest path, using a support vector regression machine (SVRM) trained off-line with texture and color features. A correction algorithm for the prediction, based on vibration information, is applied while the robot travels, using the judgment function given in the paper. Regions with faulty predictions are corrected with the real traversability value and used to update the SVRM. The experiment demonstrates that this method helps the robot find the optimal path and protects it from traps arising from the discrepancy between the prediction and the real environment.
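
    A hedged sketch of the off-line prediction step: a support vector regressor mapping per-patch texture/colour features to a traversability score, in the spirit of the record's SVRM. The feature layout, kernel and hyperparameters below are assumptions.

    ```python
    # Terrain-traversability prediction with support vector regression
    # (illustrative features and hyperparameters; not the paper's exact setup).
    import numpy as np
    from sklearn.svm import SVR

    # X: (N, d) feature vectors (e.g., colour histograms + texture statistics)
    # y: (N,) ground-truth traversability scores in [0, 1]
    X = np.random.rand(200, 8)   # placeholder training data
    y = np.random.rand(200)

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)

    def safest_patch(patches):
        """Pick the candidate terrain patch with the highest predicted score."""
        scores = model.predict(np.asarray(patches))
        return int(np.argmax(scores)), scores
    ```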

  17. Dissociable cerebellar activity during spatial navigation and visual memory in bilateral vestibular failure.

    Science.gov (United States)

    Jandl, N M; Sprenger, A; Wojak, J F; Göttlich, M; Münte, T F; Krämer, U M; Helmchen, C

    2015-10-01

    Spatial orientation and navigation depend on information from the vestibular system. Previous work suggested impaired spatial navigation in patients with bilateral vestibular failure (BVF). The aim of this study was to investigate event-related brain activity by functional magnetic resonance imaging (fMRI) during spatial navigation and visual memory tasks in BVF patients. Twenty-three BVF patients and healthy age- and gender-matched control subjects performed learning sessions of spatial navigation by watching short films taking them through various streets from a driver's perspective, along a route to the Cathedral of Cologne, using virtual reality videos (adopted and modified from Google Earth). In the scanner, participants were asked to respond to questions testing visual memory or spatial navigation while they viewed short video clips. Video frames of routes, depicted from a similar but not identical perspective, were displayed which they had previously seen or which were completely novel to them. Compared with controls, posterior cerebellar activity in BVF patients was higher during spatial navigation than during visual memory tasks, in the absence of performance differences. This cerebellar activity correlated with disease duration. Cerebellar activity during spatial navigation in BVF patients may reflect increased non-vestibular efforts to counteract the development of spatial navigation deficits in BVF. Conceivably, the cerebellar activity indicates a change in the navigational strategy of BVF patients, i.e. from a more allocentric, landmark- or place-based strategy (hippocampus) to a more sequence-based strategy. This interpretation would be in accord with recent evidence for a cerebellar role in sequence-based navigation. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

    Full Text Available This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller provides a ‘connect-and-play’ service for users to control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robots. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into the corresponding control commands according to their characteristic curves. A vibration motor provides vibrotactile feedback to the user when an improper operation is performed. The performance of the proposed hand motion-based interface was compared with a traditional keyboard-and-mouse interface in robot navigation experiments. The experimental results show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller, its precision is 15.4% higher, and its execution time is 24.7% shorter. This means that the proposed hand motion-based interface is more efficient and flexible.
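
    The classification of acceleration data by characteristic curves can be caricatured as peak-axis thresholding over a short window, which is one simple way to separate six directional gestures. The gesture mapping and threshold below are illustrative assumptions, not the paper's classifier.

    ```python
    # Toy accelerometer gesture classifier: decide which axis dominates a short
    # window of samples and map (axis, sign) to one of six gestures.
    import numpy as np

    GESTURES = {(0, +1): "forward", (0, -1): "backward",
                (1, +1): "right",   (1, -1): "left",
                (2, +1): "lift",    (2, -1): "lower"}

    def classify(window, thresh=0.5):
        """window: (T, 3) acceleration samples with gravity removed."""
        mean = window.mean(axis=0)
        axis = int(np.argmax(np.abs(mean)))
        if abs(mean[axis]) < thresh:
            return None                     # no deliberate motion detected
        return GESTURES[(axis, int(np.sign(mean[axis])))]
    ```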

  19. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range reading data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is pruning the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensor, multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.

  20. GPS/MEMS IMU/Microprocessor Board for Navigation

    Science.gov (United States)

    Gender, Thomas K.; Chow, James; Ott, William E.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
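
    The GPS/IMU blending idea reduces, in one dimension, to propagating position with IMU acceleration and correcting with GPS fixes in a Kalman filter. The scalar sketch below is a drastic simplification of the board's actual navigation filter; the noise values are assumptions.

    ```python
    # Toy 1-D GPS/IMU blend: predict with integrated IMU acceleration, then
    # correct with a GPS position fix using a scalar Kalman update.
    def kalman_blend(x, v, P, accel, dt, gps=None, q=0.1, r=4.0):
        # predict: integrate IMU acceleration over one step
        x, v = x + v * dt + 0.5 * accel * dt**2, v + accel * dt
        P += q                                   # grow state uncertainty
        if gps is not None:                      # update with a GPS position fix
            K = P / (P + r)                      # Kalman gain
            x += K * (gps - x)
            P *= (1.0 - K)
        return x, v, P
    ```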

  1. Blind's Eye: Employing Google Directions API for Outdoor Navigation of Visually Impaired Pedestrians

    Directory of Open Access Journals (Sweden)

    SABA FEROZMEMON

    2017-07-01

    Full Text Available Vision plays a paramount role in our everyday life and assists humans in almost every walk of life. People lacking the sense of vision require assistance to move freely. The inability to navigate and orient themselves unassisted in outdoor environments is one of the most important constraints for people with visual impairment. Motivated by this problem, we developed a simplified and user-friendly navigation system that allows visually impaired pedestrians to reach their desired outdoor location. We designed a Braille keyboard to allow the blind user to input their destination. The proposed system makes use of the Google Directions API (Application Program Interface) to generate the right path to a destination. The visually impaired pedestrians wear a vibration belt to keep them on track. The evaluation exposes shortcomings of the Google Directions API when used for navigating visually impaired pedestrians in an outdoor environment.
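
    Querying the Google Directions API for a walking route looks roughly like the sketch below. The JSON endpoint and response fields are the public API's; the key placeholder, error handling and use of html_instructions are illustrative choices, not the paper's implementation.

    ```python
    # Sketch: fetch a walking route from the Google Directions API and return
    # the turn-by-turn instruction strings.
    import requests

    def walking_route(origin, destination, key="YOUR_KEY"):
        url = "https://maps.googleapis.com/maps/api/directions/json"
        params = {"origin": origin, "destination": destination,
                  "mode": "walking", "key": key}
        data = requests.get(url, params=params, timeout=10).json()
        if data.get("status") != "OK":
            raise RuntimeError(data.get("status", "request failed"))
        steps = data["routes"][0]["legs"][0]["steps"]
        return [s["html_instructions"] for s in steps]   # turn-by-turn texts
    ```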

  2. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    Science.gov (United States)

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  3. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    Science.gov (United States)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the

  4. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors in terms of the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  5. Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty

    Science.gov (United States)

    Armah, Stephen Kofi

    Autonomous control of mobile robots has attracted considerable attention of researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is the development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. In spite of the enormous reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for the X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. Secondly, the white-box and black-box approaches are used to obtain a linearized
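
    A minimal sketch of the 'go-to-goal' behaviour for a differential-drive robot such as the X80SV is shown below; the gains and saturation limits are assumptions, and the thesis's own implementation is in MATLAB rather than Python.

    ```python
    # 'Go-to-goal' controller for a unicycle-model robot: drive toward the goal
    # while turning to cancel the heading error, with saturated commands.
    import numpy as np

    def go_to_goal(pose, goal, k_v=0.5, k_w=2.0, v_max=0.4, w_max=1.5):
        """pose: (x, y, theta); goal: (gx, gy). Returns (v, w) commands."""
        dx, dy = goal[0] - pose[0], goal[1] - pose[1]
        rho = np.hypot(dx, dy)                            # distance to goal
        err = np.arctan2(dy, dx) - pose[2]                # heading error
        err = np.arctan2(np.sin(err), np.cos(err))        # wrap to [-pi, pi]
        v = np.clip(k_v * rho * np.cos(err), 0.0, v_max)  # slow down off-axis
        w = np.clip(k_w * err, -w_max, w_max)
        return v, w
    ```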

  6. Attention-based navigation in mobile robots using a reconfigurable sensor

    NARCIS (Netherlands)

    Maris, M.

    2001-01-01

    In this paper, a method for visual attentional selection in mobile robots is proposed, based on amplification of the selected stimulus. Attention processing is performed on the vision sensor, which is integrated on a silicon chip and consists of a contrast sensitive retina with the ability to change

  7. From Self-Assessment to Frustration, A Small Step Towards Autonomy in Robotic Navigation.

    Directory of Open Access Journals (Sweden)

    Adrien eJauffret

    2013-10-01

    Full Text Available Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on the behavior's quality, from a given fitness system, in order to take correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interactions to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty detection algorithm, that may be able to self-evaluate any sensory-motor strategy. This architecture learns contingencies between sensations and actions, giving the expected sensation from the previous perception. The prediction error, coming from surprising events, provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on the prediction progress allows the system to regulate its behavior to solve complex navigation tasks and also succeeds in asking for help if it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We conducted several experiments that account for such properties for two different strategies (road following and place-cell-based navigation) in different situations.
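
    The prediction-error mechanism can be sketched as an online linear model predicting the next sensation from the current sensation and action, flagging novelty when the error exceeds a threshold. The delta-rule update and threshold below are illustrative assumptions, not the paper's neural architecture.

    ```python
    # Toy online novelty detector: predict sensation(t+1) from (sensation, action)
    # with a linear model; a large prediction error signals a surprising event.
    import numpy as np

    class NoveltyDetector:
        def __init__(self, dim_s, dim_a, lr=0.05, thresh=0.5):
            self.W = np.zeros((dim_s, dim_s + dim_a))
            self.lr, self.thresh = lr, thresh

        def step(self, s, a, s_next):
            x = np.concatenate([s, a])
            pred = self.W @ x
            err = s_next - pred
            self.W += self.lr * np.outer(err, x)      # online delta-rule update
            return np.linalg.norm(err) > self.thresh  # True => novelty detected
    ```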

  8. Indoor navigation by people with visual impairment using a digital sign system.

    Directory of Open Access Journals (Sweden)

    Gordon E Legge

    Full Text Available There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects (blind, low vision, blindfolded sighted, and normally sighted controls) were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and in following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.

  10. Nonparametric Online Learning Control for Soft Continuum Robot: An Enabling Technique for Effective Endoscopic Navigation

    Science.gov (United States)

    Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong

    2017-01-01

    Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of its inherent compliance, a soft robot can ensure safe interaction with external environments, provided that precise and effective manipulation can be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions about the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric, online, and local training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. A detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. An advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567

  11. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for examining, comparing, and validating different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map: an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
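
    A toy version of an entropy-based LOD quality measure: treat each block's contribution-weighted distortion as probability mass and compute Shannon entropy. The exact formulation in the paper may differ; this shows only the general shape of the idea.

    ```python
    # Entropy-style LOD quality sketch over multiresolution data blocks.
    import numpy as np

    def lod_quality(contribution, distortion):
        """contribution, distortion: per-block arrays of non-negative values."""
        w = np.asarray(contribution, float) * np.asarray(distortion, float)
        p = w / w.sum()                            # normalize to a distribution
        return -np.sum(p * np.log2(p + 1e-12))     # higher = mass spread evenly
    ```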

  12. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, the embodiment of insect-like visual systems is limited by the hardware available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and the camera and insect vision models are then evaluated. We analyse the potential of the system for use in the embodiment of higher-level visual processes (i.e. motion detection) and also for the development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
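
    Computing dense optic flow from consecutive frames, the step evaluated against the simulated bee world, can be done with OpenCV's Farneback estimator as sketched below; the parameter values are common defaults, not the paper's.

    ```python
    # Dense optic flow between two frames using OpenCV's Farneback estimator.
    import cv2

    def dense_flow(prev_bgr, next_bgr):
        p = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
        n = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(p, n, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        return flow, mag.mean()    # mean magnitude as a coarse motion cue
    ```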

  13. 3D Mesh Compression and Transmission for Mobile Robotic Applications

    Directory of Open Access Journals (Sweden)

    Bailin Yang

    2016-01-01

    Full Text Available Mobile robots are useful for environment exploration and rescue operations. In such applications, it is crucial to accurately analyse and represent an environment, providing appropriate inputs for motion planning in order to support robot navigation and operations. 2D mapping methods are simple but cannot handle multilevel or multistory environments. To address this problem, 3D mapping methods generate structural 3D representations of the robot's operating environment and its objects by 3D mesh reconstruction. However, they face the challenge of efficiently transmitting those 3D representations to system modules for 3D mapping, motion planning, and robot operation visualization. This paper proposes a quality-driven mesh compression and transmission method to address this. Our method is efficient, as it compresses a mesh by quantizing its transformed vertices without needing to spend time constructing an a priori structure over the mesh. A visual distortion function is developed to govern the level of quantization, allowing mesh transmission to be controlled under different network conditions or time constraints. Our experiments demonstrate how the visual quality of a mesh can be manipulated by the visual distortion function.
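
    Uniform vertex quantization, the core of the compression step, maps vertices into an n-bit integer grid over the mesh bounding box; a distortion measure like the one below could then drive the bit-depth choice. The specific distortion metric here is an assumption, not the paper's visual distortion function.

    ```python
    # Uniform vertex quantization for mesh compression: n-bit integer grid over
    # the bounding box, plus a simple mean-error distortion measure.
    import numpy as np

    def quantize(verts, bits):
        lo, hi = verts.min(axis=0), verts.max(axis=0)
        scale = (2**bits - 1) / np.maximum(hi - lo, 1e-12)
        q = np.round((verts - lo) * scale).astype(np.uint32)
        return q, lo, scale

    def dequantize(q, lo, scale):
        return q / scale + lo

    def distortion(verts, bits):
        """Mean vertex displacement caused by quantizing at the given bit depth."""
        q, lo, s = quantize(verts, bits)
        return float(np.linalg.norm(verts - dequantize(q, lo, s), axis=1).mean())
    ```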

  14. Visual SLAM and Moving-object Detection for a Small-size Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yin-Tien Wang

    2010-09-01

    Full Text Available In this paper, a novel moving object detection (MOD) algorithm is developed and integrated with robot visual Simultaneous Localization and Mapping (vSLAM). The moving object is assumed to be a rigid body, and its coordinate system in space is represented by a position vector and a rotation matrix. The MOD algorithm is composed of the detection of image features, the initialization of image features, and the calculation of object coordinates. Experiments are implemented on a small-size humanoid robot, and the results show that the proposed algorithm performs efficiently for robot visual SLAM and moving object detection.

  15. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  16. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2002-01-01

    The objective of this project is to enable current and future EM robots with an increased ability to perceive and interact with unstructured and unknown environments through the use of camera-based visual servo controlled robots. The scientific goals of this research are to develop a new visual servo control methodology that: (1) adapts to the unknown camera calibration parameters (e.g., focal length, scaling factors, camera position and orientation) and the physical parameters of the robotic system (e.g., mass, inertia, friction), (2) compensates for unknown depth information (extracting 3D information from the 2D image), and (3) enables multiple uncalibrated cameras to be used as a means to provide a larger field of view. Nonlinear Lyapunov-based techniques are being used to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current visual servo controlled robotic systems. The intended outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology, such as feature extraction and recognition, to provide current EM robotic systems with increased accuracy, autonomy, and robustness, and a larger field of view (and hence a larger workspace). These capabilities will enable EM robots to significantly accelerate D&D operations by providing improved robot autonomy and increased worker productivity, while also reducing the associated costs, removing the human operator from hazardous environments, and reducing the burden on and skill required of the human operators.

  17. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    Science.gov (United States)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. It is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner are installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Experimental results demonstrate that the robot can be applied to execute navigation and inspection tasks.

  18. Sensor guided control and navigation with intelligent machines. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Bijoy K.

    2001-03-26

    This item constitutes the final report on "Visionics: An integrated approach to analysis and design of intelligent machines." The report discusses a dynamical systems approach to problems in robust control of possibly time-varying linear systems, problems in vision and visually guided control, and, finally, applications of these control techniques to intelligent navigation with a mobile platform. Robust design of a controller for a time-varying system essentially deals with the problem of synthesizing a controller that can adapt to sudden changes in the parameters of the plant and can maintain stability. The approach presented is to design a compensator that simultaneously stabilizes each and every possible mode of the plant as the parameters undergo sudden and unexpected changes. Such changes can in fact be detected by a visual sensor, and hence visually guided control problems are studied as a natural consequence. The problem here is to detect the parameters of the plant and maintain stability in the closed loop using a CCD camera as a sensor. The main result discussed in the report is the role of perspective systems theory, which was developed in order to analyze such detection and control problems. The robust control algorithms and the visually guided control algorithms are applied in the context of PUMA 560 robot arm control, where the goal is to visually locate a moving part on a mobile turntable. Such problems are of paramount importance in manufacturing with a certain lack of structure. Sensor-guided control problems are extended to problems in robot navigation using a NOMADIC mobile platform with a CCD camera and a laser range finder as sensors. The localization and map-building problems are studied with the objective of navigation in an unstructured terrain.

  19. Towards a Sign-Based Indoor Navigation System for People with Visual Impairments.

    Science.gov (United States)

    Rituerto, Alejandro; Fusco, Giovanni; Coughlan, James M

    2016-10-01

    Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report the development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision and floor plan information to estimate the user's location with no additional physical infrastructure, requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants, demonstrating the feasibility of the approach and highlighting the areas needing improvement.

  20. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  1. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    Science.gov (United States)

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Adding memory processing behaviors to the fuzzy behaviorist-based navigation of mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Pin, F.G.; Bender, S.R.

    1996-05-01

    Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.

  3. Mobile Robot Positioning by using Low-Cost Visual Tracking System

    Directory of Open Access Journals (Sweden)

    Ruangpayoongsak Niramon

    2017-01-01

    Full Text Available This paper presents an application of a visual tracking system to mobile robot positioning. The proposed method is verified on a purpose-built low-cost tracking system consisting of a 2-DOF pan-tilt unit, a web camera and a distance sensor. The motion of the pan-tilt joints is realized and controlled using an LQR controller running on a microcontroller. Without the need for camera calibration, the robot trajectory is tracked by a Kalman filter integrating distance information and joint positions. The experimental results demonstrate the validity of the proposed positioning technique, and the obtained mobile robot trajectory is benchmarked against laser rangefinder positioning. The implemented system can successfully track a mobile robot driving at 14 cm/s.

  4. Iconic memory-based omnidirectional route panorama navigation.

    Science.gov (United States)

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models, and the exact robot position and orientation are estimated from the converged shape of the active contour models.

  5. Distributed consensus with visual perception in multi-robot systems

    CERN Document Server

    Montijano, Eduardo

    2015-01-01

    This monograph introduces novel responses to the different problems that arise when multiple robots need to execute a task in cooperation, each robot in the team having a monocular camera as its primary input sensor. Its central proposition is that a consistent perception of the world is crucial for the good development of any multi-robot application. The text focuses on the high-level problem of cooperative perception by a multi-robot system: the idea that, depending on what each robot sees and its current situation, it will need to communicate these things to its fellows whenever possible to share what it has found and keep updated by them in its turn. However, in any realistic scenario, distributed solutions to this problem are not trivial and need to be addressed from as many angles as possible. Distributed Consensus with Visual Perception in Multi-Robot Systems covers a variety of related topics such as: distributed consensus algorithms; data association and robustne...
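
    The record only names the topics covered; as a point of reference, the classical distributed average-consensus iteration that such systems build on can be sketched as follows, with a hypothetical four-robot line topology and illustrative values.

```python
import numpy as np

# Hypothetical undirected communication graph over 4 robots (adjacency lists)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.array([1.0, 4.0, 2.0, 7.0])  # each robot's local estimate

eps = 0.3  # step size; must satisfy eps < 1/max_degree for convergence
for _ in range(100):
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        # each robot moves toward the average of its neighbours' values
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
    x = x_new

print(x)  # all entries converge to the initial average, 3.5
```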

  6. An optimized field coverage planning approach for navigation of agricultural robots in fields involving obstacle areas

    DEFF Research Database (Denmark)

    Hameed, Ibahim; Bochtis, D.; Sørensen, C.A.

    2013-01-01

    in-field obstacle areas, the headland paths generation for the field and each obstacle area, the implementation of a genetic algorithm to optimize the sequence that the field robot vehicle will follow to visit the blocks, and an algorithmic generation of the task sequences derived from the farmer practices......Technological advances combined with the demand for cost efficiency and environmental considerations lead farmers to review their practices towards the adoption of new managerial approaches, including enhanced automation. The application of field robots is one of the most promising advances among....... This approach has proven that it is possible to capture the practices of farmers and embed these practices in an algorithmic description providing a complete field area coverage plan in a form prepared for execution by the navigation system of a field robot.

  7. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    Science.gov (United States)

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing to make them effective remote telepresence systems for home care assistance are the capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach to adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.

  8. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2003-01-01

    The objective of this project is to enable current and future EM robots with an increased ability to perceive and interact with unstructured and unknown environments through the use of camera-based visual servo controllers. The scientific goals of this research are to develop a new visual servo control methodology that: (1) adapts to the unknown camera calibration parameters (e.g., focal length, scaling factors, camera position, and orientation) and the physical parameters of the robotic system (e.g., mass, inertia, friction), (2) compensates for unknown depth information (extracting 3D information from the 2D image), and (3) enables multiple uncalibrated cameras to be used as a means to provide a larger field-of-view. Nonlinear Lyapunov-based techniques in conjunction with results from projective geometry are being used to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current visual servo controlled robotic systems. The potential relevance of this control methodology will be a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature extraction and recognition, to enable current EM robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). These capabilities will enable EM robots to significantly accelerate D and D operations by providing for improved robot autonomy and increased worker productivity, while also reducing the associated costs, removing the human operator from the hazardous environments, and reducing the burden and skill requirements of the human operators.
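
    The project's adaptive controller is not reproduced in this record, but the classical image-based visual servo law it builds on, v = -lambda * pinv(L) * e with the standard point-feature interaction matrix, can be sketched as follows. The coordinates, depths and gain are illustrative, and the assumed-known depths Z are exactly the quantity such projects seek to compensate for.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction (image Jacobian) matrix for a normalized
    image point (x, y) at depth Z (Chaumette/Hutchinson formulation)."""
    return np.array([
        [-1/Z, 0, x/Z, x*y, -(1 + x**2), y],
        [0, -1/Z, y/Z, 1 + y**2, -x*y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Camera velocity screw v = -lambda * pinv(L) * e for point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical current/desired normalized image coordinates and depth guesses
pts = [(0.10, 0.05), (-0.08, 0.04), (0.02, -0.09)]
des = [(0.00, 0.00), (-0.10, 0.00), (0.00, -0.10)]
Zs  = [1.0, 1.0, 1.0]  # unknown in practice; a key difficulty the record notes
print(ibvs_velocity(pts, des, Zs))  # [vx, vy, vz, wx, wy, wz]
```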

  9. Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology

    OpenAIRE

    Barnaud , Marie-Lou; Morgado , Nicolas; Palluel-Germain , Richard; Diard , Julien; Spalanzani , Anne

    2014-01-01

    In order to navigate in a social environment, a robot must be aware of social spaces, which include proximity and interaction-based constraints. Previous models of interaction and personal spaces have been inspired by studies in social psychology but not systematically grounded and validated with respect to experimental data. We propose to implement personal and interaction space models in order to replicate a classical psychology experiment. Our robotic simulations ca...

  10. Design and implementation of an interface supporting information navigation tasks using hyperbolic visualization technique

    International Nuclear Information System (INIS)

    Lee, J. K.; Choi, I. K.; Jun, S. H.; Park, K. O.; Seo, Y. S.; Seo, S. M.; Koo, I. S.; Jang, M. H.

    2001-01-01

    Visualization techniques can be used to support an operator's information navigation tasks on systems comprising an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an easily understood environment of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to the primary tasks and ultimately improve cognitive task performance. In this thesis, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to be applied as a means of optimizing an operator's information navigation tasks.

  11. Optical angular constancy is maintained as a navigational control strategy when pursuing robots moving along complex pathways.

    Science.gov (United States)

    Wang, Wei; McBeath, Michael K; Sugar, Thomas G

    2015-03-24

    The optical navigational control strategy used to intercept moving targets was explored using a real-world object that travels along complex, evasive pathways. Fielders ran across a gymnasium attempting to catch a moving robot that varied in speed and direction, while ongoing position was measured using an infrared motion-capture system. Fielder running paths were compared with the predictions of three lateral control models, each based on maintaining a particular optical angle relative to the robotic target: (a) constant alignment angle (CAA), (b) constant eccentricity angle (CEA), and (c) linear optical trajectory (LOT). Findings reveal that running pathways were most consistent with maintenance of a LOT and least consistent with CEA. This supports the idea that fielders use the same optical control strategy of maintaining angular constancy using a LOT when navigating toward targets moving along complex pathways as when intercepting simple ballistic trajectories. In those cases in which a target dramatically deviates from its optical path, fielders appear to simply reset LOT parameters using a new constant angle value. Maintenance of such optical angular constancy has now been shown to work well with ballistic, complex, and evasive moving targets, confirming the LOT strategy as a robust, general-purpose optical control mechanism for navigating to intercept catchable targets, both airborne and ground based. © 2015 ARVO.
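
    As an illustration of how such models can be tested, the sketch below computes the target's lateral and vertical optical angles along hypothetical fielder/target tracks and checks how linearly they co-vary, which is the signature of a maintained LOT. The geometry is a simplified stand-in for the paper's motion-capture analysis, and all coordinates are invented.

```python
import numpy as np

def optical_angles(fielder_xy, target_xy, target_height):
    """Lateral bearing and vertical elevation of the target as seen by the
    fielder (a simple planar-geometry approximation)."""
    d = np.asarray(target_xy) - np.asarray(fielder_xy)
    horizontal_dist = np.hypot(d[0], d[1])
    bearing = np.arctan2(d[1], d[0])               # lateral optical angle
    elevation = np.arctan2(target_height, horizontal_dist)
    return bearing, elevation

# Hypothetical synchronized fielder/robot tracks (e.g., from motion capture)
fielder = [(0, 0), (0.4, 0.1), (0.9, 0.3), (1.5, 0.6)]
target  = [(5, 1), (4.8, 1.2), (4.5, 1.5), (4.1, 1.9)]

angles = [optical_angles(f, t, target_height=0.3)
          for f, t in zip(fielder, target)]
bearings = np.array([a[0] for a in angles])
elevations = np.array([a[1] for a in angles])

# A LOT strategy predicts elevation varies linearly with bearing; check the fit.
slope, intercept = np.polyfit(bearings, elevations, 1)
residuals = elevations - (slope * bearings + intercept)
print("linearity residual RMS:", np.sqrt(np.mean(residuals**2)))
```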

  12. Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment

    Directory of Open Access Journals (Sweden)

    Ramiro Velázquez

    2015-01-01

    Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments.

  13. OPTIMAL TOUR CONSTRUCTIONS FOR MULTIPLE MOBILE ROBOTS

    Directory of Open Access Journals (Sweden)

    AMIR A. SHAFIE

    2011-04-01

    The attempts to use mobile robots in a variety of environments are currently limited by their navigational capability, so a set of robots must be configured for one specific environment. The problem of navigating an environment is a fundamental problem in mobile robotics, for which various methods, including exact and heuristic approaches, have been proposed. This paper proposes a solution to the navigation problem via the use of multiple robots to explore the environment, employing heuristic methods based on a variant of the Traveling Salesman Problem (TSP) known as the Multiple Traveling Salesman Problem (M-TSP).
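
    The paper's specific heuristic is not given in this record; a minimal greedy baseline for an M-TSP, with robots claiming their nearest unvisited site in round-robin fashion, might look like the following sketch. The depot, waypoints and robot count are hypothetical.

```python
import math

def mtsp_nearest_neighbor(depot, sites, n_robots):
    """Greedy heuristic for a multiple-TSP: unvisited sites are split among
    robots, each robot repeatedly visiting its nearest remaining site.
    A simple illustrative baseline, not an optimal M-TSP solver."""
    unvisited = set(range(len(sites)))
    tours = [[depot] for _ in range(n_robots)]
    positions = [depot] * n_robots
    robot = 0
    while unvisited:
        here = positions[robot]
        nearest = min(unvisited, key=lambda i: math.dist(here, sites[i]))
        tours[robot].append(sites[nearest])
        positions[robot] = sites[nearest]
        unvisited.remove(nearest)
        robot = (robot + 1) % n_robots   # round-robin assignment
    return tours

# Hypothetical environment: one depot and a handful of waypoints
waypoints = [(2, 1), (5, 4), (1, 6), (7, 2), (6, 7), (3, 3)]
for tour in mtsp_nearest_neighbor((0, 0), waypoints, n_robots=2):
    print(tour)
```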

  14. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    International Nuclear Information System (INIS)

    Dixon, Warren

    2004-01-01

    There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (DandD) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean-space through a perspective (range dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that targets compensating for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications and the

  15. Adaptive Visual Face Tracking for an Autonomous Robot

    NARCIS (Netherlands)

    van Hoof, Herke; van der Zant, Tijn; Wiering, Marco

    2011-01-01

    Perception is an essential ability for autonomous robots in non-standardized conditions. However, the appearance of objects can change between different conditions. A system visually tracking a target based on its appearance could lose its target in those cases. A tracker learning the appearance of

  16. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  17. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a way to obtain 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.

  18. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    Science.gov (United States)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  19. Upload of Dead Reckoning Measurements for Improved Navigational Efficiency on Embedded Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Tickle, Andrew J; Harvey, Paul K, E-mail: prouction_leader@hotmail.com [School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3GJ (United Kingdom)

    2011-08-17

    The process behind Dead Reckoning (DR) is simple in that a robot can know its current location via a record of its starting position, direction and speed, without the need to look for landmarks or follow lines. This process allows a robot to drive around known environments such as indoors and heavy urban areas where traditional GPS navigation would not be an option. Discussed in this paper is an improvement of a previously designed DR mechanism in DSP Builder, where the user now enters the DR measurements and commands as a sequence via a keypad. This replaces the need for the user to programme the details into the system by altering numerous value tags within the design one by one, thus making the system more user-independent and easier to adapt for different environments. The paper shows updated simulations for repeatability, how the keypad links to the system and where this work will lead.
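
    The DSP Builder implementation itself is not shown in this record, but the underlying dead-reckoning update is the textbook odometry integration sketched below; the command sequence stands in for the keypad-entered measurements and is purely illustrative.

```python
import math

def dead_reckon(pose, speed, heading_rate, dt):
    """Advance a (x, y, theta) pose estimate from wheel speed and turn rate.
    No landmarks or GPS involved: position follows purely from the record
    of starting pose, direction and speed."""
    x, y, theta = pose
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    theta += heading_rate * dt
    return (x, y, theta)

# Hypothetical command sequence (speed m/s, turn rate rad/s, duration s),
# standing in for the keypad-entered DR commands the record describes.
pose = (0.0, 0.0, 0.0)
commands = [(0.2, 0.0, 5.0), (0.0, math.pi / 10, 5.0), (0.2, 0.0, 3.0)]
for speed, rate, duration in commands:
    for _ in range(int(duration / 0.01)):
        pose = dead_reckon(pose, speed, rate, dt=0.01)
print("final pose:", pose)  # roughly (1.0, 0.6, pi/2) for these commands
```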

  20. Software Strategy for Robotic Transperineal Prostate Therapy in Closed-Bore MRI

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; Csoma, Csaba; DiMaio, Simon P.; Gobbi, David G.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2009-01-01

    A software strategy to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and open-source navigation software are connected to one another via Ethernet to exchange commands, coordinates, and images. Six states of the system called “workphases” are defined based on the clinical scenario to synchronize behaviors of all components. The wizard-style user interface allows easy following of the clinical workflow. On top of this framework, the software provides features for intuitive needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MRI. These features are supported by calibration of robot and image coordinates by the fiducial-based registration. The performance test shows that the registration error of the system was 2.6 mm in the prostate area, and it displayed real-time 2D image 1.7 s after the completion of image acquisition. PMID:18982666

  1. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specifically, the emphasis is on safe and natural motion and navigation issues. The first part of the work focuses on developing a robotic system, which estimates human interest in interacting......, lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our everyday environments, they need to become more intelligent, and behave safely and naturally to the humans in the environment. This thesis deals with making...... as being able to navigate safely around one person, the robots must also be able to navigate in environments with more people. This can be environments such as pedestrian streets, hospital corridors, train stations or airports. The developed human-aware navigation strategy is enhanced to formulate...

  2. Control Servo-Visual de un Robot Manipulador Planar Basado en Pasividad

    Directory of Open Access Journals (Sweden)

    Carlos Soria

    2008-10-01

    Abstract: In this work, a visual servo controller is designed based on the passivity property of the visual system. A regulator with variable control gains is proposed, so that actuator saturation is avoided while the controller retains the capacity to correct errors of small magnitude. The design also takes L2 performance into account, in order to provide the capability of tracking moving objects with a small control error. Experimental results obtained on a planar industrial robot manipulator are shown to verify that the objectives of the proposed controller are met. Keywords: industrial robot manipulator, visual servo control, nonlinear control, passivity

  3. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation.

    Science.gov (United States)

    Socas, Rafael; Dormido, Raquel; Dormido, Sebastián

    2018-01-18

    In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies have been proposed to measure the consumption of different resources. The use of bandwidth, computational load and energy resources has been investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system has been compared with its equivalent discrete-time solution. In the experiments, an application of NCS for mobile robot navigation has been set up and its resource usage efficiency has been analysed.

  4. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  5. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Directory of Open Access Journals (Sweden)

    Andrew J Kolarik

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  6. Determining navigability of terrain using point cloud data.

    Science.gov (United States)

    Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt

    2013-06-01

    This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.
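
    The paper's algorithm is not reproduced in this record; one common way to obtain such a classification, sketched below, is to fit a ground plane to the point cloud and label points by their signed deviation from it. The threshold and the synthetic cloud are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cloud via SVD.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # smallest singular vector = normal
    if normal[2] < 0:                  # orient the normal upward
        normal = -normal
    return centroid, normal

def classify_surface(points, max_dev=0.05):
    """Flag points that deviate from the fitted ground plane: positive
    deviations suggest obstacles, negative ones holes or drop-offs."""
    centroid, normal = fit_plane(points)
    dev = (points - centroid) @ normal
    return np.where(dev > max_dev, "obstacle",
                    np.where(dev < -max_dev, "hole", "ground"))

# Hypothetical depth-sensor returns: mostly flat floor plus one raised point
cloud = np.array([[x, y, 0.0] for x in range(5) for y in range(5)] +
                 [[2.0, 2.5, 0.3]])
labels = classify_surface(cloud)
print(dict(zip(*np.unique(labels, return_counts=True))))
```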

  7. Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Andersen, Jens Christian

    2007-01-01

    the current position to a desired destination. This thesis presents and experimentally validates solutions for road classification, obstacle avoidance and mission execution. The road classification is based on laser scanner measurements and supported at longer ranges by vision. The road classification...... is sufficiently sensitive to separate the road from flat roadsides, and to distinguish asphalt roads from gravelled roads. The vision-based road detection uses a combination of chromaticity and edge detection to outline the traversable part of the road based on a laser scanner classified sample area.... The perception of these two sensors is utilised by a path planner to allow a number of drive modes, and especially the ability to follow road edges is investigated. The navigation mission is controlled by a script language. The navigation script controls route sequencing, junction detection, junction crossing...

  8. Cooperative Rendezvous and Docking for Underwater Robots Using Model Predictive Control and Dual Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Johansen, Tor Arne; Blanke, Mogens

    2018-01-01

    This paper considers the problem of rendezvous and docking with visual constraints in the context of underwater robots with camera-based navigation. The objective is the convergence of the vehicles to a common point while maintaining visual contact. The proposed solution includes the design of a distributed model predictive controller based on dual decomposition, which allows for optimization in a decentralized fashion. The proposed distributed controller enables rendezvous and docking between vehicles while maintaining visual contact.

  9. An intelligent inspection and survey robot. Volume 1

    International Nuclear Information System (INIS)

    1995-01-01

    ARIES #1 (Autonomous Robotic Inspection Experimental System) has been developed for the Department of Energy to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum. A new version of the Cybermotion series of mobile robots is the base mobile vehicle for ARIES. The new Model K3A consists of an improved and enhanced mobile platform and a new turret that will permit turning around in a three-foot aisle. Advanced sonar and lidar systems were added to improve navigation in the narrow drum aisles. Onboard computer enhancements include a VMEbus computer system running the VxWorks real-time operating system. A graphical offboard supervisory UNIX workstation is used for high-level planning, control, monitoring, and reporting. A camera positioning system (CPS) includes primitive instructions for the robot to use in referencing and positioning the payload. The CPS retracts to a more compact position when traveling in the open warehouse. During inspection, the CPS extends up to deploy inspection packages at different heights on the four-drum stacks of 55-, 85-, and 110-gallon drums. The vision inspection module performs a visual inspection of the waste drums. This system will locate and identify each drum, locate any unique visual features, characterize relevant surface features of interest and update a database containing the inspection data.

  10. An intelligent inspection and survey robot. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-15

    ARIES #1 (Autonomous Robotic Inspection Experimental System) has been developed for the Department of Energy to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum. A new version of the Cybermotion series of mobile robots is the base mobile vehicle for ARIES. The new Model K3A consists of an improved and enhanced mobile platform and a new turret that will permit turning around in a three-foot aisle. Advanced sonar and lidar systems were added to improve navigation in the narrow drum aisles. Onboard computer enhancements include a VMEbus computer system running the VxWorks real-time operating system. A graphical offboard supervisory UNIX workstation is used for high-level planning, control, monitoring, and reporting. A camera positioning system (CPS) includes primitive instructions for the robot to use in referencing and positioning the payload. The CPS retracts to a more compact position when traveling in the open warehouse. During inspection, the CPS extends up to deploy inspection packages at different heights on the four-drum stacks of 55-, 85-, and 110-gallon drums. The vision inspection module performs a visual inspection of the waste drums. This system will locate and identify each drum, locate any unique visual features, characterize relevant surface features of interest and update a database containing the inspection data.

  11. The visual neuroscience of robotic grasping achieving sensorimotor skills through dorsal-ventral stream integration

    CERN Document Server

    Chinellato, Eris

    2016-01-01

    This book presents interdisciplinary research that pursues the mutual enrichment of neuroscience and robotics. Building on experimental work, and on the wealth of literature regarding the two cortical pathways of visual processing - the dorsal and ventral streams - we define and implement, computationally and on a real robot, a functional model of the brain areas involved in vision-based grasping actions. Grasping in robotics is largely an unsolved problem, and we show how the bio-inspired approach is successful in dealing with some fundamental issues of the task. Our robotic system can safely perform grasping actions on different unmodeled objects, demonstrating especially reliable visual and visuomotor skills. The computational model and the robotic experiments help in validating theories on the mechanisms employed by the brain areas more directly involved in grasping actions. This book offers new insights and research hypotheses regarding such mechanisms, especially for what concerns the interaction between the...

  12. The Microsoft Visual Studio Software Development For 5 DOF Nuclear Malaysia Robot Arm V2 Control System

    International Nuclear Information System (INIS)

    Mohd Zaid Hassan; Anwar Abdul Rahman; Azraf Azman; Mohd Rizal Mamat; Mohd Arif Hamzah

    2014-01-01

    This paper presents the Microsoft Visual Studio development for the 5-DOF Nuclear Malaysia Robot Arm V2 control system. Kinematics analysis is the study of the relationship between the individual joints of a robot manipulator and the position and orientation of the end-effector. The Denavit-Hartenberg (DH) model is used to model the robot links and joints. Both forward and inverse kinematics are presented. The simulation software has been developed using Microsoft Visual Studio to solve the robot arm's kinematic behavior. (author)
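
    As a reference for the DH modelling step, a standard Denavit-Hartenberg transform and forward-kinematics chain can be sketched as follows; the 5-DOF parameter table is hypothetical, since the real arm's link parameters are not given in this record.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links under the standard
    Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0,        sa,       ca,      d],
        [0,         0,        0,      1],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-link DH transforms to get the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical (d, a, alpha) table for a 5-DOF arm
dh_table = [(0.10, 0.00, np.pi / 2),
            (0.00, 0.25, 0.0),
            (0.00, 0.20, 0.0),
            (0.00, 0.00, np.pi / 2),
            (0.08, 0.00, 0.0)]
T = forward_kinematics([0.1, 0.3, -0.2, 0.4, 0.0], dh_table)
print("end-effector position:", T[:3, 3])
```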

  13. The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.

    Science.gov (United States)

    Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni

    2017-09-01

    The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N). To limit the forces applied to cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Image navigation as a means to expand the boundaries of fluorescence-guided surgery.

    Science.gov (United States)

    Brouwer, Oscar R; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L; Wendler, Thomas; Valdés-Olmos, Renato A; van der Poel, Henk G; van Leeuwen, Fijs W B

    2012-05-21

    Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.

  15. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique

    Science.gov (United States)

    Wang, Jun-Qiang; Wang, Yu; Feng, Yun; Han, Wei; Su, Yong-Gang; Liu, Wen-Yong; Zhang, Wei-Jun; Wu, Xin-Bao; Wang, Man-Yi; Fan, Yu-Bo

    2017-01-01

    wire attempts in the robot-assisted group was significantly less than that in the freehand group (median [Q1, Q3]: 1.0 [1.0,1.0] time vs. median [Q1, Q3]: 7.0 [1.0, 9.0] times; χ2 = 15.771, respectively, P < 0.001). The instrumented SI levels did not differ between both groups (from S1 to S2, χ2 = 4.760, P = 0.093). Conclusions: Accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible. PMID:29067950

  16. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require initialization from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot, and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results present "cm"-level accuracy of target localization in an indoor environment under high-speed robot movement.

  17. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    Science.gov (United States)

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, though theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and the hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.

  18. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  19. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation

    Directory of Open Access Journals (Sweden)

    Rafael Socas

    2018-01-01

    In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies have been proposed to measure the consumption of different resources. The use of bandwidth, computational load and energy resources has been investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system has been compared with its equivalent discrete-time solution. In the experiments, an application of NCS for mobile robot navigation has been set up and its resource usage efficiency has been analysed.

  20. Performance comparison of novel WNN approach with RBFNN in navigation of autonomous mobile robotic agent

    Directory of Open Access Journals (Sweden)

    Ghosh Saradindu

    2016-01-01

    This paper addresses the performance comparison of a Radial Basis Function Neural Network (RBFNN) with a novel Wavelet Neural Network (WNN) for designing intelligent controllers for path planning of a mobile robot in an unknown environment. In the proposed WNN, different types of activation functions, such as Mexican hat, Gaussian and Morlet wavelet functions, are used in the hidden nodes. The neural networks are trained by an intelligent supervised learning technique so that the robot makes a collision-free path in the unknown environment during navigation from different starting points to targets/goals. The efficiency of the two algorithms is compared using MATLAB simulations and an experimental setup with an Arduino Mega 2560 microcontroller, in terms of path length and time taken to reach the target as indicators of the accuracy of the network models.
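
    For reference, the three hidden-node activation functions named in the record can be sketched as translated and dilated mother wavelets; the centers, scales and output weights below are illustrative, not the trained network.

```python
import numpy as np

def gaussian(t):
    return np.exp(-t**2 / 2)

def mexican_hat(t):
    """Second derivative of a Gaussian (Ricker wavelet), up to scale."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def morlet(t, w0=5.0):
    """Real-valued Morlet wavelet: a cosine windowed by a Gaussian."""
    return np.cos(w0 * t) * np.exp(-t**2 / 2)

def wnn_hidden_layer(x, centers, scales, wavelet=mexican_hat):
    """Hidden-node responses of a wavelet neural network: each node applies
    a translated and dilated mother wavelet to the input."""
    return np.array([wavelet((x - c) / s) for c, s in zip(centers, scales)])

# Hypothetical 1-D illustration: 3 hidden nodes at different translations
x = 0.7
h = wnn_hidden_layer(x, centers=[0.0, 0.5, 1.0], scales=[0.5, 0.5, 0.5])
output = h @ np.array([0.2, -0.4, 0.9])  # hypothetical output weights
print(h, output)
```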

  1. The KCLBOT: Exploiting RGB-D Sensor Inputs for Navigation Environment Building and Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Evangelos Georgiou

    2011-09-01

    This paper presents an alternative approach to implementing a stereo camera configuration for SLAM. The suggested approach implements a simplified method using a single RGB-D camera sensor mounted on a maneuverable non-holonomic mobile robot, the KCLBOT, used for extracting image feature depth information while maneuvering. Using a defined quadratic equation, based on the calibration of the camera, a depth computation model is derived based on the HSV color space map. Using this methodology it is possible to build navigation environment maps and carry out autonomous mobile robot path following and obstacle avoidance. This paper presents a calculation model which enables distance estimation using the RGB-D sensor from a Microsoft .NET Micro Framework device. Experimental results are presented to validate the distance estimation methodology.
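
    A minimal sketch of the kind of quadratic hue-to-depth mapping the record describes, assuming hypothetical calibration coefficients; the actual coefficients come from the paper's camera calibration, which is not reproduced here.

```python
import colorsys

# Hypothetical calibration coefficients for a quadratic hue-to-depth model;
# the paper derives its own coefficients from camera calibration.
A, B, C = 1.8, 0.4, 0.5   # depth = A*h**2 + B*h + C  (metres)

def depth_from_pixel(r, g, b):
    """Map a colour-coded depth pixel to metres: take the hue from the HSV
    colour space, then evaluate the calibrated quadratic."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return A * h**2 + B * h + C

print(depth_from_pixel(200, 40, 30))   # a 'near' reddish-coded pixel
print(depth_from_pixel(30, 60, 220))   # a 'far' bluish-coded pixel
```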

  2. Autonomous Integrated Navigation for Indoor Robots Utilizing On-Line Iterated Extended Rauch-Tung-Striebel Smoothing

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2013-11-01

    In order to reduce the estimated errors of the inertial navigation system (INS)/wireless sensor network (WSN)-integrated navigation for mobile robots indoors, this work proposes an on-line iterated extended Rauch-Tung-Striebel smoothing (IERTSS) utilizing inertial measuring units (IMUs) and an ultrasonic positioning system. In this mode, an iterated extended Kalman filter (IEKF) is used in forward data processing of the extended Rauch-Tung-Striebel smoothing (ERTSS) to improve the accuracy of the filtering output for the smoother. Furthermore, in order to achieve on-line smoothing, IERTSS is embedded into the average filter. For verification, a real indoor test has been done to assess the performance of the proposed method. The results show that the proposed method is effective in reducing the errors compared with the conventional schemes.

  3. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    Science.gov (United States)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.

  4. The research on visual industrial robot which adopts fuzzy PID control algorithm

    Science.gov (United States)

    Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye

    2017-03-01

    The control system of a six-degrees-of-freedom visual industrial robot, based on a control mode combining multi-axis motion control cards and a PC, was researched. To cope with the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved a better control effect. In the vision system, a CCD camera was used to acquire signals and send them to a video processing card. After processing, the PC controls the motion of the six joints through the motion control cards. Experiments showed that the manipulator can operate together with a machine tool and the vision system to realize grasping, processing and verification functions. This has a bearing on the manufacturing of industrial robots.
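
    The paper's rule base is not given in this record; the sketch below shows the general shape of a gain-scheduled "fuzzy" PID loop, with a toy interpolation standing in for full fuzzy inference and a first-order toy plant. All gains and dynamics are illustrative.

```python
def fuzzy_gain_schedule(error, d_error):
    """Very small stand-in for a fuzzy inference stage: interpolate PID
    gains between 'small error' and 'large error' rule sets. A real fuzzy
    PID uses membership functions and a rule base per gain."""
    e = min(abs(error), 1.0)            # crude membership in [0, 1]
    kp = 2.0 + 3.0 * e                  # more aggressive when error is large
    ki = 0.5 * (1.0 - e)                # integrate mostly near the setpoint
    kd = 0.1 + 0.4 * min(abs(d_error), 1.0)
    return kp, ki, kd

def fuzzy_pid_step(setpoint, measurement, state, dt):
    """One control step: gains come from the fuzzy schedule, then a
    textbook PID law is applied."""
    error = setpoint - measurement
    d_error = (error - state["prev_error"]) / dt
    kp, ki, kd = fuzzy_gain_schedule(error, d_error)
    state["integral"] += error * dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * d_error

# Hypothetical joint servo loop with a first-order toy plant
state = {"integral": 0.0, "prev_error": 0.0}
position, dt = 0.0, 0.01
for _ in range(500):
    u = fuzzy_pid_step(1.0, position, state, dt)
    position += (u - position) * dt    # toy plant dynamics
print("final position:", position)
```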

  5. Robotic platform for traveling on vertical piping network

    Science.gov (United States)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D

    2015-02-03

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  6. Visual tables of contents: structure and navigation of digital video material

    NARCIS (Netherlands)

    Janse, M.D.; Das, D.A.D.; Tang, H.K.; Paassen, van R.L.F.

    1997-01-01

    This paper presents a study that was initiated to address the relationship between visualization of content information, the structure of this information and the effective traversal and navigation for users of digital video storage systems in domestic environments. Preliminary results in two topic

  7. Virtual environment to evaluate multimodal feedback strategies for augmented navigation of the visually impaired.

    Science.gov (United States)

    Hara, Masayuki; Shokur, Solaiman; Yamamoto, Akio; Higuchi, Toshiro; Gassert, Roger; Bleuler, Hannes

    2010-01-01

    This paper proposes a novel experimental environment to evaluate multimodal feedback strategies for augmented navigation of the visually impaired. The environment consists of virtual obstacles and walls, an optical tracking system and a simple device with audio and vibrotactile feedback that interacts with the virtual environment, and presents many advantages in terms of safety, flexibility, control over experimental parameters and cost. The subject can freely move in an empty room, while the positions of the head and arm are tracked in real time. A virtual environment (walls, obstacles) is randomly generated, and audio and vibrotactile feedback are given according to the distance from the subject's arm to the virtual walls/objects. We investigate the applicability of our environment using a simple, commercially available feedback device. Experiments with unimpaired subjects show that it is possible to use the setup to "blindly" navigate in an unpredictable virtual environment. This validates the environment as a test platform to investigate navigation and exploration strategies of the visually impaired, and to evaluate novel technologies for augmented navigation.

  8. Adaptive Human aware Navigation based on Motion Pattern Analysis

    DEFF Research Database (Denmark)

    Tranberg, Søren; Svenstrup, Mikael; Andersen, Hans Jørgen

    2009-01-01

    Respecting people’s social spaces is an important prerequisite for acceptable and natural robot navigation in human environments. In this paper, we describe an adaptive system for mobile robot navigation based on estimates of whether a person seeks to interact with the robot or not. The estimates...... are based on run-time motion pattern analysis compared to stored experience in a database. Using a potential field centered around the person, the robot positions itself at the most appropriate place relative to the person and the interaction status. The system is validated through qualitative tests...
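
    The record's person-centered potential field is only described qualitatively; a toy version, with a Gaussian keep-out bubble around the person and a lowered cost in front when interaction is sought, might look like the following (all shapes and constants are hypothetical, and the cited system adapts its field from motion-pattern analysis rather than fixed parameters).

```python
import numpy as np

def person_potential(robot_xy, person_xy, person_heading,
                     interested, sigma=1.2):
    """Toy cost field around a person: high cost inside the personal zone,
    and, if the person seems interested in interaction, a lowered cost in
    front of them so the robot approaches frontally."""
    d = np.asarray(robot_xy) - np.asarray(person_xy)
    dist = np.linalg.norm(d)
    cost = np.exp(-dist**2 / (2 * sigma**2))          # keep-out bubble
    if interested:
        facing = np.array([np.cos(person_heading), np.sin(person_heading)])
        frontness = d @ facing / (dist + 1e-9)        # 1 = directly in front
        cost -= 0.4 * max(frontness, 0.0) * np.exp(-(dist - 1.5)**2)
    return cost

def descend(robot_xy, person_xy, heading, interested, step=0.05, iters=200):
    """Crude numerical gradient descent on the potential field."""
    p = np.asarray(robot_xy, dtype=float)
    for _ in range(iters):
        grad = np.zeros(2)
        for k in range(2):
            e = np.zeros(2); e[k] = 1e-4
            grad[k] = (person_potential(p + e, person_xy, heading, interested)
                       - person_potential(p - e, person_xy, heading,
                                          interested)) / 2e-4
        p -= step * grad
    return p

# Robot settles roughly in front of the person, outside the personal zone
print(descend((3.0, 2.0), (0.0, 0.0), heading=0.0, interested=True))
```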

  9. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).

  10. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    Science.gov (United States)

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. Treating bio-robots in the same way as mechanical robots, with no consideration of the animal's own innate living abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By utilizing the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work might lay a solid foundation for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots as well.

  11. Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors

    Science.gov (United States)

    Calero, D.; Fernandez, E.; Parés, M. E.

    2017-11-01

    This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics speed up their daily research on kinematic geodesy, indoor navigation and indoor positioning. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible. This means that it is possible to incorporate new navigation algorithms or sensors at no maintenance cost; only the effort related to the development tasks required to create such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. This system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, the system may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.

  12. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes.

    Directory of Open Access Journals (Sweden)

    Trevor Murray

    Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large, extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above-horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.

  13. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes.

    Science.gov (United States)

    Murray, Trevor; Zeil, Jochen

    2017-01-01

    Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations, because image differences increase smoothly with distance from a reference location, and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large, extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above-horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.

  14. Calibration and control for range imaging in mobile robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Dorum, O.H. [Norges Tekniske Hoegskole, Trondheim (Norway). Div. of Computer Systems and Telematics; Hoover, A. [University of South Florida, Tampa, FL (United States). Dept. of Computer Science and Engineering; Jones, J.P. [Oak Ridge National Lab., TN (United States)

    1994-06-01

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.

  15. Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People.

    Science.gov (United States)

    Martinez-Sala, Alejandro Santos; Losilla, Fernando; Sánchez-Aarnoutse, Juan Carlos; García-Haro, Joan

    2015-12-21

    Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system.
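
    The record names A* as SUGAR's pathfinding algorithm over a spatial database but gives no implementation details. The following is a generic grid-based A* sketch with a Manhattan-distance heuristic; representing the environment as a binary occupancy grid is an assumption made for illustration.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = blocked) with 4-connected
    moves. Returns the list of cells from start to goal, or None."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tie-breaker so the heap never compares parents
    frontier = [(h(start), next(tie), 0, start, None)]
    g_cost, parent = {start: 0}, {}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:          # already expanded via a cheaper route
            continue
        parent[cur] = prev
        if cur == goal:            # walk back through parents to recover the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None
```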

  16. Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People

    Directory of Open Access Journals (Sweden)

    Alejandro Santos Martinez-Sala

    2015-12-01

    Full Text Available Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system.

  17. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  18. Development of an in vivo visual robot system with a magnetic anchoring mechanism and a lens cleaning mechanism for laparoendoscopic single-site surgery (LESS).

    Science.gov (United States)

    Feng, Haibo; Dong, Dinghui; Ma, Tengfei; Zhuang, Jinlei; Fu, Yili; Lv, Yi; Li, Liyi

    2017-12-01

    Surgical robot systems, which can significantly improve surgical procedures, have been widely used in laparoendoscopic single-site surgery (LESS). For relatively complex surgical procedures, an in vivo visual robot system for LESS can effectively improve the visualization available to surgical robot systems. In this work, an in vivo visual robot system with a new mechanism for LESS was investigated. A finite element method (FEM) analysis was carried out to ensure the safety of the in vivo visual robot during movement, the most important concern for surgical purposes. A master-slave control strategy was adopted, in which the control model was established by off-line experiments. The in vivo visual robot system was verified using a phantom box. The experimental results show that the robot system can successfully realize the expected functionalities and meet the demands of LESS, and indicate that the in vivo visual robot, with its high manipulability, has great potential in clinical application. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Robotics Potential Fields

    Directory of Open Access Journals (Sweden)

    Jordi Lucero

    2009-01-01

    Full Text Available The problem was to calculate the path a robot would take to navigate an obstacle field and reach its goal. Three obstacles were modeled as negative potential fields, which the robot avoided, and the goal was modeled as a positive potential field that attracted the robot. The robot decided each step based on its distance from, angle to, and influence from every object. After each step, the robot recalculated and determined its next step until it reached its goal. The robot's calculations and steps were simulated with Microsoft Excel.
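
    The step rule sketched in the abstract (each move computed from the distance, angle, and influence of every object) is the classic attractive/repulsive potential-field update. A hedged Python rendering of one such step is shown below, assuming point obstacles with a finite influence radius; the gains k_att and k_rep and the fixed step size are illustrative choices, not values from the article.

```python
import math

def potential_step(robot, goal, obstacles, step=0.1, k_att=1.0, k_rep=1.0):
    """One gradient-descent step on an attractive/repulsive potential field.
    robot, goal: (x, y); obstacles: list of (x, y, influence_radius)."""
    gx, gy = goal[0] - robot[0], goal[1] - robot[1]
    d_goal = math.hypot(gx, gy) or 1e-9
    fx, fy = k_att * gx / d_goal, k_att * gy / d_goal   # unit pull toward the goal
    for ox, oy, rho0 in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        if d < rho0:  # repulsion grows sharply as the robot nears the obstacle
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1e-9
    return robot[0] + step * fx / norm, robot[1] + step * fy / norm
```

    The same update is easy to reproduce in a spreadsheet, as the article does: one column per force component, one row per step.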

  20. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    Science.gov (United States)

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision. Applied forces were similar with the robotic system under either 2D or 3D vision, and greater forces were applied during robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  1. Heuristic Decision-Making for Human-aware Navigation in Domestic Environments

    OpenAIRE

    Kirsch , Alexandra

    2016-01-01

    Robot navigation in domestic environments is still a challenge. This paper introduces a cognitively inspired decision-making method and an instantiation of it for (local) robot navigation in spatially constrained environments. We compare the method to two existing local planners with respect to efficiency, safety and legibility.

  2. Augmented reality user interface for mobile ground robots with manipulator arms

    Science.gov (United States)

    Vozar, Steven; Tilbury, Dawn M.

    2011-01-01

    Augmented Reality (AR) is a technology in which real-world visual data is combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated UGV UIs as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm is anticipated to improve operator performance. A full user-study can determine if this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.

  3. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    Directory of Open Access Journals (Sweden)

    Juan Hernandez-Vicen

    2018-03-01

    Full Text Available New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  4. The Use of Robotics to Promote Computing to Pre-College Students with Visual Impairments

    Science.gov (United States)

    Ludi, Stephanie; Reichlmayr, Tom

    2011-01-01

    This article describes an outreach program to broaden participation in computing to include more students with visual impairments. The precollege workshops target students in grades 7-12 and engage students with robotics programming. The use of robotics at the precollege level has become popular in part due to the availability of Lego Mindstorm…

  5. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine is one of the best-known gaits for snake robots in disaster-recovery missions involving navigation through narrow spaces. Other gaits for snake navigation, such as concertina or rectilinear, may be suitable for narrow spaces but are highly inefficient if used in open spaces, where the resulting reduction in friction makes movement difficult. A novel locomotion algorithm is proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed locomotion algorithm, the snake robot is able to navigate in narrow spaces, overcoming the narrow-space limitations of other gaits.
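
    The record does not give the equations of the proposed algorithm, but serpentine gaits for multi-link snake robots are commonly generated from the serpenoid curve; the sketch below shows that standard formulation as context, with placeholder parameter values, and is not the paper's modified algorithm.

```python
import math

def serpenoid_joint_angles(n_joints, t, alpha=0.5, omega=2.0, beta=0.6, gamma=0.0):
    """Joint angles for serpentine locomotion from the serpenoid curve:
    angle_i(t) = alpha * sin(omega * t + i * beta) + gamma, where alpha sets
    the winding amplitude, omega the temporal frequency, beta the phase lag
    between adjacent joints, and gamma a bias used for turning."""
    return [alpha * math.sin(omega * t + i * beta) + gamma
            for i in range(n_joints)]
```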

  6. Wireless Visual Sensor Network Robots- Based for the Emulation of Collective Behavior

    Directory of Open Access Journals (Sweden)

    Fredy Hernán Martinez Sarmiento

    2012-03-01

    Full Text Available We consider the problem of emulating bacterial quorum sensing on small mobile robots. The robots, which reflect the behavior of bacteria, are designed as mobile wireless camera nodes able to form a dynamic wireless sensor network. The emulated behavior corresponds to a simplification of bacterial quorum sensing, in which the action of a network node is conditioned by the population density of robots (nodes) in a given area. The population density is read visually using a camera: the robot estimates the population density from the images and acts according to this information. The camera runs custom firmware, reducing the complexity of the node without loss of performance. Route planning and collective behavior of the robots were observed without the use of any other external or local communication, and without the need for a system model, precise state estimation, or state feedback.

  7. Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots

    Directory of Open Access Journals (Sweden)

    Xu Zhong

    2017-02-01

    Full Text Available This article presents a self-localization scheme for indoor mobile robot navigation based on reliable design and recognition of artificial visual landmarks. Each landmark is patterned with a set of concentric circular rings in black and white, which reliably encodes the landmark’s identity under environmental illumination. A mobile robot in navigation uses an onboard camera to capture landmarks in the environment. The landmarks in an image are detected and identified using a bilayer recognition algorithm: A global recognition process initially extracts candidate landmark regions across the whole image and tries to identify enough landmarks; if necessary, a local recognition process locally enhances those unidentified regions of interest influenced by illumination and incompleteness and reidentifies them. The recognized landmarks are used to estimate the position and orientation of the onboard camera in the environment, based on the geometric relationship between the image and environmental frames. The experiments carried out in a real indoor environment show high robustness of the proposed landmark design and recognition scheme to the illumination condition, which leads to reliable and accurate mobile robot localization.

  8. Biologically based neural network for mobile robot navigation

    Science.gov (United States)

    Torres Muniz, Raul E.

    1999-01-01

    The new tendency in mobile robots is to create non-Cartesian systems based on reactions to their environment. This emerging technology is known as Evolutionary Robotics, which is combined with the Biorobotics field. This new approach brings cost-effective solutions, flexibility, robustness, and dynamism into the design of mobile robots. It also provides fast reactions to the sensory inputs and new interpretations of the environment or surroundings of the mobile robot. The Subsumption Architecture (SA) and the action selection dynamics developed by Brooks and Maes, respectively, have successfully produced autonomous mobile robots, initiating this new trend of Evolutionary Robotics. Their design keeps the mobile robot control simple. This work presents a biologically inspired modification of these schemes. The hippocampal-CA3-based neural network (HCA3) developed by William Levy is used to implement the SA, while the action selection dynamics emerge from iterations of the levels of competence implemented with the HCA3. This replacement results in a closer biological model than the SA, combining behavior-based intelligence theory with neuroscience. The design is kept simple, and it is implemented on the Khepera miniature mobile robot. The control scheme yields an autonomous mobile robot that can execute a mail delivery system and surveillance task inside a building floor.

  9. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    Science.gov (United States)

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy improves to that of the navigation camera, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  10. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    Science.gov (United States)

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
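
    The abstract specifies a Kalman filter with a second-order motion model for predicting the target state before candidate patches are drawn for the CT tracker. A minimal sketch of that prediction step for one image axis follows, with an assumed constant-acceleration state vector and a simplified process-noise form; the measurement update and the CT classifier itself are omitted.

```python
import numpy as np

def kalman_predict(x, P, dt, q=1.0):
    """Kalman prediction with a second-order (constant-acceleration) model.
    State x = [position, velocity, acceleration] for one image axis;
    P is the 3x3 state covariance, q a process-noise scale (assumed)."""
    F = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    Q = q * np.diag([dt ** 4 / 4.0, dt ** 2, 1.0])  # rough noise model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```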

  11. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    Directory of Open Access Journals (Sweden)

    Shaowu Pan

    2015-04-01

    Full Text Available A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  12. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke.

    Science.gov (United States)

    Secoli, Riccardo; Milot, Marie-Helene; Rosati, Giulio; Reinkensmeyer, David J

    2011-04-23

    Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated

  13. Visual communication system among underwater robots and divers. Kaichu robot ya diver kan no shikaku ni yoru tsushin

    Energy Technology Data Exchange (ETDEWEB)

    Chiba, H. (East Japan Railway Co., Tokyo (Japan)); Ura, T.; Fujii, T. (The University of Tokyo, Tokyo (Japan). Institute of Industrial Science)

    1993-07-01

    Performing coordinated work between underwater robots and divers, often called undersea agents, requires communication means to promote mutual understanding. This paper describes a visual communication system for use under the sea, and discusses the elementary technologies needed to realize mutual communication between the agents. The visual communication system comprises a device that indicates command patterns corresponding to the intentions to be communicated using five electroluminescence (EL) panels, a CCD camera, and a transponder. Discussions were given on image processing to recognize the command patterns, EL panel positions, and communication protocols. As a result of water-tank experiments assuming underwater communication between divers and robots, it was found that the command patterns can be recognized if the illuminance in the water tank is 100 lux or lower. The validity of the system was verified in the experiments. 4 refs., 9 figs., 1 tab.

  14. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  15. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the
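
    The general user-agnostic Bayesian update rule described above can be pictured as a recursive posterior update over a discrete set of candidate goals, applied once per arriving evidence (a user input or a gaze sample). The sketch below makes that assumption explicit; the goal names and likelihood values are invented for illustration and are not from the paper.

```python
def bayes_update(belief, likelihoods):
    """One recursive Bayesian update over candidate goals.
    belief: dict mapping goal -> prior probability.
    likelihoods: dict mapping goal -> P(evidence | goal) for the newest
    evidence. Returns the normalised posterior."""
    posterior = {g: belief[g] * likelihoods.get(g, 1e-9) for g in belief}
    z = sum(posterior.values()) or 1e-9
    return {g: p / z for g, p in posterior.items()}

# Two pieces of evidence arriving in sequence sharpen the belief:
belief = {"door": 0.5, "desk": 0.5}
belief = bayes_update(belief, {"door": 0.8, "desk": 0.3})
belief = bayes_update(belief, {"door": 0.7, "desk": 0.2})  # door now dominates
```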

  16. Navigation system for a swimming-pool cleaning robot

    Directory of Open Access Journals (Sweden)

    Lorena Cardona Rendón

    2014-01-01

    Full Text Available This paper presents the development of a navigation system to estimate the position, velocity and orientation of a pool-cleaning robot to be automated. We employ the weighted least-squares technique for the design of the navigation system, which combines the noisy measurements of a tri-axial accelerometer and a gyroscope with the solution of the differential equations that describe the robot's movement. The navigation system was tested using a Simulink-based model of the robot obtained from a three-dimensional representation built with CAD software (Autodesk Inventor). The final part of the paper presents the results and draws some conclusions about the feasibility of implementing the navigation system in the automation of a swimming-pool cleaning robot.
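
    The estimator combines noisy sensor readings with the motion-equation solution through weighted least squares. A generic sketch of the WLS solution x = (AᵀWA)⁻¹AᵀWb follows, assuming a linear measurement model with per-measurement weights (typically inverse variances); how the paper assembles A, b and the weights is not detailed in the record.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve for x minimising sum_i w_i * (A_i x - b_i)^2, i.e.
    x = (A^T W A)^{-1} A^T W b with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```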

  17. Navigating on handheld displays: Dynamic versus Static Keyhole Navigation

    NARCIS (Netherlands)

    Mehra, S.; Werkhoven, P.; Worring, M.

    2006-01-01

    Handheld displays leave little space for the visualization and navigation of spatial layouts representing rich information spaces. The most common navigation method for handheld displays is static peephole navigation: The peephole is static and we move the spatial layout behind it (scrolling). A

  18. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  19. Expert robots in nuclear plants

    International Nuclear Information System (INIS)

    Byrd, J.S.; Fisher, J.J.; DeVries, K.R.; Martin, T.P.

    1987-01-01

    Expert robots enhance safety and operations in nuclear plants. E. I. du Pont de Nemours and Company, Savannah River Laboratory, is developing expert mobile robots for deployment in nuclear applications at the Savannah River Plant. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation and manipulation functions, and to analyze sensory information. Development work using two research vehicles is underway to demonstrate semiautonomous, intelligent, expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being used to allow mobile robots to autonomously navigate and perform tasks in known environments without the need for large computer systems

  20. Expert robots in nuclear plants

    International Nuclear Information System (INIS)

    Byrd, J.S.; Fisher, J.J.; DeVries, K.R.; Martin, T.P.

    1987-01-01

    Expert robots will enhance safety and operations in nuclear plants. E. I. du Pont de Nemours and Company, Savannah River Laboratory, is developing expert mobile robots for deployment in nuclear applications at the Savannah River Plant. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation and manipulation functions, and to analyze sensory information. Development work using two research vehicles is underway to demonstrate semiautonomous, intelligent, expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being used to allow mobile robots to autonomously navigate and perform tasks in known environments without the need for large computer systems

  1. Path Planning and Replanning for Mobile Robot Navigation on 3D Terrain: An Approach Based on Geodesic

    Directory of Open Access Journals (Sweden)

    Kun-Lin Wu

    2016-01-01

    Full Text Available In this paper, mobile robot navigation on a 3D terrain with a single obstacle is addressed. The terrain is modelled as a smooth, complete manifold with well-defined tangent planes, and the hazardous region is modelled as an enclosing circle, with a hazard-grade-tuned radius, representing the obstacle projected onto the terrain, to allow efficient path-obstacle intersection checking. To resolve the intersections along the initial geodesic, by resorting to geodesic ideas from differential geometry on surfaces and manifolds, we present a geodesic-based planning and replanning algorithm as a new method for obstacle avoidance on a 3D terrain without using boundary following on the obstacle surface. The replanning algorithm generates two new paths, each a composition of two geodesics, connected via critical points whose locations rely heavily on the exploration of the terrain via directional scanning on the tangent plane at the first intersection point of the initial geodesic with the circle. An advantage of this geodesic path-replanning procedure is that the traversability of the terrain crossed by the detour path can be explored at the planning stage, based on the local Gauss-Bonnet Theorem of the geodesic triangle. A simulation demonstrates the practicality of the analytical geodesic replanning procedure for navigating a constant-speed point robot on a 3D hill-like terrain.

  2. Development Of A Mobile Robot As A Test Bed For Tele-Presentation

    Directory of Open Access Journals (Sweden)

    Diogenes Armando D. Pascua

    2016-01-01

    Full Text Available In this paper a human-sized tracked-wheel robot with a large payload capacity for tele-presentation is presented. The robot is equipped with different sensors for obstacle avoidance and localization. A high-definition web camera installed atop a pan-and-tilt assembly provides remote-environment feedback to users. An LCD monitor provides the visual display of the operator in the remote environment using the standard Skype teleconferencing software. Remote control was done via the internet through the free TeamViewer VNC remote-desktop software. Moreover, this paper presents the design details, fabrication and evaluation of the individual components. Core mobile robot movement and navigational controls were developed and tested. The effectiveness of the mobile robot as a test bed for tele-presentation was evaluated and analyzed by way of its real-time response and the time-delay effects of the network.

  3. Development of a Mobile Robot as a Test Bed for Tele-Presentation

    Directory of Open Access Journals (Sweden)

    Diogenes Armando D. Pascua

    2016-05-01

    Full Text Available In this paper a human-sized tracked-wheel robot with a large payload capacity for tele-presentation is presented. The robot is equipped with different sensors for obstacle avoidance and localization. A high-definition web camera installed atop a pan-and-tilt assembly provides remote-environment feedback to users. An LCD monitor provides the visual display of the operator in the remote environment using the standard Skype teleconferencing software. Remote control was done via the internet through the free TeamViewer VNC remote-desktop software. Moreover, this paper presents the design details, fabrication and evaluation of the individual components. Core mobile robot movement and navigational controls were developed and tested. The effectiveness of the mobile robot as a test bed for tele-presentation was evaluated and analyzed by way of its real-time response and the time-delay effects of the network.

  4. Sensor Fusion for Autonomous Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Plascencia, Alfredo

    Multi-sensor data fusion is a broad area of constant research which is applied to a wide variety of fields, such as the field of mobile robots. Mobile robots are complex systems where the design and implementation of sensor fusion is a complex task, but research applications are explored constantly. The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.

  5. Recognition and automatic tracking of weld line in flange welding by autonomous mobile robot with visual sensor

    International Nuclear Information System (INIS)

    Suga, Yasuo; Saito, Keishin; Ishii, Hideaki.

    1994-01-01

    An autonomous mobile robot with a visual sensor and four driving axes for the welding of pipe and flange joints was constructed. The robot can move along a pipe and detect the weld line to be welded with its visual sensor. Moreover, in order to perform welding automatically, the tip of the welding torch can track the weld line of the joint by rotating the robot head. In the case of pipe-flange welding, the robot can detect the contact angle between the two base metals to be welded, and the torch angle changes according to the contact angle. As a result of tracking tests with the robot system, it was made clear that recognition of the joint geometry by the laser lighting method and automatic tracking of the weld line are possible. The average tracking error was approximately ±0.3 mm, and the torch angle could always be kept at the optimum angle. (author)

  6. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    Full Text Available This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low-power, low-weight and low-volume Microelectromechanical Systems (MEMS)-type sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called the EASI algorithm (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.
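
    The record names the EASI algorithm but not its equations. Absolute attitude from two vector observations, such as a sun direction from a sun sensor and a gravity direction from an inclinometer, is classically solved by the TRIAD method; the sketch below shows that textbook construction purely as an illustration of the idea, and EASI itself may differ.

```python
import numpy as np

def triad(v1_body, v2_body, v1_ref, v2_ref):
    """TRIAD attitude solution from two unit-vector observations.
    v1_body, v2_body: directions measured in the body frame (e.g., sun and
    gravity). v1_ref, v2_ref: the same directions in the reference frame.
    Returns the rotation matrix mapping reference-frame vectors into the
    body frame."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(v1_body, v2_body) @ frame(v1_ref, v2_ref).T
```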

  7. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  8. Visual-based simultaneous localization and mapping and global positioning system correction for geo-localization of a mobile robot

    International Nuclear Information System (INIS)

    Berrabah, Sid Ahmed; Baudoin, Yvan; Sahli, Hichem

    2011-01-01

    This paper introduces an approach combining visual-based simultaneous localization and mapping (V-SLAM) and global positioning system (GPS) correction for accurate multi-sensor localization of an outdoor mobile robot in geo-referenced maps. The proposed framework combines two extended Kalman filters (EKF); the first one, referred to as the integration filter, is dedicated to the improvement of the GPS localization based on data from an inertial navigation system and wheels' encoders. The second EKF implements the V-SLAM process. The linear and angular velocities in the dynamic model of the V-SLAM EKF filter are given by the GPS/INS/Encoders integration filter. On the other hand, the output of the V-SLAM EKF filter is used to update the dynamics estimation in the integration filter and therefore the geo-referenced localization. This solution increases the accuracy and the robustness of the positioning during GPS outage and allows SLAM in less featured environments

  9. Visual Trajectory-Tracking Model-Based Control for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Andrej Zdešar

    2013-09-01

    Full Text Available In this paper we present a visual-control algorithm for driving a mobile robot along a reference trajectory. The configuration of the system consists of a two-wheeled differentially driven mobile robot that is observed by an overhead camera, which can be placed at an arbitrary, but reasonable, inclination with respect to the ground plane. The controller must be capable of generating appropriate tangential and angular control velocities for the trajectory-tracking problem, based on the information about the robot position obtained in the image. To be able to track the position of the robot through a sequence of images in real time, the robot is marked with an artificial marker that can be distinguishably recognized by the image-recognition subsystem. Using the property of differential flatness, a dynamic feedback compensator can be designed for the system, thereby extending the system into a linear form. The presented control algorithm for reference tracking combines a feedforward and a feedback loop, a structure also known as a two-DOF control scheme. The feedforward part should drive the system to the vicinity of the reference trajectory and the feedback part should eliminate any errors that occur due to noise and other disturbances. The feedforward control can never achieve accurate reference following, but this deficiency can be eliminated with the introduction of the feedback loop. The design of the model predictive control is based on the linear error model. The model predictive control is given in analytical form, so the computational burden is kept at a reasonable level for real-time implementation. The control algorithm requires that the reference trajectory be an at least twice-differentiable function. A suitable approach to designing such a trajectory is to exploit some useful properties of Bernstein-Bézier parametric curves. The simulation experiments as well as real system experiments on a robot normally used in the
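
    Bernstein-Bézier curves suit the reference-trajectory requirement stated above because they are infinitely differentiable in the curve parameter. A minimal sketch that evaluates such a curve from its control points is given below; choosing control points for a particular reference trajectory is left open, as the record gives no specifics.

```python
import math

def bezier_point(control_points, u):
    """Point on a Bernstein-Bezier curve at parameter u in [0, 1].
    control_points: list of (x, y) tuples defining the curve."""
    n = len(control_points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        basis = math.comb(n, i) * u ** i * (1 - u) ** (n - i)  # Bernstein basis
        x += basis * px
        y += basis * py
    return x, y
```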

  10. Robotic 4D ultrasound solution for real-time visualization and teleoperation

    Directory of Open Access Journals (Sweden)

    Al-Badri Mohammed

    2017-09-01

    Full Text Available Automation of the image acquisition process via robotic solutions offers a large leap towards resolving ultrasound's user-dependency. This paper, as part of a larger project aimed at developing a multipurpose 4D-ultrasonic force-sensitive robot for medical applications, focuses on achieving real-time remote visualization for 4D ultrasound image transfer. This was made possible by implementing our software modification on a GE Vivid 7 Dimension workstation, which operates a matrix array probe controlled by a 7-DOF KUKA LBR iiwa 7 robotic arm. With the help of robotic positioning and the matrix array probe, fast volumetric imaging of target regions was feasible. Testing with ultrasound volumes of roughly 880 kB over a gigabit Ethernet connection, a latency of ∼57 ms was achievable for volume transfer between the ultrasound station and a remote client application, which allows a frame rate of 17.4 fps. Our modification thus offers for the first time real-time remote visualization, recording and control of 4D ultrasound data, which can be implemented in teleoperation.

  11. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    Science.gov (United States)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.

  12. Wavefront Propagation and Fuzzy Based Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Adel Al-Jumaily

    2005-06-01

    Full Text Available Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path, while obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to navigate autonomously to its destination. This paper presents the approach and results in implementing an autonomous navigation system for an indoor mobile robot. The system developed is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and also to cater for dynamic environments.
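
    Wavefront propagation itself is a breadth-first sweep of costs outward from the goal over a grid world model; descending the resulting cost map from the start cell then yields a shortest path. A minimal sketch under the assumption of a binary occupancy grid (which a laser-built 2D world model can be discretised into):

```python
from collections import deque

def wavefront(grid, goal):
    """Propagate a wavefront (breadth-first) from the goal over a 2D
    occupancy grid (0 = free, 1 = obstacle). Returns a cost map in which
    each free cell holds its step distance to the goal (None if unreached)."""
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] is None):
                cost[nr][nc] = cost[r][c] + 1
                queue.append((nr, nc))
    return cost
```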

  13. The Robot Path Planning Based on Improved Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2016-01-01

    Full Text Available Path planning is critical to the efficiency and fidelity of robot navigation. The solution of robot path planning is to seek the shortest collision-free path from the start node to the target node. In this paper, we propose a new improved artificial fish swarm algorithm (IAFSA) to solve the mobile robot path-planning problem in a real environment. In IAFSA, an attenuation function is introduced to improve the Visual (perception range) of standard AFSA and to balance global and local search; an adaptive operator is also introduced to enhance the adaptive ability of the Step parameter. Besides, a concept of inertia weight factor, inspired by the PSO algorithm, is proposed in IAFSA to improve its convergence rate and accuracy. Five unconstrained optimization test functions are given to illustrate the strong searching ability and ideal convergence of IAFSA. Finally, a ROS (Robot Operating System)-based experiment is carried out on a Pioneer 3-DX mobile robot; the experimental results also show the superiority of IAFSA.

  14. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System), which adds a real-time obstacle-avoidance capability to the Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS comprises the GPS used by the mobile robot, the vision and image-processing systems of both robot and quadcopter, and an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain and transmits the processed images to the terrestrial robot. The main objective of the paper is the full coordination between robot and quadcopter, achieved by designing an efficient wireless communication link using Wi-Fi. In addition, it describes the method involving the use of vision and image processing from both robot and quadcopter, analyzing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing and obstacle avoidance, due to the cooperation among the different parts of the system.

  15. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Full Text Available Abstract Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for...
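
    The sound feedback maps tracking error to beep repetition rate; a minimal sketch of such a mapping might look as follows, with the linear gain and clamping constants being assumptions rather than the study's actual parameters.

    ```python
    def beep_period(tracking_error, base_period=1.0, gain=4.0, min_period=0.1):
        """Seconds between beeps: the period shrinks (beeping speeds up) as
        tracking error grows. All constants are illustrative, not the study's."""
        return max(min_period, base_period / (1.0 + gain * abs(tracking_error)))
    ```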

  16. Autonomous Robot Navigation In Public Nature Park

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2005-01-01

    This extended abstract describes a project to make a robot travel autonomously across a public nature park. The challenge is to detect and follow the right path across junctions and open squares avoiding people and obstacles. The robot is equipped with a laser scanner, a (low accuracy) GPS, wheel...

  17. Embedded mobile farm robot for identification of diseased plants

    Science.gov (United States)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for the identification of diseased plants. It addresses two major aspects of robotics: automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot's mechanical assembly, a camera and infrared sensors has been used. A Mini2440 board running an embedded Linux OS (operating system) serves as the controller.

  18. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    This paper introduces a new method to determine a person's pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot ... the person's pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments...
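
    As a rough sketch of the idea of person-centred adaptive potential functions, the toy function below combines Gaussian repulsion over an assumed personal-space radius with linear attraction toward a goal; descending its gradient produces approach trajectories that skirt the person's social space. The functional form and all constants are illustrative, not the paper's model.

    ```python
    import math

    def social_potential(robot, person, goal,
                         personal_radius=1.2, k_rep=2.0, k_att=0.5):
        """Scalar potential over (x, y) positions: Gaussian repulsion centred
        on the person plus linear attraction to the goal. Descending the
        gradient yields paths that respect the personal-space radius."""
        d_person = math.hypot(robot[0] - person[0], robot[1] - person[1])
        repulsion = k_rep * math.exp(-(d_person / personal_radius) ** 2)
        attraction = k_att * math.hypot(robot[0] - goal[0], robot[1] - goal[1])
        return repulsion + attraction
    ```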

  19. Design of robust robotic proxemic behaviour

    NARCIS (Netherlands)

    Torta, E.; Cuijpers, R.H.; Juola, J.F.; Pol, van der D.; Mutlu, B.; Bartneck, C.; Ham, J.R.C.; Evers, V.; Kanda, T.

    2011-01-01

    Personal robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behaviour-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and...

  20. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  1. Composite Configuration Interventional Therapy Robot for the Microwave Ablation of Liver Tumors

    Science.gov (United States)

    Cao, Ying-Yu; Xue, Long; Qi, Bo-Jin; Jiang, Li-Pei; Deng, Shuang-Cheng; Liang, Ping; Liu, Jia

    2017-11-01

    The existing interventional therapy robots for the microwave ablation of liver tumors have poor clinical applicability owing to their large volume, low positioning speed and complex automatic navigation control. To solve these problems, a composite configuration interventional therapy robot with passive and active joints is developed. The composite configuration reduces the size of the robot while preserving a wide range of movement, and the robot can realize rapid positioning with operational safety. The cumulative positioning error is eliminated and the control complexity is reduced by decoupling the active parts. The navigation algorithms for the robot are proposed based on the solution of the inverse kinematics and on geometric analysis. A simulated clinical test method is designed for the robot, and the functions of the robot and the navigation algorithms are verified by the test method. The mean navigation error is 1.488 mm, the maximum error is 2.056 mm, and the ablation needle is positioned within 10 s. The experimental results show that the designed robot can meet the clinical requirements for the microwave ablation of liver tumors. The composite configuration proposed in the development of this interventional therapy robot provides a new idea for the structural design of medical robots.

  2. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    Science.gov (United States)

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.

  3. Medical robotics.

    Science.gov (United States)

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics as well as of other technologies, as surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role both for surgical training and planning and for safe rehabilitation in the first stage of recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy has been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it more likely that, in the near future, their presence will be dramatically increased, taking advantage of the generational change of the end users and the increasing request for quality in health-care delivery and management.

  4. Feature tracking for visual servo based range regulation on a mobile robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2009-11-01

    Full Text Available This poster presents a visual servo approach to straight line range and velocity regulation. The difference in velocity between a lead mobile robot and a follower is regulated through velocity control of the follower, in order to maintain a constant...

  5. SIMULATION OF LANDMARK APPROACH FOR WALL FOLLOWING ALGORITHM ON FIRE-FIGHTING ROBOT USING V-REP

    Directory of Open Access Journals (Sweden)

    Sumarsih Condroayu Purbarani

    2015-08-01

    Full Text Available Autonomous mobile robots have been implemented to assist humans in their daily activities, and have also contributed significantly to human safety. An example of an autonomous robot in the human safety sector is the fire-fighting robot, which is the main topic of this paper. As an autonomous robot, the fire-fighting robot needs robust navigation ability to execute a given task in the shortest time interval. The wall-following algorithm is one of several navigation algorithms that simplify this autonomous navigation problem. As a contribution, we propose two methods that can be combined to make the existing wall-following algorithm more robust. The combined wall-following algorithm is compared to the original wall-following algorithm. By doing so, we can determine which method has more impact on the robot's navigation robustness. Our goal is to see which method is more effective when combined with the wall-following algorithm.

  6. Dynamaid, an Anthropomorphic Robot for Research on Domestic Service Applications

    OpenAIRE

    Stückler, Jörg; Behnke, Sven

    2011-01-01

    Domestic tasks require three main skills from autonomous robots: robust navigation, object manipulation, and intuitive communication with the users. Most robot platforms, however, support only one or two of the above skills. In this paper we present Dynamaid, a robot platform for research on domestic service applications. For robust navigation, Dynamaid has a base with four individually steerable differential wheel pairs, which allow omnidirectional motion. For mobile manipulation, Dynamaid i...

  7. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    Science.gov (United States)

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
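
    A generic, minimal EKF predict/correct cycle for a planar pose under the sensor setup described (wheel odometry plus an absolute position fix such as one derived from Kinect data) might be sketched as below; the noise models and the linear position measurement are assumptions, not the authors' "specific processing".

    ```python
    import numpy as np

    def ekf_predict(x, P, v, w, dt, Q):
        """Propagate pose x = (x, y, theta) with unicycle odometry (v, w)."""
        th = x[2]
        x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])        # Jacobian of the motion model
        return x_pred, F @ P @ F.T + Q

    def ekf_correct(x, P, z, R):
        """Correct with an absolute (x, y) fix, e.g. derived from visual data."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])         # measurement observes position only
        innovation = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        return x + K @ innovation, (np.eye(3) - K @ H) @ P
    ```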

  8. Virtual Simulator for Autonomous Mobile Robots Navigation System Using Concepts of Control Rapid Prototyping

    Directory of Open Access Journals (Sweden)

    Leonimer Flavio de Melo

    2013-09-01

    Full Text Available This work proposes a virtual environment for the simulation and design of supervision and control systems for mobile robots that are capable of operating and adapting in different environments and conditions. The purpose of this virtual system is to facilitate the development of embedded-architecture systems, emphasizing the implementation of tools that allow the simulation of kinematic, dynamic and control conditions, with real-time monitoring of all important system points. To this end, an open control architecture is proposed, integrating the two main techniques of robotic control implementation at the hardware level: microprocessor systems and reconfigurable hardware devices. The implemented simulator is composed of a trajectory-generating module, a kinematic and dynamic simulator module and an analysis module for results and errors. The kinematic and dynamic simulator module carries out the full simulation of the mobile robot following the trajectory produced by the trajectory generator. All the kinematic and dynamic results shown during the simulation can be evaluated and visualized in graph and table formats in the results analysis module, allowing improvements to the system and minimizing errors through the necessary adjustments and optimization. For controller implementation in the embedded system, rapid prototyping is used, a technology that, together with the virtual simulation environment, allows the development of controller projects for mobile robots. Validation and tests were accomplished with nonholonomic mobile robot models with differential drive.

  9. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach to collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of robots in implementing their tasks, especially when they navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods treat the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this causes people and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction. The person notifies the robot about the avoidance direction via interaction, and the robot searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects the navigation path autonomously, choosing the path closest to the goal location. Humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify the gestures: the first is a Back-Propagation Neural Network (BPNN) and the second is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding obstacles is implemented and integrated with the HRI system. The system was tested on an H20 robot from DrRobot Company (Canada), and a set of experiments was conducted to report the performance of the system in interacting with the human and avoiding...

  10. Obstacle Avoidance of a Mobile Robot with Hierarchical Structure

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chan Gyu [Yeungnam College of Science and Technolgy, Taegu (Korea)

    2001-06-01

    This paper proposes a new hierarchical fuzzy-neural network algorithm for navigation of a mobile robot in an unknown dynamic environment. The proposed navigation algorithm exploits the learning ability of neural networks and the capability of fuzzy theory to control highly nonlinear systems. It uses a fuzzy algorithm for goal approach and a fuzzy-neural network for effective collision avoidance. Computer simulation results for a mobile robot equipped with ultrasonic range sensors show that the suggested navigation algorithm is very effective for escaping environments with stationary and moving obstacles. (author). 11 refs., 14 figs., 2 tabs.
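
    A toy fragment of a fuzzy goal-approach rule base of this kind: triangular memberships over the heading error fire three rules whose outputs are blended by weighted average (centroid-style defuzzification). The membership shapes and rule outputs are invented for illustration; the paper's rule base is not given in the abstract.

    ```python
    def tri(x, a, b, c):
        """Triangular membership rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def goal_approach_turn(heading_error):
        """Blend three fuzzy rules mapping heading error (rad) to a turn rate."""
        rules = [
            (tri(heading_error, -3.2, -1.6, 0.0), -1.0),  # goal far to one side
            (tri(heading_error, -0.8,  0.0, 0.8),  0.0),  # roughly aligned
            (tri(heading_error,  0.0,  1.6, 3.2),  1.0),  # goal far to the other side
        ]
        num = sum(mu * u for mu, u in rules)
        den = sum(mu for mu, _ in rules)
        return num / den if den > 0.0 else 0.0
    ```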

  11. Navigation-aided visualization of lumbosacral nerves for anterior sacroiliac plate fixation: a case report.

    Science.gov (United States)

    Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Sugano, Nobuhiko

    2014-06-01

    Anterior sacroiliac joint plate fixation for unstable pelvic ring fractures avoids soft tissue problems in the buttocks; however, the lumbosacral nerves lie in close proximity to the sacroiliac joint and may be injured during the procedure. A 49-year-old woman with a type C pelvic ring fracture was treated with an anterior sacroiliac plate using a computed tomography (CT)-three-dimensional (3D)-fluoroscopy matching navigation system, which visualized the lumbosacral nerves as well as the iliac and sacral bones. We used a flat panel detector 3D C-arm, which made it possible to superimpose our preoperative CT-based plan on the intra-operative 3D-fluoroscopic images. No postoperative complications were noted. Intra-operative lumbosacral nerve visualization using computer navigation was useful to recognize the 'at-risk' area for nerve injury during anterior sacroiliac plate fixation. Copyright © 2013 John Wiley & Sons, Ltd.

  12. New Design of Mobile Robot Path Planning with Randomly Moving Obstacles

    Directory of Open Access Journals (Sweden)

    T. A. Salih

    2013-05-01

    Full Text Available The navigation of a mobile robot in an unknown environment has always been a very challenging task. In order to achieve safe and autonomous navigation, the mobile robot needs to sense the surrounding environment and plan a collision-free path. This paper focuses on designing and implementing a mobile robot with the ability to navigate smoothly in an unknown environment, avoiding collisions without having to stop in front of obstacles, detecting leakage of combustible gases, and automatically transmitting a message with the detection results to the civil defense unit by e-mail over the Internet. The design implements an artificial neural network (ANN) on a new technology represented by the Field Programmable Analog Array (FPAA) for controlling the motion of the robot. The robot with the proposed controller was tested and completed the required objective successfully.

  13. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  14. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  15. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
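
    Conservative occupancy-grid construction of the kind described is commonly implemented with per-cell log-odds updates, which keep cells revisable in dynamic environments; a generic sketch of the update (not the authors' code, and with assumed sensor-model constants):

    ```python
    import math

    L_OCC = math.log(0.7 / 0.3)   # evidence added when a beam endpoint hits the cell
    L_FREE = math.log(0.3 / 0.7)  # evidence added when a beam passes through the cell

    def update_cell(logodds, hit):
        """Accumulate evidence for one observed cell; clamping keeps cells
        revisable when the environment changes."""
        logodds += L_OCC if hit else L_FREE
        return max(-5.0, min(5.0, logodds))

    def occupancy(logodds):
        """Convert log-odds back to a probability for conservative planning."""
        return 1.0 - 1.0 / (1.0 + math.exp(logodds))
    ```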

  16. On Estimation Of The Orientation Of Mobile Robots Using Turning Functions And SONAR Information

    Directory of Open Access Journals (Sweden)

    Dorel AIORDACHIOAIE

    2003-12-01

    Full Text Available SONAR systems are widely used by artificial objects, e.g. robots, and by animals, e.g. bats, for navigation and pattern recognition. The objective of this paper is to present a solution for estimating the orientation of mobile robots in their environment, in the context of navigation, using the turning function approach. The results are shown to be accurate and can be used further in the design of navigation strategies for mobile robots.
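
    A turning-function-style contour signature can be sampled as the edge-heading profile over normalised arc length; comparing a sensed profile against a stored one over circular shifts then yields a relative orientation estimate. The sketch below is a generic rendition of that idea, with uniform sampling and unwrapped angles assumed for brevity; it is not the paper's formulation.

    ```python
    import math

    def heading_profile(polygon, samples=64):
        """Edge heading vs. normalised arc length for a closed polygon given
        as (x, y) vertices; a close relative of the classical turning function."""
        n = len(polygon)
        headings, lengths = [], []
        for i in range(n):
            x0, y0 = polygon[i]
            x1, y1 = polygon[(i + 1) % n]
            headings.append(math.atan2(y1 - y0, x1 - x0))
            lengths.append(math.hypot(x1 - x0, y1 - y0))
        total = sum(lengths)
        values, acc, edge = [], 0.0, 0
        for k in range(samples):
            s = k / samples * total          # arc length of the k-th sample
            while acc + lengths[edge] < s:   # advance to the edge containing s
                acc += lengths[edge]
                edge += 1
            values.append(headings[edge])
        return values

    def best_circular_shift(stored, sensed):
        """Shift of `sensed` minimising L2 distance to `stored`; the winning
        shift maps directly to a relative orientation estimate."""
        n = len(stored)
        cost = lambda s: sum((stored[i] - sensed[(i + s) % n]) ** 2
                             for i in range(n))
        return min(range(n), key=cost)
    ```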

  17. Toward perception-based navigation using EgoSphere

    Science.gov (United States)

    Kawamura, Kazuhiko; Peters, R. Alan; Wilkes, Don M.; Koku, Ahmet B.; Sekman, Ali

    2002-02-01

    A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low-data-rate communications constraints. An example of such an exchange is given.

  18. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  19. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  20. Cyclone: A laser scanner for mobile robot navigation

    Science.gov (United States)

    Singh, Sanjiv; West, Jay

    1991-09-01

    Researchers at Carnegie Mellon's Field Robotics Center have designed and implemented a scanning laser rangefinder. The device uses a commercially available time-of-flight ranging instrument that is capable of making up to 7200 measurements per second. The laser beam is reflected by a rotating mirror, producing up to a 360 degree view. Mounted on a robot vehicle, the scanner can be used to detect obstacles in the vehicle's path or to locate the robot on a map. This report discusses the motivation, design, and some applications of the scanner.

  1. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

    Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which, one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains, thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visual guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for the visually guided navigation in larger-brain animals.
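
    One concrete reading of familiarity-based guidance: score the current panoramic view against stored snapshots with no landmark extraction at all, and face the direction that looks most familiar (rotating the agent corresponds to circularly shifting the panorama's columns). The sum-of-squared-differences score below mirrors published ant-navigation models in spirit but is an illustrative sketch, not the authors' code.

    ```python
    import numpy as np

    def familiarity(view, memory_bank):
        """Lower = more familiar. view: (H, W) grayscale panorama;
        memory_bank: iterable of stored views of the same shape."""
        v = view.astype(float)
        return min(float(np.sum((v - m.astype(float)) ** 2)) for m in memory_bank)

    def most_familiar_heading(view, memory_bank):
        """Return the column shift whose rotated view scores as most familiar;
        convert a shift s to an angle via 360 * s / width."""
        width = view.shape[1]
        scores = [familiarity(np.roll(view, s, axis=1), memory_bank)
                  for s in range(width)]
        return int(np.argmin(scores))
    ```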

  2. Robustness of Visual Place Cells in Dynamic Indoor and Outdoor Environment

    Directory of Open Access Journals (Sweden)

    C. Giovannangeli

    2006-06-01

    Full Text Available In this paper, a model of visual place cells (PCs) based on precise neurobiological data is presented. The robustness of the model in real indoor and outdoor environments is tested. Results show that the interplay between neurobiological modelling and robotic experiments can promote the understanding of the neural structures and the achievement of robust robot navigation algorithms. Short-Term Memory (STM), soft competition and sparse coding are important for both landmark identification and computation of PC activities. The extension of the paradigm to outdoor environments has confirmed the robustness of the vision-based model and pointed to improvements in order to further foster its performance.

  3. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  4. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop a concept for a new robot system, i.e., a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with two essential technologies developed for the system. One is an autonomous navigation program with a human intervention function, which is indispensable for human and robot collaboration. The other is a position estimation method using a gyroscope and TV images to make the estimation accuracy much higher for safe navigation. Feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)

  5. Tactile object exploration using cursor navigation sensors

    DEFF Research Database (Denmark)

    Kraft, Dirk; Bierbaum, Alexander; Kjaergaard, Morten

    2009-01-01

    In robotic applications tactile sensor systems serve the purpose of localizing a contact point and measuring contact forces. We have investigated the applicability of a sensorial device commonly used in cursor navigation technology for tactile sensing in robotics, and we show the potential of this sensor for active haptic exploration. More specifically, we present experiments and results which demonstrate the extraction of relevant object properties such as local shape, weight and elasticity using this technology. Besides its low price due to mass production and its modularity, an interesting aspect of this sensor is that, in addition to localization of contact points and measurement of the contact normal force, shear forces can also be measured. This is relevant for many applications such as surface normal estimation and weight measurement. Scalable tactile sensor arrays have been developed...

  6. Mixed Marker-Based/Marker-Less Visual Odometry System for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Fabrizio Lamberti

    2013-05-01

    Full Text Available Abstract When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drifts in the computed estimates, making autonomous operations really hard to accomplish. This paper proposes a solution that alleviates the impact of the above issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which is capable of estimating the relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which allows the robot to keep the drift bounded while additionally providing the navigation commands needed for autonomous flight. Accuracy and robustness of the designed technique are demonstrated using an off-the-shelf quadrotor via extensive experimental tests.
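
    The core of the mixed scheme can be sketched as dead-reckoning on relative visual odometry with an absolute reset whenever a marker is framed; the data types and the simple snap-to-fix correction below are assumptions (a filter-based blend would be equally plausible).

    ```python
    import math

    class MixedOdometry:
        """Dead-reckon with relative visual odometry; snap to absolute marker
        fixes to keep drift bounded (a simplified stand-in for the paper's
        marker-based correction)."""

        def __init__(self, x=0.0, y=0.0, theta=0.0):
            self.x, self.y, self.theta = x, y, theta

        def apply_relative(self, dx, dy, dtheta):
            # Frame-to-frame motion expressed in the robot frame.
            c, s = math.cos(self.theta), math.sin(self.theta)
            self.x += c * dx - s * dy
            self.y += s * dx + c * dy
            self.theta += dtheta

        def apply_marker_fix(self, x_abs, y_abs, theta_abs):
            # A marker observed at a known pose gives an absolute fix that
            # discards the error accumulated since the last fix.
            self.x, self.y, self.theta = x_abs, y_abs, theta_abs
    ```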

  7. Underground mining robot: a CSIR project

    CSIR Research Space (South Africa)

    Green, JJ

    2012-11-01

    Full Text Available The Council for Scientific and Industrial Research (CSIR) in South Africa is currently developing a robot for the inspection of the ceiling (hanging-wall) in an underground gold mine. The robot autonomously navigates the 30 meter long by 3 meter...

  8. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  9. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  10. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  11. An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry

    Science.gov (United States)

    Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro

    This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive by utilizing a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
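
    A particle-filter weight update of the kind described, scoring projected scan endpoints against the 2D grid with a likelihood-field model, might be sketched as follows; the `distance_to_occupied` grid interface and the noise constant are assumptions, not the authors' implementation.

    ```python
    import math

    def update_weights(particles, scan, grid, sigma=0.3):
        """particles: list of dicts with keys x, y, theta, w. scan: (range,
        bearing) pairs. grid.distance_to_occupied(x, y) is an assumed
        interface returning the distance to the nearest occupied cell."""
        for p in particles:
            w = 1.0
            for r, b in scan:
                ex = p["x"] + r * math.cos(p["theta"] + b)
                ey = p["y"] + r * math.sin(p["theta"] + b)
                d = grid.distance_to_occupied(ex, ey)
                w *= math.exp(-0.5 * (d / sigma) ** 2)  # endpoint near obstacle
            p["w"] = w
        total = sum(p["w"] for p in particles) or 1.0
        for p in particles:
            p["w"] /= total  # normalise before resampling
    ```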

  12. An overview on real-time control schemes for wheeled mobile robot

    Science.gov (United States)

    Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.

    2018-04-01

    The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMRs) navigating in environments such as roads. A WMR needs a good controller to avoid collisions under disturbances and to maintain the tracking error at zero. The controllers are used together with aiding sensors that measure the WMR's velocities, posture, and interference in order to estimate the required torque to be applied to the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems are found in the literature, namely: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and Active Force Control. MATLAB/Simulink is the main software used to simulate and implement the control system. The real-time toolbox in MATLAB/Simulink is used to receive data from sensors and send commands to actuators in the presence of disturbances; other software such as C, C++ and Visual Basic is rarely used.
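
    As a concrete instance of the kinematic-based controller category, the classic Kanayama-style posture-tracking law computes (v, w) from the pose error expressed in the robot frame; the gains here are illustrative.

    ```python
    import math

    def kinematic_control(pose, ref, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
        """pose/ref: (x, y, theta). Returns (v, w) tracking commands using the
        standard Kanayama-style kinematic law (gains are illustrative)."""
        dx, dy = ref[0] - pose[0], ref[1] - pose[1]
        th = pose[2]
        # Express the position error in the robot frame.
        ex = math.cos(th) * dx + math.sin(th) * dy
        ey = -math.sin(th) * dx + math.cos(th) * dy
        eth = ref[2] - th
        v = v_ref * math.cos(eth) + kx * ex
        w = w_ref + v_ref * (ky * ey + kth * math.sin(eth))
        return v, w
    ```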

  13. Smart Material-Actuated Flexible Tendon-Based Snake Robot

    Directory of Open Access Journals (Sweden)

    Mohiuddin Ahmed

    2016-05-01

    Full Text Available A flexible snake robot has better navigation ability compared with existing electric-motor-based rigid snake robots, due to its excellent bending capability when navigating inside a narrow maze. This paper discusses the modelling, simulation and experimental testing of a flexible snake robot. The modelling consists of the kinematic and dynamic analysis of the snake robot. A platform based on the Incompletely Restrained Positioning Mechanism (IRPM) is proposed, which uses the external force provided by a compliant flexible beam in each of the actuators. The compliant central column allows the configuration to achieve three degrees of freedom (3DOFs) with three tendons. The proposed flexible snake robot is built using smart materials, namely electroactive polymers (EAPs), which can be activated by applying power. Finally, a physical prototype of the snake robot has been built and an experiment performed in order to validate the proposed model.

  14. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller that regulates the speed of the geared DC motor.
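
    In software form, the PD wall-following loop hosted on the FPGA reduces to a few lines; the setpoint distance, gains and sample time below are assumptions, not the paper's tuned values.

    ```python
    def pd_wall_follow(side_distance, prev_error,
                       setpoint=0.4, kp=2.0, kd=0.5, dt=0.05):
        """Steer to hold a fixed lateral distance to the wall from an
        ultrasonic range reading. Returns (steering_command, error)."""
        error = setpoint - side_distance
        derivative = (error - prev_error) / dt
        steering = kp * error + kd * derivative  # positive = turn away from wall
        return steering, error
    ```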

  15. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features takes motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel SLAM-based technique, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that attempts to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation, reaching centimetre-level positioning accuracy.

  16. Multi-Sensor Localization and Navigation for Remote Manipulation in Smoky Areas

    Directory of Open Access Journals (Sweden)

    Jose Vicente Marti

    2013-04-01

    Full Text Available When localizing mobile sensors and actuators in indoor environments, laser meters, ultrasonic meters or even image processing techniques are usually used. In smoky conditions, however, due to a fire or building collapse, once the smoke or dust density grows, optical methods are no longer efficient. In these scenarios other types of sensors must be used, such as sonar, radar or radiofrequency signals. Indoor localization in low-visibility conditions due to smoke is one of the EU GUARDIANS [1] project goals. The developed method aims to position a robot in front of doors, fire extinguishers and other points of interest with enough accuracy to allow a human operator to manipulate the robot's arm in order to act on the element. In coarse-grained localization, a fingerprinting technique based on ZigBee and WiFi signals is used, allowing the robot to navigate inside the building in order to get near the point of interest that requires manipulation. In fine-grained localization, a remotely controlled programmable high-intensity LED panel is used, which acts as a reference for the system in smoky conditions. Smoke detection and visual fine-grained localization are then used to position the robot precisely at the manipulation point (e.g., doors, valves, etc.).
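
    Coarse-grained fingerprinting of this kind typically matches a live RSSI vector against a surveyed radio map; a minimal k-nearest-neighbour sketch follows, where the radio-map format and the choice of k are assumptions.

    ```python
    def locate(rssi, radio_map, k=3):
        """rssi: {ap_id: dBm}. radio_map: list of (position, {ap_id: dBm})
        fingerprints collected during a survey. Returns the average of the
        k closest fingerprints' positions."""
        def distance(fp):
            shared = set(rssi) & set(fp)
            if not shared:
                return float("inf")  # no APs in common: cannot match
            return sum((rssi[ap] - fp[ap]) ** 2 for ap in shared) / len(shared)

        nearest = sorted(radio_map, key=lambda entry: distance(entry[1]))[:k]
        xs = [pos[0] for pos, _ in nearest]
        ys = [pos[1] for pos, _ in nearest]
        return sum(xs) / len(xs), sum(ys) / len(ys)
    ```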

  17. Navigating the pathway to robotic competency in general thoracic surgery.

    Science.gov (United States)

    Seder, Christopher W; Cassivi, Stephen D; Wigle, Dennis A

    2013-01-01

    Although robotic technology has addressed many of the limitations of traditional videoscopic surgery, robotic surgery has not gained widespread acceptance in the general thoracic community. We report our initial robotic surgery experience and propose a structured, competency-based pathway for the development of robotic skills. Between December 2008 and February 2012, a total of 79 robot-assisted pulmonary, mediastinal, benign esophageal, or diaphragmatic procedures were performed. Data on patient characteristics and perioperative outcomes were retrospectively collected and analyzed. During the study period, one surgeon and three residents participated in a triphasic, competency-based pathway designed to teach robotic skills. The pathway consisted of individual preclinical learning followed by mentored preclinical exercises and progressive clinical responsibility. The robot-assisted procedures performed included lung resection (n = 38), mediastinal mass resection (n = 19), hiatal or paraesophageal hernia repair (n = 12), and Heller myotomy (n = 7), among others (n = 3). There were no perioperative mortalities, with a 20% complication rate and a 3% readmission rate. Conversion to a thoracoscopic or open approach was required in eight pulmonary resections to facilitate dissection (six) or to control hemorrhage (two). Fewer major perioperative complications were observed in the later half of the experience. All residents who participated in the thoracic surgery robotic pathway perform robot-assisted procedures as part of their clinical practice. Robot-assisted thoracic surgery can be safely learned when skill acquisition is guided by a structured, competency-based pathway.

  18. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    Science.gov (United States)

    Tunstel, Edward

    2000-01-01

    This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
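
    The emphasised architectural feature, where a behavior contributes only once its applicability exceeds a threshold, can be sketched as thresholded weighted blending of behavior recommendations; the structure and numbers are assumptions, not the paper's rule base.

    ```python
    def blend_behaviors(behaviors, threshold=0.2):
        """behaviors: list of (activation, command) pairs, with activation in
        [0, 1] taken from each fuzzy behavior's rule firing strength.
        Behaviors below the threshold are ignored entirely; the rest are
        blended in proportion to their activation."""
        active = [(a, u) for a, u in behaviors if a >= threshold]
        if not active:
            return 0.0  # nothing fires strongly enough; hold a default command
        total = sum(a for a, _ in active)
        return sum(a * u for a, u in active) / total
    ```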

  19. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    Science.gov (United States)

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  20. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions...

  1. Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates

    Science.gov (United States)

    Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.

    2016-03-01

    Several groups have developed brain-machine-interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
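
    Training a linear decoder on passive-navigation data reduces, in its simplest form, to least-squares regression from binned firing rates to the two wheelchair velocities; the sketch below is generic and not the study's actual pipeline.

    ```python
    import numpy as np

    def fit_linear_decoder(firing_rates, velocities):
        """firing_rates: (T, N) binned spike counts; velocities: (T, 2)
        translational and rotational wheelchair speeds recorded during
        passive navigation. Returns weights W including a bias row."""
        X = np.hstack([firing_rates, np.ones((firing_rates.shape[0], 1))])
        W, *_ = np.linalg.lstsq(X, velocities, rcond=None)
        return W

    def decode(firing_rate_bin, W):
        """Map one bin of neural activity to (v, w) wheelchair commands."""
        x = np.append(firing_rate_bin, 1.0)  # append the bias term
        return x @ W
    ```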

  2. Modelling and testing proxemic behaviour for humanoid robots

    NARCIS (Netherlands)

    Torta, E.; Cuijpers, R.H.; Juola, J.F.; Pol, van der D.

    2012-01-01

    Humanoid robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behavior-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and dynamically

  3. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    Full Text Available This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focused on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this was then supplemented with Canny edge detection to separate boundaries similar in colour to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) could then be processed by fuzzy logic or neural networks to control the robot’s next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours in multicoloured floor types such as patterned carpets.
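
    The segmentation-plus-edges pipeline described above can be sketched with standard OpenCV calls; a minimal version, assuming the bottom image rows show clear floor, with illustrative threshold values:

        import cv2
        import numpy as np

        frame = cv2.imread("frame.png")                   # hypothetical input frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        floor = hsv[-40:, :, :].reshape(-1, 3)            # assume the bottom rows are floor
        floor_mask = cv2.inRange(hsv, floor.min(axis=0), floor.max(axis=0))

        edges = cv2.Canny(frame, 50, 150)                 # split similarly coloured boundaries
        free_space = cv2.bitwise_and(floor_mask, cv2.bitwise_not(edges))
        # free_space: white = traversable area, black = obstacle, as in the binary map above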

  4. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.

    Science.gov (United States)

    Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques

    2015-04-01

    Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from the medical imaging preoperative workup and real-time patient images, in order to visualize unapparent anatomical details. The 3D model can also be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained from a thoracoabdominal CT scan using dedicated software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Two fully robotic AR-assisted segmentectomies of segment V and one of segment VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful, without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.

  5. Optic flow-based collision-free strategies: From insects to robots.

    Science.gov (United States)

    Serres, Julien R; Ruffier, Franck

    2017-09-01

    Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e., they cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
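
    The speed-to-distance ratio in (i) and (ii) has a standard compact form (a textbook formulation, not quoted from the paper): for an observer translating at speed $v$, a surface point at distance $D$ seen at angle $\theta$ from the motion direction generates a translational optic flow of magnitude

        \omega = \frac{v}{D}\,\sin\theta

    so holding $\omega$ constant regulates the ratio $v/D$ without measuring speed or distance separately, which is what makes it useful for insect-inspired collision avoidance.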

  6. A Robotic Guide for Blind People. Part 1. A Multi-National Survey of the Attitudes, Requirements and Preferences of Potential End-Users

    Directory of Open Access Journals (Sweden)

    Marion A. Hersh

    2010-01-01

    Full Text Available This paper reports the results of a multi-national survey, carried out in several different countries, on the attitudes, requirements and preferences of blind and visually impaired people for a robotic guide. The survey is introduced by a brief overview of existing work on robotic travel aids and other mobile robotic devices. The questionnaire comprises three sections: personal information about respondents, existing use of mobility and navigation devices, and the functions and other features of a robotic guide. The survey found that respondents were very interested in the robotic guide having a number of different functions and being useful in a wide range of circumstances. They considered the robot's appearance to be very important but did not like any of the proposed designs. From their comments, respondents wanted the robot to be discreet and inconspicuous, small, lightweight and portable, easy to use, and robust to damage, to require minimal maintenance, and to have a long life and a long battery life.

  7. Exploring child-robot engagement in a collaborative task

    NARCIS (Netherlands)

    Zaga, Cristina; Truong, Khiet Phuong; Lohse, M.; Evers, Vanessa

    Imagine a room with toys scattered on the floor and a robot that is motivating a small group of children to tidy up. This scenario poses real-world challenges for the robot, e.g., the robot needs to navigate autonomously in a cluttered environment, it needs to classify and grasp objects, and it

  8. Prototype of a Wheeled Fire-Extinguishing Robot Using the Wall Follower Navigation Technique

    Directory of Open Access Journals (Sweden)

    Ery Safrianti

    2012-10-01

    Full Text Available The fire robot serves to detect and extinguish fire. The robot is controlled automatically by an ATMEGA8535 microcontroller. It carries several sensors: five Parallax PING ultrasonic sensors for navigation, a UVTron flame detector with its driver, and an L298 DC motor driver with two DC servo motors. The robot was developed from a previously studied prototype, with the addition, on the hardware side, of sound activation and two sets of line detectors. The robot activates when it receives input from the sound-activation unit and then starts to find the fire using a “search the wall” navigation technique. The line sensors are used to detect doors and the home position and to circle the fire area. To extinguish the fire, the robot uses a fan driven by a BD139 transistor circuit. The overall test results show that the robot can detect the presence of fire in each room, find the fire, and extinguish it within one minute.
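
    The “search the wall” behaviour reduces to a wall-following decision rule over the range readings. A minimal sketch of a left-wall follower (the thresholds, in cm, are assumptions, not values from the paper):

        # Decide the next motion command from the front and left range readings.
        def wall_follower_step(front: float, left: float,
                               target: float = 20.0, band: float = 5.0) -> str:
            if front < target:         # wall ahead: turn away from the followed wall
                return "turn_right"
            if left > target + band:   # lost the wall: steer back toward it
                return "turn_left"
            if left < target - band:   # too close to the wall: ease away
                return "turn_right"
            return "forward"           # within the distance band: keep going

        print(wall_follower_step(front=80.0, left=22.0))   # -> "forward"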

  9. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  10. Autonomous navigation system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
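
    The claimed behaviour can be paraphrased as a small control function. A sketch under stated assumptions (the gains, the horizon formula, and the cruise rule are illustrative readings of the abstract, not the patent's actual constants):

        # One iteration of the event-horizon velocity adjustment.
        def adjust_velocities(v_trans, v_rot, ranges, v_max,
                              speed_factor=0.8, k_rot=0.5, k_rng=0.3, t_look=1.5):
            horizon = v_trans * t_look            # event horizon grows with current speed
            nearest = min(ranges)
            if nearest < horizon:                 # event-horizon intrusion
                v_rot = k_rot * v_rot - k_rng * nearest   # damped turn biased by clearance
                v_trans = k_rng * nearest                 # slow down with remaining range
            else:
                v_trans = speed_factor * v_max    # no intrusion: cruise at a set fraction
            return v_trans, v_rot

        print(adjust_velocities(1.0, 0.2, [0.9, 2.5], v_max=1.5))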

  11. Training Revising Based Traversability Analysis of Complex Terrains for Mobile Robot

    Directory of Open Access Journals (Sweden)

    Rui Song

    2014-05-01

    Full Text Available Traversability analysis is one of the core issues in autonomous navigation for mobile robots: identifying accessible areas from the information provided by the robot's sensors. This paper proposes a model for analyzing the traversability of complex terrains based on rough sets and training revising; the model describes traversability for mobile robots in terms of a traversability cost. The experiments lead to the conclusion that a traversability analysis model based on rough sets and training revising can be used where terrain features are rich and complex, can effectively handle unstructured environments, and can provide reliable and effective decision rules for the autonomous navigation of mobile robots.

  12. Conference on Space and Military Applications of Automation and Robotics

    Science.gov (United States)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  13. Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition

    Science.gov (United States)

    Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.

    2017-05-01

    In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).

  14. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    Directory of Open Access Journals (Sweden)

    Parth Gargava

    2017-08-01

    Full Text Available A Brain Computer Interface (BCI) is developed to navigate a microcontroller-based robot using Emotiv sensors. The BCI system has a pipeline of five stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It serves as a prototype to aid the physical movement of neurological patients who are unable to control their muscular movements. All stages of the pipeline are designed to process bodily actions, such as eye blinks, into navigation commands for the robot. The prototype relies on feature-learning and classification-centric techniques using a support vector machine. The suggested pipeline achieves navigation of the robot in four directions in real time with an accuracy of 93 percent.
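
    The classification stage can be sketched with scikit-learn; a minimal version on synthetic placeholder features (the epoch counts, channel count, and SVM settings are assumptions, not the study's configuration):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 14))        # 200 epochs x 14 channels (assumed)
        y = rng.integers(0, 4, size=200)      # labels for the four directions

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X[:150], y[:150])             # train on the first 150 epochs

        commands = ["forward", "backward", "left", "right"]
        print(commands[int(clf.predict(X[150:151])[0])])   # command for a new epoch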

  15. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    Science.gov (United States)

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table, along with the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed via a smartphone, that can navigate around a typical home or care environment, avoiding obstacles, and position itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance and object-based sonar navigation.

  16. Prototype of Remote Controlled Robot Vehicle to Scan Radioactive Contaminated Areas

    International Nuclear Information System (INIS)

    Ratongasoandrazana, J.B.; Raoelina Andriambololona; Rambolamanana, G.; Andrianiaina, H.; Rajaobelison, J.

    2016-01-01

    Ionizing radiation is not directly perceptible by the human senses. Maintenance and handling of ionizing radiation sources therefore present risks of very serious and often irreversible harm to the human organism, and experimentation and maintenance work in such zones demands a minimum of precaution. Thus, the main objective of this work is to design and develop, in hardware and software, a prototype of an educational semi-autonomous radio-frequency-controlled robot vehicle, based on an 8-bit AVR-RISC Flash microcontroller (ATmega128L), able to detect, identify and map a radioactively contaminated area. An integrated video camera coupled with a UHF video transmitter module, placed at the front of the robot, is used for visual feedback control to direct it precisely toward the place to be reached. The navigation information and the collected data are transmitted from the robot to the computer via two radio-frequency transceivers providing peer-to-peer serial data transfer in half-duplex mode. A joystick module connected to the computer's parallel port allows full motion control of the platform. A PC user-interface program has been designed to allow full control of all functions of the robot vehicle.

  17. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    International Nuclear Information System (INIS)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho; Woo, Hyun Soo; Jo, Jae Min; Lee, Min Hee

    2015-01-01

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, a client case review program, and a commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  18. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  19. ARK-2: a mobile robot that navigates autonomously in an industrial environment

    International Nuclear Information System (INIS)

    Bains, N.; Nickerson, S.; Wilkes, D.

    1995-01-01

    ARK-2 is a robot that uses a vision system based on a camera and spot laser rangefinder mounted on a pan-and-tilt unit for navigation. This vision system recognizes known landmarks and computes the robot's position relative to them, thus bounding the error in its position. The vision system is also used to find known gauges, given their approximate locations, and to take readings from them. 'Approximate' in this context means the same sort of accuracy that a human would need: 'down aisle 3 on the right' suffices. ARK-2 is also equipped with the FAD (Floor Anomaly Detector), which is based on the NRC (National Research Council of Canada) BIRIS (Bi-IRIS) sensor and keeps ARK-2 from falling into open drains or trying to negotiate large cables or pipes on the floor. ARK-2 has also been equipped with a variety of application sensors for security and safety patrol applications. Radiation sensors are used to produce contour maps of radiation levels. In order to detect fires, environmental changes and intruders, ARK-2 is equipped with smoke, temperature, humidity and gas sensors, scanning ultraviolet and infrared detectors and a microwave motion detector. In order to support autonomous, untethered operation for hours at a time, ARK-2 also has onboard systems for power, sonar-based obstacle detection, computation and communications. The project uses a UNIX environment for software development, with the onboard SPARC processor appearing as just another workstation on the LAN. Software modules include the hardware drivers, path planning, navigation, emergency stop, obstacle mapping and status monitoring. ARK-2 may also be controlled from a ROBCAD simulation. (author)
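
    The landmark-based error bounding described above boils down to inverting a range-and-bearing observation of a known landmark. A minimal sketch, assuming a planar pose and world-frame landmark coordinates (the frame conventions are illustrative):

        import math

        # Recover the robot position from one known landmark seen at (range, bearing).
        def fix_position(landmark_xy, rng, bearing, heading):
            lx, ly = landmark_xy
            a = heading + bearing                 # world-frame direction to the landmark
            return lx - rng * math.cos(a), ly - rng * math.sin(a)

        # Landmark at (2, 7), seen 4 m straight ahead while heading along +y:
        print(fix_position((2.0, 7.0), rng=4.0, bearing=0.0, heading=math.pi / 2))
        # -> approximately (2.0, 3.0)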

  20. Augmented environments for the targeting of hepatic lesions during image-guided robotic liver surgery.

    Science.gov (United States)

    Buchs, Nicolas C; Volonte, Francesco; Pugin, François; Toso, Christian; Fusaglia, Matteo; Gavaghan, Kate; Majno, Pietro E; Peterhans, Matthias; Weber, Stefan; Morel, Philippe

    2013-10-01

    Stereotactic navigation technology can enhance guidance during surgery and enable the precise reproduction of planned surgical strategies. Currently, specific systems (such as the CAS-One system) are available for instrument guidance in open liver surgery. This study aims to evaluate the implementation of such a system for the targeting of hepatic tumors during robotic liver surgery. Optical tracking references were attached to one of the robotic instruments and to the robotic endoscopic camera. After instrument and video calibration and patient-to-image registration, a virtual model of the tracked instrument and the available three-dimensional images of the liver were displayed directly within the robotic console, superimposed onto the endoscopic video image. An additional superimposed targeting viewer allowed for the visualization of the target tumor, relative to the tip of the instrument, for an assessment of the distance between the tumor and the tool for the realization of safe resection margins. Two cirrhotic patients underwent robotic navigated atypical hepatic resections for hepatocellular carcinoma. The augmented endoscopic view allowed for the definition of an accurate resection margin around the tumor. The overlay of reconstructed three-dimensional models was also used during parenchymal transection for the identification of vascular and biliary structures. Operative times were 240 min in the first case and 300 min in the second. There were no intraoperative complications. The da Vinci Surgical System provided an excellent platform for image-guided liver surgery with a stable optic and instrumentation. Robotic image guidance might improve the surgeon's orientation during the operation and increase accuracy in tumor resection. Further developments of this technological combination are needed to deal with organ deformation during surgery. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    Science.gov (United States)

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  2. Intraoperative navigation of an optically tracked surgical robot.

    Science.gov (United States)

    Cornellà, Jordi; Elle, Ole Jakob; Ali, Wajid; Samset, Eigil

    2008-01-01

    This paper presents an adaptive control scheme for improving the performance of a surgical robot when it executes tasks autonomously. A commercial tracking system is used to correlate the robot with the preoperative plan and to correct the position of the robot when errors between the real and planned positions are detected. Because the tracking system provides noisy signals, a Kalman filter is proposed to smooth the variations and to increase the stability of the system. The efficiency of the approach has been validated using rigid and flexible endoscopic tools; in both cases the target points could be reached with an error of less than 1 mm. These results make the approach suitable for a range of abdominal procedures, such as autonomous repositioning of endoscopic tools or probes for percutaneous procedures.
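
    The proposed smoothing can be illustrated with the simplest possible Kalman filter; a scalar, constant-position sketch (the process and measurement variances are assumptions, not the paper's tuning):

        # Smooth a stream of noisy tracker readings with a 1D Kalman filter.
        def kalman_smooth(measurements, q=1e-4, r=1e-2):
            x, p = measurements[0], 1.0    # state estimate and its variance
            out = []
            for z in measurements:
                p = p + q                  # predict: constant-position model
                k = p / (p + r)            # Kalman gain
                x = x + k * (z - x)        # correct with the new measurement
                p = (1.0 - k) * p
                out.append(x)
            return out

        print(kalman_smooth([0.0, 0.12, 0.08, 0.11, 0.09]))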

  3. Robotic assisted andrological surgery

    Science.gov (United States)

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy unparalleled by any previous loupe or magnification technique. This technology revolutionized microsurgical techniques in andrology. Today, we may be on the verge of a second such revolution with the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and in a number of other microsurgical fields, such as ophthalmology, hand surgery, and plastic and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, better surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  4. Manipulation robot system based on visual guidance for sealing blocking plate of steam generator

    International Nuclear Information System (INIS)

    Duan Xingguang; Wang Yonggui; Li Meng; Kong Xiangzhan; Liu Qingsong

    2016-01-01

    To reduce labor intensity and irradiation exposure time inside the steam generator during the maintenance period of a nuclear power plant, a blocking-plate manipulation robot system, comprising the manipulation robot and a pneumatic control console, is developed as an automatic remote-control tool to help staff seal the steam generator's primary pipes. The manipulation robot for fastening/loosening bolts uses visual guidance to locate its targets: a recognition algorithm extracts the bolt-center coordinates from the image captured by the camera during the procedure. A control strategy based on position and current feedback is proposed for single-bolt operation and for fully automatic operation over all bolts. A virtual interactive interface and remote monitoring are also designed to improve operability and safety. Finally, experiments verified the effectiveness of the system, and future work is discussed. (author)

  5. Implementation of a map route analysis robot: combining an Android smart device and differential-drive robotic platform

    Directory of Open Access Journals (Sweden)

    Tseng Chi-Hung

    2017-01-01

    Full Text Available This paper proposes an easy-to-implement and relatively low-cost robotic platform with the capability to perform image identification, object tracking, and Google Maps route planning and navigation. Based on Java and Bluetooth communication architectures, the system demonstrates the integration of Android smart devices and a differential-drive robotic platform.

  6. Quantifying the impact on navigation performance in visually impaired: Auditory information loss versus information gain enabled through electronic travel aids.

    Directory of Open Access Journals (Sweden)

    Alex Kreilinger

    Full Text Available This study's purpose was to analyze and quantify the impact of auditory information loss versus information gain provided by electronic travel aids (ETAs) on navigation performance in people with low vision. Navigation performance of ten subjects (age: 54.9±11.2 years) with visual acuities >1.0 LogMAR was assessed via the Graz Mobility Test (GMT). Subjects passed through a maze in three different modalities: 'Normal' with visual and auditory information available, 'Auditory Information Loss' with artificially reduced hearing (leaving only visual information), and 'ETA' with a vibrating ETA based on ultrasonic waves, thereby facilitating visual, auditory, and tactile information. Main performance measures comprised passage time and number of contacts. Additionally, head tracking was used to relate head movements to motion direction. When comparing 'Auditory Information Loss' to 'Normal', subjects needed significantly more time (p<0.001), made more contacts (p<0.001), had higher relative viewing angles (p = 0.002), and had a higher percentage of orientation losses (p = 0.011). The only significant difference when comparing 'ETA' to 'Normal' was a reduced number of contacts (p<0.001). Our study provides objective, quantifiable measures of the impact of reduced hearing on the navigation performance of low vision subjects. Significant effects of 'Auditory Information Loss' were found for all measures; for example, passage time increased by 17.4%. These findings show that low vision subjects rely on auditory information for navigation. In contrast, the impact of the ETA was not significant, but further analysis of head movements revealed two different coping strategies: half of the subjects used the ETA to increase speed, whereas the other half aimed at avoiding contacts.

  7. Autonomous mobile robot teams

    Science.gov (United States)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  8. Design and evaluation of a continuum robot with extendable balloons

    Directory of Open Access Journals (Sweden)

    E. Y. Yarbasi

    2018-02-01

    Full Text Available This article presents the design and preliminary evaluation of a novel continuum robot actuated by two extendable balloons. Extendable balloons are utilized as the actuation mechanism of the robot, and they are attached to the tip from their slack sections. These balloons can extend very much in length without having a significant change in diameter. Employing two balloons in an axially extendable, radially rigid flexible shaft, radial strain becomes constricted, allowing high elongation. As inflated, the balloons apply a force on the wall of the tip, pushing it forward. This force enables the robot to move forward. The air is supplied to the balloons by an air compressor and its flow rate to each balloon can be independently controlled. Changing the air volumes differently in each balloon, when they are radially constricted, orients the robot, allowing navigation. Elongation and force generation capabilities and pressure data are measured for different balloons during inflation and deflation. Afterward, the robot is subjected to open field and maze-like environment navigation tests. The contribution of this study is the introduction of a novel actuation mechanism for soft robots to have extreme elongation (2000 % in order to be navigated in substantially long and narrow environments.

  9. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    Science.gov (United States)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After robots have been utilized for more than 30 years in classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The goal of the mobile manipulator is to support the off-shift staff by carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement using a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.

  10. Image Mapping and Visual Attention on the Sensory Ego-Sphere

    Science.gov (United States)

    Fleming, Katherine Achim; Peters, Richard Alan, II

    2012-01-01

    The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and that will enable it to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities form a stochastic network. Then given a task, the robot finds a path in the network that leads from its current state to the goal. The SES provides a short-term memory for the cognitive functions of the robot, association of sensory and motor data via spatio-temporal coincidence, direction of the attention of the robot, navigation through spatial localization with respect to known or discovered landmarks, and structured data sharing between the robot and human team members, the individuals in multi-robot teams, or with a C3 center.
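
    The core indexing operation of such an egocentric sphere is mapping an event's direction to a node. A simplified sketch that quantizes azimuth and elevation on a uniform grid (the actual SES uses a geodesic tessellation; the bin counts here are assumptions):

        import math

        # Map a direction vector in the robot frame to an (azimuth, elevation) bin.
        def ses_node(x, y, z, n_az=36, n_el=18):
            az = math.atan2(y, x) % (2 * math.pi)
            el = math.atan2(z, math.hypot(x, y)) + math.pi / 2   # range 0..pi
            i = min(int(az / (2 * math.pi) * n_az), n_az - 1)
            j = min(int(el / math.pi * n_el), n_el - 1)
            return i, j    # index of the node that stores this sensory event

        print(ses_node(1.0, 1.0, 0.2))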

  11. Virtual modeling of robot-assisted manipulations in abdominal surgery.

    Science.gov (United States)

    Berelavichus, Stanislav V; Karmazanovsky, Grigory G; Shirokov, Vadim S; Kubyshkin, Valeriy A; Kriger, Andrey G; Kondratyev, Evgeny V; Zakharova, Olga P

    2012-06-27

    To determine the effectiveness of using multidetector computed tomography (MDCT) data in the preoperative planning of robot-assisted surgery. Fourteen patients indicated for surgery underwent MDCT on 64- and 256-slice scanners. Before the examination, a specially constructed navigation net was placed on the patient's anterior abdominal wall. Processing of MDCT data was performed on a Brilliance Workspace 4 (Philips). Virtual vectors imitating the robotic and assistant ports were placed on the anterior abdominal wall of the patient's 3D model, considering the individual anatomy of the patient and the technical capabilities of the robotic arms. The port sites were located by projection onto the radiopaque tags of the navigation net. There were no complications observed during surgery or in the post-operative period. We were able to reduce robotic arm interference during surgery. The surgical area was optimal for the robotic and assistant manipulators without any need for reinstallation of the trocars. This method allows modeling of the main steps of a robot-assisted intervention, optimizing operation of the manipulator and lowering the risk of injury to internal organs.

  12. A Study of Visual Descriptors for Outdoor Navigation Using Google Street View Images

    OpenAIRE

    Fernández, L.; Payá, L.; Reinoso, O.; Jiménez, L. M.; Ballesta, M.

    2016-01-01

    A comparative analysis between several methods to describe outdoor panoramic images is presented. The main objective consists in studying the performance of these methods in the localization process of a mobile robot (vehicle) in an outdoor environment, when a visual map that contains images acquired from different positions of the environment is available. With this aim, we make use of the database provided by Google Street View, which contains spherical panoramic images captured in urban en...

  13. Plenoptic Imager for Automated Surface Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  14. Evolved Navigation Theory and Horizontal Visual Illusions

    Science.gov (United States)

    Jackson, Russell E.; Willey, Chela R.

    2011-01-01

    Environmental perception is prerequisite to most vertebrate behavior and its modern investigation initiated the founding of experimental psychology. Navigation costs may affect environmental perception, such as overestimating distances while encumbered (Solomon, 1949). However, little is known about how this occurs in real-world navigation or how…

  15. Bilateral human-robot control for semi-autonomous UAV navigation

    NARCIS (Netherlands)

    Wopereis, Han Willem; Fumagalli, Matteo; Stramigioli, Stefano; Carloni, Raffaella

    2015-01-01

    This paper proposes a semi-autonomous bilateral control architecture for unmanned aerial vehicles. During autonomous navigation, a human operator is allowed to assist the autonomous controller of the vehicle by actively changing its navigation parameters to assist it in critical situations, such as

  16. Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments

    Science.gov (United States)

    2017-12-01

    … To pull information from the network, a node subscribes to a specific topic and is able to receive the messages that are published to that topic. … The total artificial potential field is characterized “as the sum of an attractive potential pulling the robot toward the goal … and a repulsive potential …”. Simulation parameters (MATLAB): laser_max = 20 (robot laser view horizon); goaldist = 0.5 (distance metric for reaching goal); goali = 1.
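
    The attractive-plus-repulsive field quoted in the excerpt has a standard closed form; a minimal sketch (the gains and the obstacle influence radius are assumptions, not values from the thesis):

        import numpy as np

        # Gradient of the total potential: attractive toward the goal,
        # repulsive inside radius d0 around each obstacle.
        def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
            grad = k_att * (pos - goal)
            for obs in obstacles:
                diff = pos - obs
                d = np.linalg.norm(diff)
                if 1e-9 < d < d0:
                    grad += k_rep * (1.0 / d0 - 1.0 / d) / d**3 * diff
            return grad

        pos = np.array([0.0, 0.0])
        step = -0.1 * potential_gradient(pos, goal=np.array([5.0, 0.0]),
                                         obstacles=[np.array([2.0, 0.5])])
        print(step)   # descend the field: toward the goal, deflected by the obstacle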

  17. Explorer-II: Wireless Self-Powered Visual and NDE Robotic Inspection System for Live Gas Distribution Mains

    Energy Technology Data Exchange (ETDEWEB)

    Carnegie Mellon University

    2008-09-30

    Carnegie Mellon University (CMU), under contract from the Department of Energy/National Energy Technology Laboratory (DoE/NETL) and with co-funding from the Northeast Gas Association (NGA), has completed the overall system design, field-trial and Magnetic Flux Leakage (MFL) sensor evaluation program for the next-generation Explorer-II (X-II) live gas main Non-destructive Evaluation (NDE) and visual inspection robot platform. The design is based on the Explorer-I prototype, which was built and field-tested under a prior (also DoE- and NGA-co-funded) program and served as the validation that self-powered robots under wireless control could access and navigate live natural gas distribution mains. The X-II system design (approximately 8 ft and 66 lbs) was heavily based on the X-I design, yet was substantially expanded to allow the addition of NDE sensor systems (while retaining its visual inspection capability), making it a modular system, and extending its ability to operate at pressures up to 750 psig (high-pressure and unpiggable steel-pipe distribution mains). A new electronics architecture and on-board software kernel were added to further improve system performance. A locating-sonde system was integrated to allow for absolute position referencing during inspection (coupled with external differential GPS) and emergency locating. The power system was upgraded to lithium-based battery cells for an increase in mission time. The resulting robot-train system is documented with CAD renderings of the individual modules. The system architecture now relies on a dual set of end camera modules to house the 32-bit processors (Single-Board Computer or SBC) as well as the imaging, wireless (off-board) and CAN-based (on-board) communication hardware and software systems (as well as the sonde coil and electronics). The two drive modules are still responsible for bracing (and centering) and for driving the robot train in push/pull fashion into and through the pipes and obstacles. The steering modules

  18. Spatial models for context-aware indoor navigation systems: A survey

    Directory of Open Access Journals (Sweden)

    Imad Afyouni

    2012-06-01

    Full Text Available This paper surveys indoor spatial models developed for research fields ranging from mobile robot mapping, to indoor location-based services (LBS, and most recently to context-aware navigation services applied to indoor environments. Over the past few years, several studies have evaluated the potential of spatial models for robot navigation and ubiquitous computing. In this paper we take a slightly different perspective, considering not only the underlying properties of those spatial models, but also to which degree the notion of context can be taken into account when delivering services in indoor environments. Some preliminary recommendations for the development of indoor spatial models are introduced from a context-aware perspective. A taxonomy of models is then presented and assessed with the aim of providing a flexible spatial data model for navigation purposes, and by taking into account the context dimensions.

  19. Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation

    National Research Council Canada - National Science Library

    Theocharous, Georgios; Mahadevan, Sridhar; Kaelbling, Leslie P

    2005-01-01

    Partially observable Markov decision processes (POMDPs) are a well studied paradigm for programming autonomous robots, where the robot sequentially chooses actions to achieve long term goals efficiently...
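
    The heart of any POMDP navigator, including the hierarchical variants studied here, is the belief update b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s). A toy sketch with placeholder matrices:

        import numpy as np

        # One belief update for a fixed action: predict through T, weight by O.
        def belief_update(b, T, O, obs):
            b_pred = T.T @ b             # sum_s T[s, s'] * b[s]
            b_new = O[:, obs] * b_pred   # observation likelihood per new state
            return b_new / b_new.sum()

        T = np.array([[0.8, 0.2], [0.1, 0.9]])   # T[s, s'] (toy values)
        O = np.array([[0.9, 0.1], [0.3, 0.7]])   # O[s', o] (toy values)
        print(belief_update(np.array([0.5, 0.5]), T, O, obs=0))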

  20. ROBOTICS AND VIRTUAL REALITY FOR CULTURAL HERITAGE DIGITIZATION AND FRUITION

    Directory of Open Access Journals (Sweden)

    D. Calisi

    2017-05-01

    Full Text Available In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we will demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists and roboticists, and our cloud-based archaeological information system (ARIS). Finally we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).

  1. State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Xue-Bo Jin

    2018-03-01

    Full Text Available Mobility is a significant robotic task. It is the most important function when robotics is applied to domains such as autonomous cars, home service robots, and autonomous underwater vehicles. Despite extensive research on this topic, robots still suffer from difficulties when moving in complex environments, especially in practical applications. Therefore, the ability to have enough intelligence while moving is a key issue for the success of robots. Researchers have proposed a variety of methods and algorithms, including navigation and tracking. To help readers swiftly understand the recent advances in methodology and algorithms for robot movement, we present this survey, which provides a detailed review of the existing methods of navigation and tracking. In particular, this survey features a relation-based architecture that enables readers to easily grasp the key points of mobile intelligence. We first outline the key problems in robot systems and point out the relationship among robotics, navigation, and tracking. We then illustrate navigation using different sensors and fusion methods, and detail the state estimation and tracking models for target maneuvering. Finally, we address several issues of deep learning, as well as the mobile intelligence of robots, as suggested future research topics. The contributions of this survey are threefold. First, we review the navigation literature according to the sensors applied and the fusion methods used. Second, we detail the models for target maneuvering and the existing estimation-based tracking methods, such as the Kalman filter and its successively developed forms, according to their model-construction mechanisms: linear, nonlinear, and non-Gaussian white noise. Third, we illustrate the artificial intelligence approach, especially deep learning methods, and discuss its combination with the estimation method.

  2. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertain camera orientation. The key feature of the analysis is a strict Lyapunov function that allows asymptotic stability to be concluded without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic proportional-derivative control algorithm. Experiments on a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  3. Technological advances in robotic-assisted laparoscopic surgery.

    Science.gov (United States)

    Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K

    2009-05-01

    In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.

  4. Robot for Investigations and Assessments of Nuclear Areas

    Energy Technology Data Exchange (ETDEWEB)

    Kanaan, Daniel; Dogny, Stephane [AREVA D and S/DT, 30206 Bagnols sur Ceze (France)

    2015-07-01

    RIANA is a remote-controlled robot dedicated to Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at one's disposal a proven robot, tested in hot cells: a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently, in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot is able to generate and update a 2D map of its surroundings as it explores, and its position is given accurately on the map. Furthermore, the robot is able to autonomously calculate, define and follow a trajectory between two points, taking its environment and obstacles into account, and can be configured to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, overlays on the generated map, ...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localization and 2D mapping, characterization and nuclear measurements of floors and walls, non-destructive testing, and collection of solid and liquid samples. The benefits of using RIANA are: reducing personnel exposure by limiting manual intervention time; minimizing the time and reducing the cost of investigation operations; and providing critical inputs to set up and optimize cleanup and dismantling operations. (authors)

  5. Robot for Investigations and Assessments of Nuclear Areas

    International Nuclear Information System (INIS)

    Kanaan, Daniel; Dogny, Stephane

    2015-01-01

    RIANA is a remote-controlled robot dedicated to Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at one's disposal a proven robot, tested in hot cells: a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently, in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot is able to generate and update a 2D map of its surroundings as it explores, and its position is given accurately on the map. Furthermore, the robot is able to autonomously calculate, define and follow a trajectory between two points, taking its environment and obstacles into account, and can be configured to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, overlays on the generated map, ...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localization and 2D mapping, characterization and nuclear measurements of floors and walls, non-destructive testing, and collection of solid and liquid samples. The benefits of using RIANA are: reducing personnel exposure by limiting manual intervention time; minimizing the time and reducing the cost of investigation operations; and providing critical inputs to set up and optimize cleanup and dismantling operations. (authors)

  6. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    Science.gov (United States)

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, providing an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.

  7. Open core control software for surgical robots.

    Science.gov (United States)

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Nowadays, patients and doctors in the operating room are surrounded by many medical devices resulting from recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaborations between devices such as navigation systems and medical imaging devices are becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). On the other hand, several surgical robots have been commercialized and are becoming common, yet these surgical robots are not currently open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. At the same time, most academic control software for surgical robots is "home-made" in the respective research institutions and not open to the public. Therefore, open-source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which plays the role of communicating with external medical devices. At the same time, it is essential to maintain stable operation within the asynchronous data transactions over the network. In the Open Core Control software, several
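
    The hardware-abstraction idea described above can be sketched as an abstract interface with thin per-robot subclasses; the class and method names below are illustrative, not the actual Open Core Control API:

        from abc import ABC, abstractmethod

        class ActuatorInterface(ABC):
            @abstractmethod
            def set_joint_positions(self, q: list) -> None: ...
            @abstractmethod
            def read_joint_positions(self) -> list: ...

        class SimulatedArm(ActuatorInterface):
            def __init__(self, dof: int = 6):
                self._q = [0.0] * dof
            def set_joint_positions(self, q):
                self._q = list(q)          # a real driver would command hardware here
            def read_joint_positions(self):
                return list(self._q)

        arm = SimulatedArm()               # controller code sees only ActuatorInterface
        arm.set_joint_positions([0.1, 0.0, 0.2, 0.0, 0.0, 0.0])
        print(arm.read_joint_positions())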

  8. Passive mapping and intermittent exploration for mobile robots

    Science.gov (United States)

    Engleson, Sean P.

    1994-01-01

    An adaptive state space architecture is combined with a diktiometric representation to provide the framework for designing a robot mapping system with flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs against indexed waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.

  9. Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition

    Directory of Open Access Journals (Sweden)

    Chunfu Wu

    2015-01-01

    Full Text Available For a monocular-camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's extrinsic position parameters are unknown. Firstly, the particular properties of the homography induced by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, together with a single feature point from the desired view, are used to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient but also highly precise. Meanwhile, the designed control law enables mobile robot position and orientation regulation despite the lack of depth information and of the camera's extrinsic position parameters.
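
    To see why planar (2-DOF) motion admits a fast decomposition: with rotation only about the vertical camera axis and translation in the ground plane, the Euclidean homography H = R + t n^T / d has so few free entries that the yaw angle and scaled translation fall out in closed form. The sketch below assumes the camera y-axis is vertical and the scene-plane normal is n = (0, 0, 1); it illustrates the idea rather than reproducing the paper's exact derivation:

```python
import numpy as np

def decompose_planar_homography(H):
    """Closed-form decomposition of H = R + t n^T / d for a robot that
    rotates about the camera y-axis and translates in the x-z plane,
    with scene-plane normal n = (0, 0, 1)."""
    H = H / H[1, 1]                        # fix the scale: H[1, 1] must be 1
    theta = np.arctan2(-H[2, 0], H[0, 0])  # yaw from the rotation entries
    tx_over_d = H[0, 2] - np.sin(theta)    # translation scaled by plane depth d
    tz_over_d = H[2, 2] - np.cos(theta)
    return theta, tx_over_d, tz_over_d
```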

  10. Prototype of a Wheeled Fire-Extinguishing Robot Using a Wall-Follower Navigation Technique

    OpenAIRE

    Safrianti, Ery; Amri, Rahyul; Budiman, Septian

    2012-01-01

    The fire robot serves to detect and extinguish fires. The robot is controlled automatically by an ATmega8535 microcontroller. It carries several sensors: five Parallax PING))) ultrasonic sensors for navigation, a UVTron flame detector with its driver, and an L298 DC motor driver with two DC servo motors. The robot was developed from a previously studied prototype with the addition, on the hardware side, of sound activation and two sets of line detectors. The robot wi...
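
    Wall-follower navigation of the kind named in the title reduces to a control loop on the side-range reading. A minimal proportional sketch; the distances, gains and thresholds are hypothetical, not values from the paper:

```python
def wall_follow_step(left_dist, front_dist, target=0.20, k=2.0, cruise=0.5):
    """One step of a left-wall follower on two ultrasonic readings (m).
    Returns (left, right) wheel commands in [-1, 1]."""
    if front_dist < 0.25:                 # wall ahead: pivot right, away from it
        return 0.5, -0.5
    error = left_dist - target            # positive: drifted away from the wall
    turn = max(-0.5, min(0.5, k * error))
    return cruise - turn, cruise + turn   # steer back toward the target gap
```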

  11. On detection and automatic tracking of butt weld line in thin wall pipe welding by a mobile robot with visual sensor

    International Nuclear Information System (INIS)

    Suga, Yasuo; Ishii, Hideaki; Muto, Akifumi

    1992-01-01

    An automatic pipe-welding mobile robot system with a visual sensor was constructed. The robot can move along a pipe and detect the weld line to be welded with its visual sensor. Moreover, to enable automatic welding, the welding torch can track the butt weld line of the pipes at a constant speed by rotating the robot head. The main results obtained are summarized as follows: 1) Using proper lighting fixed in front of the CCD camera, the butt weld line of thin-wall pipes can be recognized stably; in this case, the root gap should be approximately 0.5 mm. 2) To detect the weld line stably while moving along the pipe, the brightness distribution measured by the CCD camera should be smoothed and differentiated, and the weld line is then judged from the maximum and minimum values of the derivative. 3) With the basic robot system and a visual sensor controlled by a personal computer, detection and in-process automatic tracking of a weld line are possible. The average tracking error was approximately 0.2 mm, the maximum error 0.5 mm, and the welding speed was held constant to within about 0.1 cm/min. (author)
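
    Step 2 of the procedure, smoothing the brightness profile, differentiating it, and judging the weld line from the extrema of the derivative, translates directly into a few lines of signal processing. A sketch; the window size and the midpoint heuristic are assumptions:

```python
import numpy as np

def find_weld_line(brightness, window=9):
    """Locate the weld groove in a 1-D brightness profile across the pipe:
    smooth, differentiate, then take the extrema of the derivative."""
    kernel = np.ones(window) / window
    smooth = np.convolve(brightness, kernel, mode="same")
    d = np.gradient(smooth)
    # the dark groove lies between the steepest fall and rise in brightness
    return (np.argmin(d) + np.argmax(d)) // 2
```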

  12. Automatic Operation For A Robot Lawn Mower

    Science.gov (United States)

    Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.

    1987-02-01

    A domestic mobile robot, a lawn mower that performs in an automatic operation mode, has been built at the Center of Robotics Research, University of Cincinnati. The robot lawn mower completes its work automatically using a region-filling operation, a new kind of path planning for mobile robots. Several region-filling path-planning strategies have been developed for partly known or unknown environments. An advanced omnidirectional navigation system and a multisensor-based control system are also used in the automatic operation. Research on the robot lawn mower, especially on region-filling path planning, is significant for industrial and agricultural applications.
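
    Region filling means planning a path that sweeps every point of the work area. The classic back-and-forth (boustrophedon) pattern below is one such strategy for an obstacle-free rectangle; the paper's strategies additionally handle partly known or unknown environments:

```python
def boustrophedon_path(x_min, x_max, y_min, y_max, width):
    """Back-and-forth waypoints covering a rectangle, with passes
    spaced one tool width apart."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        waypoints += row if left_to_right else row[::-1]
        y += width
        left_to_right = not left_to_right
    return waypoints
```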

  13. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    The main activity this year is to develop both remote handling systems and telepresence techniques that can alleviate the burden on people working in extremely hazardous areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit were developed. An autostereo TV system, which falls under the category of next-generation stereo imaging technology, has also been studied. The performance of the KAERI-PSM system for remote handling tasks was evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The results show that the KAERI-PSM system is superior to the other stereo imaging systems in terms of remote operation speed and accuracy. An automatic recognition algorithm for instrument panels was studied, and a passive visual target tracking system was developed. A 5-DOF camera serving unit, designed to function like the human eye, was designed and fabricated. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis was developed, and remote temperature monitoring using fiber optics was investigated. A two-dimensional radioactivity sensor head for a radiation profile monitoring system was also designed. In the intelligent robotics part, a mobile robot was fabricated and its autonomous navigation using fuzzy control logic was studied. The remote handling and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system, a reactor inspection unit, underwater nuclear pellet inspection, and pipe abnormality inspection. These techniques will be applied in general industry, medical science, and the military as well as in nuclear facilities. 203 figs, 12 tabs, 72 refs. (Author).

  14. ARIES: A mobile robot inspector

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1995-01-01

    ARIES (Autonomous Robotic Inspection Experimental System) is a mobile robot inspection system being developed for the Department of Energy (DOE) to survey and inspect drums containing mixed and low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an autonomous inspection operation typically carried out by a human operator. It will make real-time decisions about the condition of the drums, maintain a database of pertinent information about each drum, and generate reports.

  15. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationships among the variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide information appropriate for navigation guidance to a teleoperator of a robot system and contribute to the closure of currently identified HRP gaps: (1) Overlays -- use of overlays in teleoperation to augment the information available on the video feed; (2) Camera views -- type and arrangement of camera views for better task performance and awareness of surroundings; (3) Command modalities -- development of gesture and voice command vocabularies.

  16. Tema 2: Open Roberta - A Web Based Approach to Visually Program Real Educational Robots

    Directory of Open Access Journals (Sweden)

    Markus Ketterl

    2016-01-01

    Full Text Available The aim of the Open Roberta initiative is to support visual online programming of educational robots. The goal is to overcome technical and professional barriers for teachers and students alike, at home or in the classroom. The free-to-use, cloud-based Open Roberta Lab consists of graphical programming tools for the browser that enable beginners to start coding seamlessly, without long-winded system installations, setups, or additional technology getting in the way. Open Roberta is a project within the Fraunhofer initiative "Roberta - Learning with Robots". A further aspect of the paper is the introduction of the NEPO® meta programming language as a core concept for coupling real educational robot systems.

  17. Tema 2: Open Roberta - A Web Based Approach to Visually Program Real Educational Robots

    Directory of Open Access Journals (Sweden)

    Markus Ketterl

    2015-12-01

    Full Text Available The aim of the Open Roberta initiative is to support visual online programming of educational robots. The goal is to overcome technical and professional barriers for teachers and students alike, at home or in the classroom. The free-to-use, cloud-based Open Roberta Lab consists of graphical programming tools for the browser that enable beginners to start coding seamlessly, without long-winded system installations, setups, or additional technology getting in the way. Open Roberta is a project within the Fraunhofer initiative "Roberta - Learning with Robots". A further aspect of the paper is the introduction of the NEPO® meta programming language as a core concept for coupling real educational robot systems.

  18. Image-based particle filtering for navigation in a semi-structured agricultural environment

    NARCIS (Netherlands)

    Hiremath, S.; van Evert, F.K.; ter Braak, C.J.F.; Stein, A.; van der Heijden, G.

    2014-01-01

    Autonomous navigation of field robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. The drawback of existing systems is their lack of robustness to these uncertainties. In this study we propose a vision-based navigation method to address these
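
    A particle filter for robot localization follows a predict-weight-resample cycle; in this paper the measurement step compares camera images, which the user-supplied likelihood function below would encapsulate. A generic sketch (the motion model, noise level, and resampling threshold are assumptions, since the abstract is truncated before the details):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement, meas_model,
                         motion_noise=0.05):
    """One predict-weight-resample cycle for robot localization.
    particles: (N, D) states; meas_model(p, z) returns a likelihood."""
    # predict: apply the control input with additive noise
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # weight: score each particle against the camera measurement
    weights = weights * np.array([meas_model(p, measurement) for p in particles])
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```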

  19. Intelligent Robot-assisted Humanitarian Search and Rescue System

    Directory of Open Access Journals (Sweden)

    Henry Y. K. Lau

    2009-11-01

    Full Text Available The unprecedented scale and number of natural and man-made disasters in the past decade have urged international emergency search and rescue communities to seek novel technologies to enhance operational efficiency. Tele-operated search and rescue robots that can navigate deep into rubble to search for victims and transfer critical field data back to the control console have gained much interest among emergency response institutions. In response to this need, a low-cost autonomous mini robot equipped with a thermal sensor, accelerometer, sonar, pin-hole camera, microphone, ultra-bright LED, and wireless communication module was developed to study the control of a group of decentralized mini search and rescue robots. The robot can navigate autonomously between voids to look for living body heat and can send back audio and video information to allow the operator to determine whether the found object is a living human. This paper introduces the design and control of a low-cost robotic search and rescue system based on an immuno control framework developed for controlling decentralized systems. The design and development of the physical prototype and the immunity-based control system are described in this paper.

  20. Intelligent Robot-Assisted Humanitarian Search and Rescue System

    Directory of Open Access Journals (Sweden)

    Albert W. Y. Ko

    2009-06-01

    Full Text Available The unprecedented scale and number of natural and man-made disasters in the past decade have urged international emergency search and rescue communities to seek novel technologies to enhance operational efficiency. Tele-operated search and rescue robots that can navigate deep into rubble to search for victims and transfer critical field data back to the control console have gained much interest among emergency response institutions. In response to this need, a low-cost autonomous mini robot equipped with a thermal sensor, accelerometer, sonar, pin-hole camera, microphone, ultra-bright LED, and wireless communication module was developed to study the control of a group of decentralized mini search and rescue robots. The robot can navigate autonomously between voids to look for living body heat and can send back audio and video information to allow the operator to determine whether the found object is a living human. This paper introduces the design and control of a low-cost robotic search and rescue system based on an immuno control framework developed for controlling decentralized systems. The design and development of the physical prototype and the immunity-based control system are described in this paper.

  1. Mobile-robot navigation with complete coverage of unstructured environments

    OpenAIRE

    García Armada, Elena; González de Santos, Pablo

    2004-01-01

    There are some mobile-robot applications that require the complete coverage of an unstructured environment. Examples are humanitarian de-mining and floor-cleaning tasks. A complete-coverage algorithm is then used: a path-planning technique that allows the robot to pass over all points in the environment while avoiding unknown obstacles. Different coverage algorithms exist, but they fail in unstructured environments. This paper details a complete-coverage algorithm for unstructured environm...

  2. Current status of endovascular catheter robotics.

    Science.gov (United States)

    Lumsden, Alan B; Bismuth, Jean

    2018-06-01

    In this review, we detail the evolution of endovascular therapy as the basis for the development of catheter-based robotics. In parallel, we outline the evolution of robotics in the surgical space and how the convergence of technology and the entrepreneurs who push this evolution has led to the development of endovascular robots. The current state of the art and future directions and potential are summarized for the reader. Information in this review has been drawn primarily from our personal clinical and preclinical experience in the use of catheter robotics, coupled with ground-breaking work reported from a few other major centers that have embraced the technology's capabilities and opportunities. Several case studies demonstrating the unique capabilities of a precisely controlled catheter are presented. Most of the preclinical work was performed in the advanced imaging and navigation laboratory, a unique facility in which the interface of advanced imaging techniques and robotic guidance is being explored. Although this procedure employs a very high-tech approach to navigation inside the endovascular space, we have conveyed the kinds of opportunities that this technology affords to integrate 3D imaging and 3D control. Further, we present the opportunity for semi-autonomous motion of these devices to a target. For the interventionist, enhanced precision can be achieved in a nearly radiation-free environment.

  3. Robots, systems, and methods for hazard evaluation and visualization

    Science.gov (United States)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.; Hartley, Robert S.; Gertman, David I.; Kinoshita, Robert A.; Whetten, Jonathan

    2013-01-15

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.

  4. Navigation Method for Autonomous Robots in a Dynamic Indoor Environment

    Czech Academy of Sciences Publication Activity Database

    Věchet, Stanislav; Chen, K.-S.; Krejsa, Jiří

    2013-01-01

    Vol. 3, No. 4 (2013), pp. 273-277 ISSN 2223-9766 Institutional support: RVO:61388998 Keywords: particle filters * autonomous mobile robots * mixed potential fields Subject RIV: JD - Computer Applications, Robotics http://www.ausmt.org/index.php/AUSMT/article/view/214/239

  5. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.

  6. Integration of Kinect and Low-Cost Gnss for Outdoor Navigation

    Science.gov (United States)

    Pagliaria, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.

    2016-06-01

    Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. This system is endowed with a depth camera as well as a visual RGB camera, at a cost of about 200. The characteristics and potential of the Kinect sensor have been widely studied for indoor applications. The second generation of the sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating from an indoor to an outdoor environment (and vice versa) is very demanding because sensors that work properly in one environment are typically unsuitable in the other. In this sense the Kinect could represent an interesting device for bridging the navigation solution between outdoor and indoor settings. In this work the accuracy and the field of application of the new generation of the Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of the emitted ray on different materials. Moreover, integration with a low-cost GNSS receiver has been studied, with the aim of taking advantage of GNSS positioning when satellite visibility is good enough. A kinematic test was performed outdoors using a Kinect sensor and a GNSS receiver, and it is presented here.

  7. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities--techniques and applications.

    Science.gov (United States)

    Azizian, Mahdi; Najmaei, Nima; Khoshnam, Mahta; Patel, Rajni

    2015-03-01

    Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. The focus of this survey is on therapeutic and diagnostic medical applications where tomographic imaging is used in visual servoing. To this end, a comprehensive search of the electronic databases was completed for the period 2000-2013. Existing techniques and products are categorized and studied, based on the imaging modality and their medical applications. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. The main challenges in using visual servoing based on tomographic images have been identified. 'Supervised automation of medical robotics' is found to be a major trend in this field and ultrasound is the most commonly used tomographic modality for visual servoing. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    Science.gov (United States)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps that represent how strongly external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper we present a visual attention model in which three modalities are considered, namely audio information, visual information, and the robot's motor status, whereas previous research has not considered all of them. Firstly, we introduce a 2-D density map whose value denotes how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that robots can acquire the visual information related to their behaviors by using an attention model that considers motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
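
    The integration step can be pictured as a leaky integrate-and-fire unit at each map location, accumulating audio and visual drive gated by the attention density until it spikes. A schematic reading of the model; the parameters and the multiplicative gating are assumptions, not the authors' equations:

```python
def integrate_and_fire(audio_drive, visual_drive, attention_density,
                       threshold=1.0, leak=0.1, dt=0.05):
    """Leaky integrate-and-fire unit at one map location; a spike marks
    the location as the current attention target."""
    v, spikes = 0.0, []
    for a, s in zip(audio_drive, visual_drive):
        v += dt * (-leak * v + attention_density * (a + s))
        if v >= threshold:
            spikes.append(True)
            v = 0.0                      # fire and reset
        else:
            spikes.append(False)
    return spikes

# with strong bimodal drive the unit fires periodically
print(sum(integrate_and_fire([1.0] * 50, [1.0] * 50, attention_density=1.0)))
```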

  9. Autonomous Rule Based Robot Navigation In Orchards

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel

    2010-01-01

    Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows, where GPS coverage is poor. This paper suggests localization ...

  10. 14 CFR 121.349 - Communication and navigation equipment for operations under VFR over routes not navigated by...

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space ... § 121.349 Communication and navigation equipment for operations under VFR over routes not navigated by... receiver providing visual and aural signals; and (iii) One ILS receiver; and (3) Any RNAV system used to...

  11. Review of surgical robotics user interface: what is the best way to control robotic surgery?

    Science.gov (United States)

    Simorov, Anton; Otte, R Stephen; Kopietz, Courtni M; Oleynikov, Dmitry

    2012-08-01

    As surgical robots begin to occupy a larger place in operating rooms around the world, continued innovation is necessary to improve our outcomes. A comprehensive review of current surgical robotic user interfaces was performed to describe the modern surgical platforms, identify the benefits, and address the issues of feedback and limitations of visualization. Most robots currently used in surgery employ a master/slave relationship, with the surgeon seated at a work-console, manipulating the master system and visualizing the operation on a video screen. Although enormous strides have been made to advance current technology to the point of clinical use, limitations still exist. A lack of haptic feedback to the surgeon and the inability of the surgeon to be stationed at the operating table are the most notable examples. The future of robotic surgery sees a marked increase in the visualization technologies used in the operating room, as well as in the robots' abilities to convey haptic feedback to the surgeon. This will allow unparalleled sensation for the surgeon and almost eliminate inadvertent tissue contact and injury. A novel design for a user interface will allow the surgeon to have access to the patient bedside, remaining sterile throughout the procedure, employ a head-mounted three-dimensional visualization system, and allow the most intuitive master manipulation of the slave robot to date.

  12. Vibrotactile in-vehicle navigation system

    NARCIS (Netherlands)

    Erp, J.B.F. van; Veen, H.J. van

    2004-01-01

    A vibrotactile display, consisting of eight vibrating elements or tactors mounted in a driver's seat, was tested in a driving simulator. Participants drove with visual, tactile and multimodal navigation displays through a built-up area. Workload and the reaction time to navigation messages were

  13. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  14. Redundant Sensors for Mobile Robot Navigation

    Science.gov (United States)

    1985-09-01

    represent a probability that the area is empty, while positive numbers mean it's probably occupied. Zero represents the unknown. The basic idea is that...room to give it absolute positioning information. This works by using two infrared emitters and detectors on the robot. Measurements of angles are made...meters (T in Kelvin) 273 sec Distances returned when assuming 80 degrees Fahrenheit, but where actual temperature is 60 degrees, will be seven inches
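
    The garbled fragment about Kelvin and 273 is evidently the sonar ranging correction: the speed of sound in air scales with the square root of absolute temperature, so a wrong assumed temperature biases every distance reading. A worked sketch of that arithmetic; 331.4 m/s is the standard zero-Celsius value, and the report's exact formula is not recoverable from the snippet:

```python
import math

def sound_speed(temp_f):
    """Speed of sound in air (m/s): c = 331.4 * sqrt(T / 273), T in Kelvin."""
    kelvin = (temp_f - 32.0) * 5.0 / 9.0 + 273.15
    return 331.4 * math.sqrt(kelvin / 273.15)

def range_error(true_range_m, assumed_f=80.0, actual_f=60.0):
    """Ranging bias when the sonar assumes one air temperature but
    operates in another."""
    return true_range_m * (1.0 - sound_speed(actual_f) / sound_speed(assumed_f))

# at roughly 9.5 m, the 80 F vs 60 F mismatch gives about 0.18 m,
# i.e. the "seven inches" quoted in the snippet
print(range_error(9.5))
```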

  15. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    Full Text Available The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method is used to control the original system directly, without any state or input transformation, which is effective in avoiding singularity. Under the proposed control law, it is rigorously proved that all the states of the closed-loop system can be stabilized to a prescribed, arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to solve the practical stabilization problem of a kind of mobile robot with uncertain parameters (and angle measurement disturbance) which has appeared in the literature, such as Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  16. Forward Models Applied in Visual Servoing for a Reaching Task in the iCub Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2009-01-01

    Full Text Available This paper details the application of a forward model to improve a reaching task. The reaching task must be accomplished by a humanoid robot with 53 degrees of freedom (d.o.f.) and a stereo-vision system. We have explored via simulations a new way of constructing and utilizing a forward model that encodes eye-hand relationships. We constructed a forward model using the data obtained from only a single reaching attempt. ANFIS neural networks are used to construct the forward model, which is updated online with new information from each reaching attempt. Using the obtained forward model, an initial image Jacobian is estimated and used with a visual servoing controller. Simulation results demonstrate that errors are lower when the initial image Jacobian is derived from the forward model. This paper is one of the few attempts at applying visual servoing in a complete humanoid robot.
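
    Once an image Jacobian J, mapping joint velocities to image-feature velocities, is available, the classical image-based visual servoing law drives the feature error to zero with joint velocities proportional to the pseudo-inverse of J. A minimal sketch; the 2x2 Jacobian and the gain are hypothetical, and in the paper the initial Jacobian comes from the learned forward model rather than being assumed known:

```python
import numpy as np

def visual_servo_step(jacobian, feature_error, gain=0.5):
    """Joint-velocity command from the pseudo-inverse of the image Jacobian."""
    return -gain * np.linalg.pinv(jacobian) @ feature_error

# hypothetical 2-joint arm and 2-D image-feature error (pixels)
dq = visual_servo_step(np.array([[1.0, 0.2],
                                 [0.1, 0.8]]),
                       np.array([5.0, -3.0]))
print(dq)
```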

  17. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    Science.gov (United States)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results are reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  18. HexaMob—A Hybrid Modular Robotic Design for Implementing Biomimetic Structures

    Directory of Open Access Journals (Sweden)

    Sasanka Sankhar Reddy CH.

    2017-10-01

    Full Text Available Modular robots are capable of forming primitive shapes such as lattice and chain structures, with the additional flexibility of distributed sensing. Biomimetic structures built from such modular units provide ease of replacement and reconfiguration in coordinated structures, transportation, etc. in real-life scenarios. Though research on employing modular robotic units to form biological organisms is at a nascent stage, modular robotic units are already capable of forming sophisticated structures. The modular robotic designs proposed so far vary significantly in external structure, sensor-actuator mechanisms, interfaces for docking and undocking, techniques for providing mobility, coordinated structures, locomotion, etc., and each design has attempted to address various challenges in the domain of modular robotics with different strategies. This paper presents a novel modular wheeled robotic design, HexaMob, providing four degrees of freedom (two for mobility and two for structural reconfiguration) on a single module with minimal usage of sensor-actuator assemblies. The crucial features of modular robotics, such as back-driving restriction, docking, and navigation, are addressed in the process of HexaMob design. The proposed docking mechanism is enabled using a vision sensor, enhancing the capabilities of docking as well as navigation in coordinated structures such as humanoid robots.

  19. Robot-assisted general surgery.

    Science.gov (United States)

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  20. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal.

    Science.gov (United States)

    Gnadt, William; Grossberg, Stephen

    2008-06-01

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory

  1. Control of free-flying space robot manipulator systems

    Science.gov (United States)

    Cannon, Robert H., Jr.

    1990-01-01

    New control techniques for self-contained, autonomous free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using laboratory models of satellite robots and a flexible manipulator. The second-generation space robot models use air cushion vehicle (ACV) technology to simulate, in 2-D, the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free Floating Robot, Cooperative Manipulation from a Free Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.

  2. Fuzzy Logic Supervised Teleoperation Control for Mobile Robot

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A supervised teleoperation control scheme using fuzzy logic is presented for a mobile robot. The teleoperation control system includes a joystick-based user interaction mechanism, a high-level instruction set, and fuzzy logic behaviors integrated into a supervised-autonomy teleoperation control system for indoor navigation. The behaviors include left wall following, right wall following, turning left, turning right, left obstacle avoidance, right obstacle avoidance, and corridor following, based on ultrasonic range finder data. The robot compares the instructive high-level command from the operator with the environment and relays a suggestive signal back to the operator in case of a mismatch. This strategy relieves the operator's cognitive burden and handles unforeseen situations and environmental uncertainties autonomously. The effectiveness of the proposed method for navigation in an unstructured environment is verified by experiments conducted on a mobile robot equipped with only ultrasonic range finders for environment sensing.

  3. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient towards a moving audio-visual target. The circuit continuously learns the best...

  4. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method for controlling an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and surveys current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network and the generation and filtering of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot that solves the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments and evaluation criteria were created. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.

  5. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun

    1995-07-01

    The main activity this year is to develop both remote handling systems and telepresence techniques that can alleviate the burden on people working in extremely hazardous areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit are developed. An autostereo TV system, which falls under the category of next-generation stereo imaging technology, has also been studied. The performance of the KAERI-PSM system for remote handling tasks is evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The result shows that the KAERI-PSM system is superior to the other stereo imaging systems in terms of remote operation speed and accuracy. An automatic recognition algorithm for instrument panels is studied, and a passive visual target tracking system is developed. A 5-DOF camera serving unit, designed to function like the human eye, has been designed and fabricated. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis is developed, and remote temperature monitoring using fiber optics is investigated. A two-dimensional radioactivity sensor head for a radiation profile monitoring system is also designed. In the intelligent robotics part, a mobile robot is fabricated and its autonomous navigation using fuzzy control logic is studied. The remote handling and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system, a reactor inspection unit, underwater nuclear pellet inspection, and pipe abnormality inspection. These developed techniques will be applied in general industry, medical science, and the military as well as in nuclear facilities. They are expected to expand the working area of humans, raise the efficiency of remote tasks to the highest degree, and enhance the industrial

  6. Navigation through unknown and dynamic open spaces using topological notions

    Science.gov (United States)

    Miguel-Tomé, Sergio

    2018-04-01

    Until now, most algorithms used for navigation have had the purpose of directing a system towards one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans develop their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by their sensors, and to interpret and express spatial relations. Furthermore, when a person is asked to perform a task in an environment, the task is communicated in terms of a category of goals, so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations, and how to accomplish this without supervision. In this article, a new architecture addressing these two problems is presented, called the topological qualitative navigation architecture. In previous works, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) was developed to establish and identify spatial relations. However, that heuristic only allows for establishing one spatial relation with a specific object, whereas navigation requires a temporal sequence of goals with different objects. The new architecture attains continuous generation of goals and resolves them using HTQS, thereby achieving autonomous navigation in dynamic or unknown open environments.

  7. Orchard navigation using derivative free Kalman filtering

    DEFF Research Database (Denmark)

    Hansen, Søren; Bayramoglu, Enis; Andersen, Jens Christian

    2011-01-01

    This paper describes the use of derivative free filters for mobile robot localization and navigation in an orchard. The localization algorithm fuses odometry and gyro measurements with line features representing the surrounding fruit trees of the orchard. The line features are created on the basis of 2

  8. CSIR Centre for Mining Innovation and the mine safety platform robot

    CSIR Research Space (South Africa)

    Green, JJ

    2012-11-01

    Full Text Available The Council for Scientific and Industrial Research (CSIR) in South Africa is currently developing a robot for the inspection of the ceiling (hanging wall) in an underground gold mine. The robot autonomously navigates the 30 meter long by 3 meter...

  9. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.

    Science.gov (United States)

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2016-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.

  10. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  11. JACoW A dual arms robotic platform control for navigation, inspection and telemanipulation

    CERN Document Server

    Di Castro, Mario; Ferre, Manuel; Gilardoni, Simone; Losito, Roberto; Lunghi, Giacomo; Masi, Alessandro

    2018-01-01

    High intensity hadron colliders and fixed target experiments require an increasing amount of robotic tele-manipulation to prevent excessive exposure of maintenance personnel to the radioactive environment. Telemanipulation tasks are often required on old radioactive devices that were not conceived to be maintained and handled using standard industrial robotic solutions. Robotic platforms with a level of dexterity that often requires the use of two robotic arms with a minimum of six degrees of freedom are instead needed for these purposes. In this paper, the control of a novel robust robotic platform able to host and safely carry a dual robotic arm system is presented. The control of the arms is fully integrated with the vehicle control in order to guarantee simplicity to the operators during the realization of the robotic tasks. A novel high-level control architecture for the new robot is shown, as well as a novel low-level safety layer for anti-collision and recovery scenarios. Preliminary results of the system comm...

  12. University of Michigan workscope for 1991 DOE University program in robotics for advanced reactors

    International Nuclear Information System (INIS)

    Wehe, D.K.

    1990-01-01

    The University of Michigan (UM) is a member of a team of researchers, including the universities of Florida, Texas, and Tennessee, along with Oak Ridge National Laboratory, developing robotics for hazardous environments. The goal of this research is to develop intelligent and capable robots that can perform useful functions in the new generation of nuclear reactors currently under development. By augmenting human capabilities through remote robotics, increased safety, functionality, and reliability can be achieved. In accordance with the established lines of research responsibilities, our primary efforts during 1991 will continue to focus on the following areas: radiation imaging; mobile robot navigation; three-dimensional vision capabilities for navigation; and machine intelligence. This report discusses work that has been and will be done in these areas.

  13. Development of an amphibious robot for visual inspection of APR1400 Npp IRWST strainer

    International Nuclear Information System (INIS)

    Jang, You Hyun; Kim, Jong Seog

    2014-01-01

    An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainers in APR1400 in place of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 strainer-fin sub-assembly modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has six thrusters for submarine travel and four legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for exact walking on top of the IRWST strainer. The IRWST strainer has several top cross braces, which protrude from the top of the strainer to maintain its frame and can be obstacles to walking on it; a robot leg should therefore arrive at a position beside the top cross brace. For this reason, we used an image processing technique to find the top cross brace in the sole camera image: the image is processed in real time to detect the presence of the top cross brace using a cross-edge detection algorithm. A 5-DOF robot arm that has multiple camera modules for simultaneous inspection of both sides can penetrate narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC with camera angles and positions so that the images can be synthesized and merged. The synthesized images are then mapped onto a 3D CAD model of the IRWST strainer with the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into all of the gaps, and exact position control without an anchor under water is not easy. Therefore, we designed the multi-legged robot for the role of anchoring and positioning. Quadruped robot design of installing sole

  14. Development of an amphibious robot for visual inspection of APR1400 Npp IRWST strainer

    Energy Technology Data Exchange (ETDEWEB)

    Jang, You Hyun; Kim, Jong Seog [Korea Hydro Nuclear Power Central Research Institute, Daejeon (Korea, Republic of)

    2014-06-15

    An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainers in APR1400 in place of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 strainer-fin sub-assembly modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has six thrusters for submarine travel and four legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for exact walking on top of the IRWST strainer. The IRWST strainer has several top cross braces, which protrude from the top of the strainer to maintain its frame and can be obstacles to walking on it; a robot leg should therefore arrive at a position beside the top cross brace. For this reason, we used an image processing technique to find the top cross brace in the sole camera image: the image is processed in real time to detect the presence of the top cross brace using a cross-edge detection algorithm. A 5-DOF robot arm that has multiple camera modules for simultaneous inspection of both sides can penetrate narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC with camera angles and positions so that the images can be synthesized and merged. The synthesized images are then mapped onto a 3D CAD model of the IRWST strainer with the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into all of the gaps, and exact position control without an anchor under water is not easy. Therefore, we designed the multi-legged robot for the role of anchoring and positioning. Quadruped robot design of installing sole

  15. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    Science.gov (United States)

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking often requires longer processing times, while quicker responses make the tracking system more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six-degree-of-freedom (DOF) robot that performs a pick-and-place task. Two eye-to-hand cameras, with image-averaging filters, cooperate to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of Interest based on the Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
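
    The accuracy/speed trade-off is managed by shrinking the Hough search to a window around the previous detection. A sketch in that spirit using OpenCV's standard circular Hough transform; the window size and Hough parameters are illustrative, not the paper's CRCHT values:

```python
import cv2

def track_sphere(gray, last=None, roi_half=60):
    """Detect a spherical target with the Circular Hough Transform,
    searching only a window around the previous detection. gray is an
    8-bit grayscale frame; last is the previous (x, y) centre or None."""
    x0 = y0 = 0
    if last is not None:
        cx, cy = last
        x0, y0 = max(cx - roi_half, 0), max(cy - roi_half, 0)
        gray = gray[y0:y0 + 2 * roi_half, x0:x0 + 2 * roi_half]
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is None:
        return None                      # fall back to a full-frame search
    x, y, _r = circles[0][0]
    return int(x) + x0, int(y) + y0      # centre in full-frame coordinates
```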

  16. Forgetting Bad Behavior: Memory Management for Case-Based Navigation

    National Research Council Canada - National Science Library

    Kira, Zsolt; Arkin, Ronald C

    2006-01-01

    ...) system applied to autonomous robot navigation. This extends previous work that involved a CBR architecture that indexes cases by the spatio-temporal characteristics of the sensor data, and outputs or selects parameters of behaviors in a behavior...

  17. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able to coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment where humans potentially want to interact with it. The designed system consists...... system that uses a potential field to derive motion that respects the person's social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person...... wishes to interact, and that the system is capable of adapting to changing behaviours of the humans in the environment....

  18. SeSaMoNet 2.0: Improving a Navigation System for Visually Impaired People

    Science.gov (United States)

    Ceipidor, Ugo Biader; Medaglia, Carlo Maria; Sciarretta, Eliseo

    The authors present the improvements obtained during the work done for the latest installation of SeSaMoNet, a navigation system for blind people. First, the mobility issues of visually impaired people are described, together with strategies to address them. Then an overview of the system and of its main elements is given. Afterward, the reasons that led to a re-design are explained, and finally the main features of the latest revision of the system are presented and compared to the previous one.

  19. Justification of the technical requirements of a fully functional modular robot

    Directory of Open Access Journals (Sweden)

    Shlyakhov Nikita

    2017-01-01

    Full Text Available Modular robots are characterized by the limited built-in resources available for communication, connection and movement of modules when performing reconfiguration tasks with rigidly interconnected elements. In developing the technological fundamentals of designing modular robots with pairwise connection mechanisms, we analysed modern hardware and model algorithms typical of a fully functional robot, which provide independent locomotion, communication, navigation, and decentralized power and control. A survey of actuators, batteries, sensors, and communication means suitable for modular robotics is presented.

  20. Combining Hector SLAM and Artificial Potential Field for Autonomous Navigation Inside a Greenhouse

    Directory of Open Access Journals (Sweden)

    El Houssein Chouaib Harik

    2018-05-01

    Full Text Available The key factor for autonomous navigation is efficient perception of the surroundings, while being able to move safely from an initial to a final point. In this paper we deal with a wheeled mobile robot working in a GPS-denied environment typical of a greenhouse. The Hector Simultaneous Localization and Mapping (SLAM) approach is used to estimate the robot's pose using a LIght Detection And Ranging (LIDAR) sensor. Waypoint following and obstacle avoidance are ensured by means of a new artificial potential field (APF) controller presented in this paper. The combination of Hector SLAM and the APF controller allows the mobile robot to perform periodic tasks that require autonomous navigation between predefined waypoints. It also provides the mobile robot with robustness to the changing conditions that may occur inside the greenhouse, caused by the dynamics of plant development through the season. In this study, we show that the robot is safe to operate autonomously with humans present and that, in contrast to classical odometry methods, no calibration is needed for repositioning the robot over repetitive runs. We include both hardware and software descriptions, as well as simulation and experimental results.
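
    The artificial potential field idea admits a compact sketch. The following minimal attractive/repulsive force step assumes a holonomic point model with illustrative gains; it is not the controller published in the paper.

      # Minimal attractive/repulsive potential-field step; gains and the
      # obstacle interface are illustrative assumptions.
      import numpy as np

      def apf_velocity(pose, waypoint, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
          """pose: (x, y); obstacles: (N, 2) points in the world frame."""
          p = np.asarray(pose, dtype=float)
          f = k_att * (np.asarray(waypoint, dtype=float) - p)   # attraction
          for q in np.asarray(obstacles, dtype=float):
              d = np.linalg.norm(p - q)
              if 1e-6 < d < d0:                                 # repulsion near obstacles
                  f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (p - q) / d
          return f                                              # desired velocity vector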

  1. High-Performance 3D Articulated Robot Display

    Science.gov (United States)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged locomotion over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other telemetered states visually, such as stresses or strains measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  2. High Precision GNSS Guidance for Field Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ladislav Jurišica

    2012-11-01

    Full Text Available In this paper, we discuss GNSS (Global Navigation Satellite System) guidance for field mobile robots. Several GNSS systems and receivers, as well as multiple measurement methods and principles of GNSS systems, are examined. We focus mainly on sources of error and investigate diverse approaches for precise measurement and effective use of GNSS systems for real-time robot localization. The main body of the article compares two GNSS receivers and their measurement methods. We design, implement and evaluate several mathematical methods for precise robot localization.

  3. [Robotics in pediatric surgery].

    Science.gov (United States)

    Camps, J I

    2011-10-01

    Despite the extensive use of robotics in the adult population, robotics has not been well accepted in pediatrics. There is still a lack of awareness among pediatric surgeons of how to use the robotic equipment, and of its advantages and indications. Its benefit is still controversial. Dexterity and better visualization of the surgical field are among its strong points. Conversely, cost and the lack of small instruments prevent the use of robotics in smaller patients. The aim of this manuscript is to present the controversies surrounding the use of robotics in pediatric surgery.

  4. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, with a total length of about 4000 h, in the broad field of biomedical sciences for this experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from, and content exploration among, massive collections of biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest and precisely locate video segments delivering personally valuable

  5. Air Force construction automation/robotics

    Science.gov (United States)

    Nease, AL; Dusseault, Christopher

    1994-01-01

    The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible, and more stringent construction and repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL, to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and the associated computing environment. The ultimate goal is the fielding of a robotic repair capability operating at the level of supervised autonomy. The authors of this paper discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.

  6. Robotic digital subtraction angiography systems within the hybrid operating room.

    Science.gov (United States)

    Murayama, Yuichi; Irie, Koreaki; Saguchi, Takayuki; Ishibashi, Toshihiro; Ebara, Masaki; Nagashima, Hiroyasu; Isoshima, Akira; Arakawa, Hideki; Takao, Hiroyuki; Ohashi, Hiroki; Joki, Tatsuhiro; Kato, Masataka; Tani, Satoshi; Ikeuchi, Satoshi; Abe, Toshiaki

    2011-05-01

    Fully equipped high-end digital subtraction angiography (DSA) within the operating room (OR) environment has emerged as a new trend in the fields of neurosurgery and vascular surgery. We describe our initial clinical experience with a robotic DSA system in the hybrid OR. A newly designed robotic DSA system (Artis zeego; Siemens AG, Forchheim, Germany) was installed in the hybrid OR. The system consists of a multiaxis robotic C-arm and a surgical OR table. In addition to conventional neuroendovascular procedures, the system was used as an intraoperative imaging tool for various neurosurgical procedures such as aneurysm clipping and spine instrumentation. Five hundred one neurosurgical procedures were successfully conducted in the hybrid OR with the robotic DSA. During surgical procedures such as aneurysm clipping and arteriovenous fistula treatment, intraoperative 2-/3-dimensional angiography and C-arm-based computed tomographic images (DynaCT) were easily obtained without moving the OR table. Newly developed virtual navigation software (syngo iGuide; Siemens AG) can be used for frameless navigation and for access to deep-seated intracranial lesions or needle placement. This newly developed robotic DSA system provides safe and precise treatment in the fields of endovascular treatment and neurosurgery.

  7. A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation

    National Research Council Canada - National Science Library

    Rosenblum, Mark; Gothard, Benny

    2006-01-01

    .... In the military sense, appropriate navigation implies the robot will avoid collision or contact with hazards, and will not be falsely re-routed around traversable terrain due to false hazard detections...

  8. Robot assisted navigated drilling for percutaneous pedicle screw placement: A preliminary animal study

    Directory of Open Access Journals (Sweden)

    Hongwei Wang

    2015-01-01

    Conclusions: The preliminary study supports the view that computer-assisted pedicle screw fixation using a spinal robot is feasible and that the robot can decrease the intraoperative fluoroscopy time during minimally invasive pedicle screw fixation surgery. As spine robotic surgery is still in its infancy, further research in this field is worthwhile; in particular, the accuracy of the spine robot system should be improved.

  9. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  10. Real-Time fusion of visual images and laser data images for safe navigation in outdoor environments

    OpenAIRE

    García-Alegre Sánchez, María C.; Martín, David; Guinea García-Alegre, Domingo M.; Guinea Díaz, Domingo

    2011-01-01

    [EN] In recent years, two-dimensional laser range finders mounted on vehicles have become a fruitful solution for achieving safety and environment recognition requirements (Keicher & Seufert, 2000), (Stentz et al., 2002), (DARPA, 2007). They provide real-time accurate range measurements in large angular fields at a fixed height above the ground plane, and enable robots and vehicles to perform a variety of tasks more confidently by fusing images from visual cameras with range data (...

  11. Shape Perception and Navigation in Blind Adults

    Science.gov (United States)

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available, and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development. PMID:28144226

  12. Hydraulic bilateral construction robot; Yuatsushiki bilateral kensetsu robot

    Energy Technology Data Exchange (ETDEWEB)

    Maehata, K.; Mori, N. [Kayaba Industry Co. Ltd., Tokyo (Japan)

    1999-05-15

    Concerning a hydraulic bilateral construction robot, its system configuration, the structures and functions of its main components, and the results of several tests are explained, and the research conducted at Gifu University is described. The construction robot in this report is a servo-controlled version developed from a mini-shovel now available on the market. In addition to an electrohydraulic servo control system, it is equipped with various sensors for detecting the robot's attitude, vibration, and load state, and with a camera for visualizing the surrounding landscape. It is also provided with a bilateral joystick, a remote control actuator capable of working-sensation feedback, and with a rocking unit that creates robot movements of rolling, pitching, and heaving. The construction robot discussed here, with output increased and response made faster by the adoption of a hydraulic drive system with the aim of building a robot system superior in performance to the conventional model designed primarily for heavy duty, proved in tests to be a highly sophisticated remotely controlled robot system. (NEDO)

  13. CLARAty: Challenges and Steps Toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Richard Madison

    2008-11-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  14. CLARAty: Challenges and Steps toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Issa A.D. Nesnas

    2006-03-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  15. Development of a surveillance robot for dimensional and visual inspection of fuel and reflector elements from the Fort St. Vrain HTGR

    International Nuclear Information System (INIS)

    Wallroth, C.F.; Marsh, N.I.; Miller, C.M.; Saurwein, J.J.; Smith, T.L.

    1979-11-01

    A robotic device has been developed for dimensional and visual inspection of irradiated HTGR core components. The robot consists of a rotary table and a two-finger probe, driven by stepping motors, and four remotely controlled television cameras. Automated operation is accomplished via minicomputer control. A total of 51 irradiated fuel and reflector elements were inspected at a fraction of the time and cost required for conventional methods

  16. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independence assumption. The experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  17. A Case Study on a Capsule Robot in the Gastrointestinal Tract to Teach Robot Programming and Navigation

    Science.gov (United States)

    Guo, Yi; Zhang, Shubo; Ritter, Arthur; Man, Hong

    2014-01-01

    Despite the increasing importance of robotics, there is a significant challenge involved in teaching this to undergraduate students in biomedical engineering (BME) and other related disciplines in which robotics techniques could be readily applied. This paper addresses this challenge through the development and pilot testing of a bio-microrobotics…

  18. Experiences with a Barista Robot, FusionBot

    Science.gov (United States)

    Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou

    In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching, and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute toward the long-term goal of intelligent service robotics in smart home environments.

  19. Optimum path planning of mobile robot in unknown static and dynamic environments using Fuzzy-Wind Driven Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Anish Pandey

    2017-02-01

    Full Text Available This article introduces a singleton type-1 fuzzy logic system (T1-SFLS) controller and a Fuzzy-WDO hybrid for autonomous mobile robot navigation and collision avoidance in unknown static and dynamic environments. The Wind Driven Optimization (WDO) algorithm is used to optimize and tune the input/output membership function parameters of the fuzzy controller. The WDO algorithm works on the basis of the atmospheric motion of infinitesimally small air parcels navigating over an N-dimensional search domain. The performance of the proposed technique has been compared through many computer simulations and real-time experiments using a Khepera-III mobile robot. Compared with the T1-SFLS controller, the Fuzzy-WDO algorithm is found to be in good agreement for mobile robot navigation.
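
    For orientation, a simplified parcel update in the spirit of the commonly cited WDO equations is sketched below, applied here to a normalized parameter vector such as fuzzy membership-function parameters; the coefficient values, clipping ranges and the Coriolis simplification are assumptions, not the authors' tuned settings.

      # Simplified WDO parcel update over a normalized parameter vector
      # (e.g., fuzzy membership-function parameters); coefficients assumed.
      import numpy as np

      def wdo_step(pos, vel, gbest, rank, alpha=0.4, g=0.2, RT=3.0, c=0.4):
          """rank: 1-based pressure rank of this parcel (1 = best)."""
          vel = ((1 - alpha) * vel                              # friction
                 - g * pos                                      # gravitation
                 + RT * abs(1 - 1.0 / rank) * (gbest - pos)     # pressure gradient
                 + c * vel[::-1] / rank)                        # Coriolis-like term
          vel = np.clip(vel, -1.0, 1.0)
          pos = np.clip(pos + vel, 0.0, 1.0)                    # stay in bounds
          return pos, vel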

  20. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of a robotic manipulator are controlled with the help of visual feedback by a new method that reduces position errors by an order of magnitude. The robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of the workspace and two mounted on the wrist of an auxiliary robot arm. The stereoscopic cameras on the arm give close-up views of the object and end effector. The cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  1. 4th IFToMM International Symposium on Robotics and Mechatronics

    CERN Document Server

    Laribi, Med; Gazeau, Jean-Pierre

    2016-01-01

    This volume contains papers that have been selected after review for oral presentation at ISRM 2015, the Fourth IFToMM International Symposium on Robotics and Mechatronics held in Poitiers, France, 23-24 June 2015. These papers provide a vision of the evolution of the disciplines of robotics and mechatronics, including but not limited to: mechanism design; modeling and simulation; kinematics and dynamics of multibody systems; control methods; navigation and motion planning; sensors and actuators; bio-robotics; micro/nano-robotics; complex robotic systems; walking machines and humanoids; parallel kinematic structures: analysis and synthesis; smart devices; new designs; applications and prototypes. The book can be used by researchers and engineers in the relevant areas of robotics and mechatronics.

  2. Ultra-Wideband Tracking System Design for Relative Navigation

    Science.gov (United States)

    Ni, Jianjun David; Arndt, Dickey; Bgo, Phong; Dekome, Kent; Dusl, John

    2011-01-01

    This presentation briefly discusses a design effort for a prototype ultra-wideband (UWB) time-difference-of-arrival (TDOA) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being designed for the localization and navigation of a rover in a GPS-deprived environment for surface missions. In one application enabled by UWB tracking, a robotic vehicle carrying equipment can autonomously follow a crewed rover from work site to work site, so that resources can be carried from one landing mission to the next, thereby saving up-mass. The UWB Systems Group at JSC has developed a UWB TDOA High Resolution Proximity Tracking System which can achieve sub-inch tracking accuracy of a target within the radius of the tracking baseline [1]. By extending the tracking capability beyond the radius of the tracking baseline, a tracking system is being designed to enable relative navigation between two vehicles for surface missions. A prototype UWB TDOA tracking system has been designed, implemented, tested, and proven feasible for relative navigation of robotic vehicles. Future work includes testing the system with the application code to increase the tracking update rate and evaluating the linear tracking baseline to improve the flexibility of antenna mounting on the following vehicle.
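
    The core TDOA computation can be illustrated with a small Gauss-Newton solver over range differences; this is a generic multilateration sketch (at least four receivers assumed for a 3D fix), not JSC's implementation.

      # Generic TDOA multilateration by Gauss-Newton over range differences.
      import numpy as np

      def tdoa_fix(rx, dd, x0, iters=10):
          """rx: (N, 3) receiver positions; dd: (N-1,) measured d_i - d_0
          (TDOA times speed of light); x0: initial position guess."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              d = np.linalg.norm(rx - x, axis=1)                   # ranges
              r = (d[1:] - d[0]) - dd                              # residuals
              J = (x - rx[1:]) / d[1:, None] - (x - rx[0]) / d[0]  # Jacobian
              x -= np.linalg.lstsq(J, r, rcond=None)[0]            # GN step
          return x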

  3. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Mo, Se Hyun [Amotech, Seoul (Korea, Republic of); Jeon, Young Pil [Samsung Electronics Co., Ltd. Suwon (Korea, Republic of); Park, Jong Ho [Seonam Univ., Namwon (Korea, Republic of); Chong, Kil To [Chon-buk Nat'l Univ., Jeonju (Korea, Republic of)

    2017-07-15

    With the development of ICT technology, the indoor use of robots is increasing. Research on transportation, cleaning, and guidance robots that can be used now, or that will widen the scope of future use, is advancing. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to navigate accurately toward the goal. This research addresses the issues related to the self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot by combining the robot's encoder information with absolute space coordinate transformation information obtained from external video sources, such as the many CCTV cameras installed in the room. Furthermore, the vector field histogram method was applied as the path-traveling algorithm of the mobile robot system, and the results of the research were confirmed through experiments.

  4. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    International Nuclear Information System (INIS)

    Mo, Se Hyun; Jeon, Young Pil; Park, Jong Ho; Chong, Kil To

    2017-01-01

    With the development of ICT technology, the indoor use of robots is increasing. Research on transportation, cleaning, and guidance robots that can be used now, or that will widen the scope of future use, is advancing. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to navigate accurately toward the goal. This research addresses the issues related to the self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot by combining the robot's encoder information with absolute space coordinate transformation information obtained from external video sources, such as the many CCTV cameras installed in the room. Furthermore, the vector field histogram method was applied as the path-traveling algorithm of the mobile robot system, and the results of the research were confirmed through experiments.
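
    The essence of the method described in this pair of records, dead-reckoning from wheel encoders corrected by absolute fixes from external cameras, can be sketched as follows; the differential-drive model and blending gain are assumptions, not the paper's estimator.

      # Dead-reckoning plus absolute correction; model and gain assumed.
      import numpy as np

      def propagate(pose, d_left, d_right, wheel_base):
          """Differential-drive odometry; pose = np.array([x, y, theta])."""
          d = 0.5 * (d_left + d_right)
          return pose + np.array([d * np.cos(pose[2]),
                                  d * np.sin(pose[2]),
                                  (d_right - d_left) / wheel_base])

      def correct(pose, cctv_pose, gain=0.3):
          """Blend toward the absolute pose from the external camera network."""
          err = cctv_pose - pose
          err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap heading
          return pose + gain * err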

  5. A 3-D Miniature LIDAR System for Mobile Robot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Future lunar initiatives will demand sophisticated operation of mobile robotics platforms. In particular, lunar site operations will benefit from robots, both...

  6. Developing Autonomous Vehicles That Learn to Navigate by Mimicking Human Behavior

    Science.gov (United States)

    2006-09-28

    navigate in an unstructured environment to a specific target or location. Subject terms: autonomous vehicles, fuzzy logic, learning behavior. Final report, 9/28/2006, Dean B. Edwards. ... In the future, as greater numbers of autonomous vehicles are employed ... Long-term goals: use LAGR (Learning Applied to Ground Robots

  7. Position Based Visual Servoing control of a Wheelchair Mounted Robotic Arm using Parallel Tracking and Mapping of task objects

    Directory of Open Access Journals (Sweden)

    Alessandro Palla

    2017-05-01

    Full Text Available In the last few years, power wheelchairs have become the only devices able to provide autonomy and independence to people with motor skill impairments. In particular, many power wheelchairs feature robotic arms for gesture emulation, such as interacting with objects. However, complex robotic arms often require a joystick to be controlled, and this makes the arm hard for impaired users to control. Paradoxically, if users were able to proficiently control such devices, they would not need them. For that reason, this paper presents a highly autonomous robotic arm, designed to minimize the effort necessary to control the arm. To that end, the arm features an easy-to-use human-machine interface and is controlled by a computer vision algorithm implementing Position Based Visual Servoing (PBVS) control. This was realized by extracting features from the camera and fusing them with the distance from the target, obtained by a proximity sensor. The Parallel Tracking and Mapping (PTAM) algorithm was used to find the 3D position of the task object in the camera reference system. The visual servoing algorithm was implemented on an embedded platform, in real time. Each part of the control loop was developed in the Robot Operating System (ROS) environment, which allows the previous algorithms to be implemented as different nodes. Theoretical analysis, simulations and in-system measurements proved the effectiveness of the proposed solution.
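
    The PBVS control law itself is compact; a textbook-style sketch follows, with the gain and the error parameterization (translation vector plus axis-angle rotation error) as assumptions rather than the paper's exact formulation.

      # Textbook PBVS law: drive translation and axis-angle rotation errors
      # to zero with a proportional gain (lam assumed).
      import numpy as np

      def pbvs_twist(t_err, R_err, lam=0.5):
          """t_err: current-to-desired translation; R_err: 3x3 rotation error."""
          angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0))
          if abs(angle) < 1e-8:
              theta_u = np.zeros(3)
          else:
              theta_u = angle / (2 * np.sin(angle)) * np.array(
                  [R_err[2, 1] - R_err[1, 2],
                   R_err[0, 2] - R_err[2, 0],
                   R_err[1, 0] - R_err[0, 1]])
          return lam * t_err, lam * theta_u      # linear, angular velocity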

  8. Cloud-Induced Uncertainty for Visual Navigation

    Science.gov (United States)

    2014-12-26

    can occur due to interference, jamming, or signal blockage in urban canyons. In GPS-denied environments, a GPS/INS navigation system is forced to rely...physics-based approaches use equations that model fluid flow, thermodynamics, water condensation, and evaporation to generate clouds [4]. The drawback

  9. DEVELOPMENT OF AN AMPHIBIOUS ROBOT FOR VISUAL INSPECTION OF APR1400 NPP IRWST STRAINER ASSEMBLY

    Directory of Open Access Journals (Sweden)

    YOU HYUN JANG

    2014-06-01

    Full Text Available An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainer in APR1400 instead of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 sub-assembly strainer fin modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has 6 thrusters for submarine voyaging and 4 legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for precise walking on top of the IRWST strainer. The IRWST strainer has several top cross braces, extruded on top of the strainer to maintain its frame, which can be obstacles to walking on the strainer. Therefore, a robot leg should land at a position beside a top cross brace. For this reason, we used an image processing technique to find the top cross brace in the sole camera image: the image is processed in real time with a cross edge detection algorithm to determine whether a top cross brace is present. A 5-DOF robot arm that has multiple camera modules for simultaneous inspection of both sides can penetrate the narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC together with camera angles and positions so that the images can be synthesized and merged. The synthesized images are then mapped onto a 3D CAD model of the IRWST strainer using the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into all of the gaps. Exact position control without an anchor under the water is not easy; therefore, we designed the multi-leg robot to serve the roles of anchoring and positioning. Quadruped robot design of

  10. Mobile Robots for Hospital Logistics

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan

    services to maintain the quality of healthcare provided. Logistics is the most resource demanding service in a hospital. The scale of the transportation tasks is huge and the material flow in a hospital is comparable to that of a factory. We believe that these transportation tasks, to a great extent, can...... be and will be automated using mobile robots. This talk consequently addresses the key technical issues of implementing service robots in hospitals. In simple terms, a robotic system for automating hospital logistics has to be reliable, adaptable and scalable. Robots have to be semi-autonomous, and should reliably...... navigate in large and dynamic environments in the hospital. The complexity of the problem has to be manageable, and the solutions have to be flexible, so that the system can be applicable in real world settings. This talk summarizes the efforts to address these issues. Upon the analysis...

  11. Drum inspection robots: Application development

    International Nuclear Information System (INIS)

    Hazen, F.B.; Warner, R.D.

    1996-01-01

    Throughout the Department of Energy (DOE), drums containing mixed and low-level stored waste are inspected, as mandated by the Resource Conservation and Recovery Act (RCRA) and other regulations. The inspections are intended to prevent leaks by finding corrosion long before the drums are breached. The DOE Office of Science and Technology (OST) has sponsored efforts toward the development of robotic drum inspectors. This emerging application of mobile robotics and remote sensing has broad applicability for DOE and commercial waste storage areas. Three full-scale robot prototypes have been under development, and another project has prototyped a novel technique to analyze robotically collected drum images. In general, the robots consist of a mobile, self-navigating base vehicle outfitted with sensor packages so that rust and other corrosion cues can be identified automatically. They promise to lower the radiation dose and operator effort required, while improving diligence, consistency, and documentation

  12. Real-time simulation for intra-operative navigation in robotic surgery. Using a mass spring system for a basic study of organ deformation.

    Science.gov (United States)

    Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G

    2007-01-01

    Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operating a surgical robot demands substantial training and continual practice on the part of the surgeon, because it requires difficult techniques different from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by the surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node at a point near the interaction, because this point ensures better accuracy for our simulation model. The next research step will focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
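
    A bare-bones explicit-integration mass-spring step of the kind such an organ deformation calculator could build on is sketched below; node masses, stiffness and damping constants are illustrative assumptions.

      # Explicit-Euler mass-spring step; all constants illustrative.
      import numpy as np

      def step(pos, vel, springs, rest_len, k=50.0, c=0.5, m=0.01, dt=1e-3):
          """pos, vel: (N, 3); springs: (M, 2) node-index pairs; rest_len: (M,)."""
          force = -c * vel                              # viscous damping
          for (i, j), L0 in zip(springs, rest_len):
              d = pos[j] - pos[i]
              L = np.linalg.norm(d)
              if L > 1e-9:
                  f = k * (L - L0) * d / L              # Hooke's law
                  force[i] += f
                  force[j] -= f
          vel = vel + dt * force / m                    # explicit Euler
          pos = pos + dt * vel
          return pos, vel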

  13. A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    Science.gov (United States)

    Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.

    2015-01-01

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low-cost and physically unobtrusive sensors, namely a camera and an infrared sensor. The system is built around corner features and depth values from the Kinect's infrared sensor. Obstacles are found in camera images using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
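
    The corner-plus-depth fusion can be illustrated with a short sketch; the lane-based decision rule, thresholds and function names below are simplified assumptions, not the published system.

      # Simplified corner-plus-depth obstacle check over three image lanes;
      # rgb and depth_m are assumed registered and the same size (metres).
      import cv2
      import numpy as np

      def suggest_direction(rgb, depth_m, stop_dist=1.2):
          gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
          corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                            qualityLevel=0.01, minDistance=7)
          h, w = depth_m.shape
          lanes = {"left": 0, "center": 0, "right": 0}
          for c in (corners if corners is not None else []):
              x, y = c.ravel().astype(int)
              d = depth_m[y, x]
              if d <= 0 or d > stop_dist:          # invalid or far: ignore
                  continue
              lanes["left" if x < w // 3 else
                    "center" if x < 2 * w // 3 else "right"] += 1
          if lanes["center"] == 0:
              return "forward"
          return "left" if lanes["left"] <= lanes["right"] else "right"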

  14. Learning for Autonomous Navigation

    Science.gov (United States)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Robotic ground vehicles for outdoor applications have achieved some remarkable successes, notably in autonomous highway following (Dickmanns, 1987), planetary exploration (1), and off-road navigation on Earth (1). Nevertheless, major challenges remain to enable reliable, high-speed, autonomous navigation in a wide variety of complex, off-road terrain. 3-D perception of terrain geometry with imaging range sensors is the mainstay of off-road driving systems. However, the stopping distance at high speed exceeds the effective lookahead distance of existing range sensors. Prospects for extending the range of 3-D sensors are strongly limited by sensor physics, eye safety of lasers, and related issues. Range sensor limitations also allow vehicles to enter large cul-de-sacs even at low speed, leading to long detours. Moreover, sensing only terrain geometry fails to reveal mechanical properties of terrain that are critical to assessing its traversability, such as the potential for slippage, sinkage, and the degree of compliance of potential obstacles. Rovers in the Mars Exploration Rover (MER) mission have become stuck in sand dunes and experienced significant downhill slippage in the vicinity of large rock hazards. Earth-based off-road robots today have very limited ability to discriminate traversable vegetation from non-traversable vegetation or rough ground. It is impossible today to preprogram a system with knowledge of these properties for all types of terrain and weather conditions that might be encountered.

  15. Using visual feedback distortion to alter coordinated pinching patterns for robotic rehabilitation

    Directory of Open Access Journals (Sweden)

    Brewer Bambi R

    2007-05-01

    Full Text Available Abstract. Background: It is common for individuals with chronic disabilities to continue using compensatory movement coordination due to entrenched habits, increased perception of task difficulty, or personality variables such as low self-efficacy or a fear of failure. Following our previous work using feedback distortion in a virtual rehabilitation environment to increase strength and range of motion, we address the use of a visual feedback distortion environment to alter movement coordination patterns. Methods: Fifty-one able-bodied subjects participated in the study. During the experiment, each subject learned to move their index finger and thumb in a particular target pattern while receiving visual feedback. Visual distortion was implemented as a magnification of the error between the thumb and/or index finger position and the desired position. The error reduction profile and the effect of distortion were analyzed by comparing the mean total absolute error and a normalized error that measured performance improvement for each subject as a proportion of the baseline error. Results: The results of the study showed that (1) a different coordination pattern could be trained with visual feedback and the new pattern transferred to trials without visual feedback, (2) distorting one finger at a time allowed a different error reduction profile from the controls, and (3) overall learning was not sped up by distorting individual fingers. Conclusion: It is important that robotic rehabilitation incorporate multi-limb and finger coordination tasks that are important for activities of daily life in the near future. This study marks the first investigation of multi-finger coordination tasks under visual feedback manipulation.
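
    The distortion itself reduces to a one-line remapping of the displayed error; a minimal sketch, with the magnification gain as an assumption:

      def distorted_display(actual_pos, target_pos, gain=1.5):
          """Render the finger cursor with its error magnified by `gain`."""
          return target_pos + gain * (actual_pos - target_pos)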

  16. Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions.

    Science.gov (United States)

    Shahin, Osama; Beširević, Armin; Kleemann, Markus; Schlaefer, Alexander

    2014-05-01

    Image-guided navigation aims to provide better orientation and accuracy in laparoscopic interventions. However, the ability of the navigation system to reflect anatomical changes and maintain high accuracy during the procedure is crucial. This is particularly challenging in soft organs such as the liver, where surgical manipulation causes significant tumor movements. We propose a fast approach to obtain an accurate estimation of the tumor position throughout the procedure. Initially, a three-dimensional (3D) ultrasound image is reconstructed and the tumor is segmented. During surgery, the position of the tumor is updated based on newly acquired tracked ultrasound images. The initial segmentation of the tumor is used to automatically detect the tumor and update its position in the navigation system. Two experiments were conducted. First, a controlled phantom motion using a robot was performed to validate the tracking accuracy. Second, a needle navigation scenario based on pseudotumors injected into ex vivo porcine liver was studied. In the robot-based evaluation, the approach estimated the target location with an accuracy of 0.4 ± 0.3 mm. The mean navigation error in the needle experiment was 1.2 ± 0.6 mm, and the algorithm compensated for tumor shifts up to 38 mm in an average time of 1 s. We demonstrated a navigation approach based on tracked laparoscopic ultrasound (LUS), and focused on the neighborhood of the tumor. Our experimental results indicate that this approach can be used to quickly and accurately compensate for tumor movements caused by surgical manipulation during laparoscopic interventions. The proposed approach has the advantage of being based on the routinely used LUS; however, it upgrades its functionality to estimate the tumor position in 3D. Hence, the approach is repeatable throughout surgery, and enables high navigation accuracy to be maintained.
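
    One simple stand-in for the per-frame position update is normalized cross-correlation of the initially segmented tumor template against each new tracked ultrasound frame; the 2D sketch below (OpenCV assumed) is an assumption-laden simplification of the paper's 3D pipeline.

      # 2D stand-in for the position update: match the initially segmented
      # tumor template in each new tracked ultrasound frame.
      import cv2

      def update_tumor_position(frame, template):
          """Return ((x, y) of the best match centre, correlation score)."""
          res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
          _, score, _, top_left = cv2.minMaxLoc(res)
          th, tw = template.shape[:2]
          return (top_left[0] + tw // 2, top_left[1] + th // 2), score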

  17. Underground mine navigation using an integrated IMU/TOF system with unscented Kalman filter

    CSIR Research Space (South Africa)

    Hlophe, K

    2011-07-01

    Full Text Available Underground Mine Navigation Using an Integrated IMU/TOF System with Unscented Kalman Filter. Presented at the 26th International Conference of CAD/CAM, Robotics & Factories of the Future, 26-28 July 2011, Kuala Lumpur, Malaysia...
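
    A minimal unscented-Kalman-filter skeleton for this kind of IMU/TOF fusion can be written with the filterpy library; the constant-velocity state model, beacon position and noise values below are assumptions, not the paper's formulation.

      # Constant-velocity UKF fusing a propagated state with a time-of-flight
      # range to one beacon (filterpy assumed).
      import numpy as np
      from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

      dt = 0.1
      beacon = np.array([10.0, 5.0])          # hypothetical TOF beacon position

      def fx(x, dt):                          # state: [px, py, vx, vy]
          F = np.array([[1, 0, dt, 0],
                        [0, 1, 0, dt],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1]], dtype=float)
          return F @ x

      def hx(x):                              # measurement: range to the beacon
          return [np.hypot(x[0] - beacon[0], x[1] - beacon[1])]

      points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
      ukf = UnscentedKalmanFilter(dim_x=4, dim_z=1, dt=dt,
                                  fx=fx, hx=hx, points=points)
      ukf.P *= 10.0                           # initial uncertainty
      ukf.R = np.diag([0.05])                 # range measurement noise
      ukf.Q = np.eye(4) * 0.01                # process noise

      ukf.predict()                           # motion propagation step
      ukf.update([7.3])                       # fuse one TOF range reading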

  18. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high-order neural network (RHONN) trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  19. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    Science.gov (United States)

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigating in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth data, onto a grid map generated using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed.

  20. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    Directory of Open Access Journals (Sweden)

    Tae Hyeon Nam

    2017-11-01

    Full Text Available Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigating in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth data, onto a grid map generated using the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed.
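
    The 2.5D map construction described in this pair of records reduces to projecting 3D points onto a grid while keeping one height per cell; a minimal sketch, with the grid resolution, map origin and max-height rule as assumptions:

      # Keep the maximum height per grid cell; map origin at (0, 0) assumed.
      import numpy as np

      def elevation_map(points_xyz, res=0.05, size=(200, 200)):
          """points_xyz: (N, 3) points in the map frame; returns a height grid."""
          grid = np.full(size, -np.inf)
          ij = np.floor(points_xyz[:, :2] / res).astype(int)
          ok = ((ij[:, 0] >= 0) & (ij[:, 0] < size[0]) &
                (ij[:, 1] >= 0) & (ij[:, 1] < size[1]))
          for (i, j), z in zip(ij[ok], points_xyz[ok, 2]):
              grid[i, j] = max(grid[i, j], z)           # tallest return wins
          return grid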