WorldWideScience

Sample records for robots performing navigation

  1. Human-robot collaborative navigation for autonomous maintenance management of nuclear installation

    International Nuclear Information System (INIS)

    Nugroho, Djoko Hari

    2002-01-01

    Development of human-robot collaborative navigation for autonomous maintenance management of a nuclear installation has been conducted. The human-robot collaborative system is operated by switching between autonomous navigation and manual navigation that incorporates human intervention. The autonomous navigation path is computed using a novel algorithm, the MLG method, based on Lozano-Perez's visibility graph; the MLG optimizes the shortest distance subject to safety constraints, while manual navigation is performed using robot teleoperation tools. The MLG autonomous navigation system was tested in six experiments with varying 3-D starting-point and destination-point coordinates. The experiments show good performance of autonomous robot maneuvering to avoid collisions with obstacles. The switching navigation is correctly interpreted using open or close commands sent over RS-232C, implemented in LabVIEW.
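
    The MLG algorithm itself is not given in the abstract; as a hedged illustration of the search step only, the sketch below runs Dijkstra's algorithm over a small hand-made visibility graph of waypoints whose edge weights are Euclidean distances. The node names, coordinates and edges are invented for the example; building the graph from obstacle geometry and adding safety constraints is omitted.

```python
import heapq
import math

# Hypothetical visibility graph: node -> (x, y) coordinates.
# In a visibility-graph planner these nodes would be the start, the goal and
# obstacle vertices that can "see" each other along collision-free segments.
nodes = {
    "start": (0.0, 0.0),
    "a": (2.0, 1.0),
    "b": (1.0, 3.0),
    "c": (4.0, 2.5),
    "goal": (5.0, 5.0),
}
edges = {  # undirected visibility edges (assumed collision-free)
    ("start", "a"), ("start", "b"), ("a", "c"),
    ("b", "c"), ("c", "goal"), ("b", "goal"),
}

def dist(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    return math.hypot(x2 - x1, y2 - y1)

def shortest_path(source, target):
    """Plain Dijkstra over the visibility graph."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break
        if d > best.get(u, math.inf):
            continue
        for v in adj[u]:
            nd = d + dist(u, v)
            if nd < best.get(v, math.inf):
                best[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    path, n = [target], target
    while n != source:
        n = prev[n]
        path.append(n)
    return list(reversed(path)), best[target]

print(shortest_path("start", "goal"))
```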

  2. Benchmark Framework for Mobile Robots Navigation Algorithms

    Directory of Open Access Journals (Sweden)

    Nelson David Muñoz-Ceballos

    2014-01-01

    Despite the wide variety of studies and research on mobile robot systems, performance metrics are not often examined. This makes it difficult to establish an objective comparison of achievements. In this paper, the navigation of an autonomous mobile robot is evaluated and several metrics are described. Collectively, these metrics provide an indication of navigation quality, useful for comparing and analyzing navigation algorithms of mobile robots. The method is suggested as an educational tool that allows students to optimize algorithm quality, relating to important aspects of science, technology and engineering teaching such as energy consumption, optimization and design.
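
    The paper's exact metric definitions are not listed in the abstract; as a hedged sketch, the snippet below computes a few commonly used navigation-quality metrics (path length, number of heading changes above a threshold, and mean clearance to the nearest obstacle) from a recorded trajectory. The trajectory, obstacle list and turn threshold are invented for the example.

```python
import math

def path_length(traj):
    """Total distance traveled along a list of (x, y) waypoints."""
    return sum(math.dist(p, q) for p, q in zip(traj, traj[1:]))

def heading_changes(traj, min_turn_deg=10.0):
    """Count of direction changes larger than min_turn_deg (a smoothness proxy)."""
    count = 0
    for p, q, r in zip(traj, traj[1:], traj[2:]):
        h1 = math.atan2(q[1] - p[1], q[0] - p[0])
        h2 = math.atan2(r[1] - q[1], r[0] - q[0])
        turn = abs(math.degrees(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))))
        if turn > min_turn_deg:
            count += 1
    return count

def mean_clearance(traj, obstacles):
    """Average distance from each waypoint to its nearest obstacle point."""
    return sum(min(math.dist(p, o) for o in obstacles) for p in traj) / len(traj)

# Invented example data.
traj = [(0, 0), (1, 0), (2, 0.2), (3, 1.0), (4, 1.1)]
obstacles = [(2, 2), (3, -1)]
print(path_length(traj), heading_changes(traj), mean_clearance(traj, obstacles))
```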

  3. Navigation Strategy by Contact Sensing Interaction for a Biped Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2008-11-01

    This report presents a basic contact interaction-based navigation strategy for a biped humanoid robot to support current vision-based navigation. The robot's arms were equipped with force sensors to detect physical contact with objects. We proposed a motion algorithm consisting of searching tasks, self-localization tasks, correction of locomotion direction tasks and obstacle avoidance tasks. Priority was given to the right-side direction when navigating the robot's locomotion. Analysis of trajectory generation, the biped gait pattern, and biped walking characteristics was performed to define an efficient navigation strategy for a biped walking humanoid robot. The proposed algorithm was evaluated in an experiment with a 21-DOF humanoid robot operating in a room with walls and obstacles. The experimental results reveal good robot performance when recognizing objects by touching and grasping, and continuously generating suitable trajectories to correct direction and avoid collisions.

  4. Control algorithms for autonomous robot navigation

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1985-01-01

    This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and the relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.

  5. Solar-based navigation for robotic explorers

    Science.gov (United States)

    Shillcutt, Kimberly Jo

    2000-12-01

    This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step-by-step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency.
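
    The thesis's ephemeris algorithm is not reproduced in the abstract; as a hedged sketch of the underlying calculation, the snippet below estimates solar elevation and azimuth from latitude, day of year and local solar time using a standard simplified formula (declination from day of year, hour angle from solar time). This is an approximation for illustration, not the implementation used on Nomad.

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees).

    lat_deg     : observer latitude in degrees (positive north)
    day_of_year : 1..365
    solar_hour  : local solar time in hours (12.0 = solar noon)
    Uses the common approximation decl = 23.44 * sin(360/365 * (284 + N)).
    """
    lat = math.radians(lat_deg)
    decl = math.radians(23.44 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year))))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(sin_elev)
    # Azimuth measured clockwise from north.
    cos_az = ((math.sin(decl) - math.sin(lat) * sin_elev)
              / (math.cos(lat) * math.cos(elev)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:          # afternoon: sun is west of the meridian
        az = 2 * math.pi - az
    return math.degrees(elev), math.degrees(az)

# A planner could treat the sun as shadowed whenever the elevation angle falls
# below the local terrain horizon in the sun's azimuth direction.
print(solar_position(lat_deg=-79.5, day_of_year=15, solar_hour=10.0))
```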

  6. A Qualitative Approach to Mobile Robot Navigation Using RFID

    International Nuclear Information System (INIS)

    Hossain, M; Rashid, M M; Bhuiyan, M M I; Ahmed, S; Akhtaruzzaman, M

    2013-01-01

    A Radio Frequency Identification (RFID) system allows automatic identification of items carrying RFID tags using radio waves. As each RFID tag has a unique identification number, it is also possible to detect the specific region in which the tag lies. Recently, RFID has been widely used in mobile robot navigation, localization, and mapping in both indoor and outdoor environments. This paper presents a navigation strategy for an autonomous mobile robot using a passive RFID system. Conventional approaches, such as landmark-based or dead-reckoning methods with an excessive number of sensors, complicate the navigation and localization process. The proposed method offers lower complexity in the navigation strategy as well as estimation of not only the position but also the orientation of the autonomous robot. In this research, a polar coordinate system is adopted on the navigation surface, where RFID tags are placed in a grid with constant displacements. This paper also presents performance comparisons among various grid architectures through simulation to establish a better solution for the navigation system. In addition, some stationary obstacles are introduced into the navigation environment to verify the viability of the navigation process of the autonomous mobile robot.
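
    The paper's polar-grid formulation is not detailed in the abstract; the sketch below only shows, under assumed data, the basic idea of inferring position and heading from the last two grid tags detected by a floor-reading RFID antenna. The tag IDs, grid spacing and mapping from ID to grid cell are invented for the example.

```python
import math

GRID_SPACING = 0.5  # metres between neighbouring tags (assumed)

# Hypothetical mapping from tag ID to grid indices (row, col).
TAG_GRID = {
    "tag_17": (3, 4),
    "tag_18": (3, 5),
    "tag_25": (4, 5),
}

def tag_to_xy(tag_id):
    row, col = TAG_GRID[tag_id]
    return col * GRID_SPACING, row * GRID_SPACING

def pose_from_tags(previous_tag, current_tag):
    """Estimate position (at the current tag) and heading from two detections.

    Heading is the direction of travel implied by moving from the previous
    tag to the current one; accuracy is limited by the grid spacing.
    """
    x0, y0 = tag_to_xy(previous_tag)
    x1, y1 = tag_to_xy(current_tag)
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (x1, y1), heading

print(pose_from_tags("tag_17", "tag_25"))  # position of tag_25 and a ~45 degree heading
```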

  7. Autonomous Robot Navigation based on Visual Landmarks

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

    The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment, then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on natural landmarks: the robot can automatically learn and store visual landmarks, and later recognize these landmarks from arbitrary positions and thus estimate robot position and heading.

  8. Mobile Robot Designed with Autonomous Navigation System

    Science.gov (United States)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear more and more in all aspects of life and social production, and people place more demands on them; one requirement is that robots be capable of autonomous navigation and able to recognize the road. Take the common household sweeping robot as an example, which can avoid obstacles, clean the floor and automatically find its charging station; another example is the AGV tracking cart, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme, SLAM, which can build the environment map in a completely unknown environment while simultaneously localizing the robot, so as to achieve autonomous navigation.

  9. A fuzzy logic based navigation for mobile robot

    International Nuclear Information System (INIS)

    Adel Ali S Al-Jumaily; Shamsudin M Amin; Mohamed Khalil

    1998-01-01

    The main issue for an intelligent robot is how to reach its goal safely in real time when it moves in an unknown environment. Navigational planning is becoming the central issue in the development of real-time autonomous mobile robots. Behaviour-based robots have been successful in reacting to dynamic environments, but there remain complex and challenging problems. Fuzzy-based behaviours offer a powerful method to solve real-time reactive navigation problems in unknown environments. We classify the navigation generation methods, give some characteristics of these methods, explain why fuzzy logic is suitable for the navigation of mobile robots and automated guided vehicles, and describe a reactive navigation approach that is flexible enough to react, through its behaviours, to changes in the environment. Some simulation results are presented to show the navigation of the robot. (Author)
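
    The paper's rule base is not given in the abstract; as a hedged sketch of a fuzzy reactive behaviour, the snippet below blends an obstacle-avoidance rule and a goal-seeking rule using simple shoulder membership functions and weighted-average defuzzification. The memberships, rule consequents and sensor values are invented for the example.

```python
def shoulder_down(x, lo, hi):
    """1 below lo, 0 above hi, linear in between (left-shoulder membership)."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def fuzzy_steering(front_dist, goal_bearing_deg):
    """Return a steering command in degrees (positive = turn left).

    Rule 1: IF the front obstacle is NEAR THEN turn hard away (+40 deg, assumed).
    Rule 2: IF the front obstacle is FAR  THEN turn towards the goal bearing.
    """
    near = shoulder_down(front_dist, 0.5, 2.0)       # membership of "near"
    far = 1.0 - near                                  # membership of "far"

    avoid_cmd = 40.0                # consequent of the avoidance rule (assumed)
    seek_cmd = goal_bearing_deg     # consequent of the goal-seeking rule

    # Weighted-average (centroid-of-singletons) defuzzification.
    return (near * avoid_cmd + far * seek_cmd) / (near + far)

print(fuzzy_steering(front_dist=0.8, goal_bearing_deg=-20.0))
```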

  10. Neurosurgical robotic arm drilling navigation system.

    Science.gov (United States)

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was to be drilled-through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Bio-robots automatic navigation with electrical reward stimulation.

    Science.gov (United States)

    Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    Bio-robots that are controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, ignoring the animals' intelligence. This paper proposes a new method to realize automatic navigation for bio-robots using electrical micro-stimulation as real-time rewards. Owing to the reward-seeking instinct and trial-and-error capability, the bio-robot can be steered to keep walking along the right route with rewards and correct its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the control scheme in a short time. The results show that our method simplifies the control logic and successfully realizes automatic navigation for rat-robots. Our work may have significant implications for the further development of bio-robots with hybrid intelligence.

  12. Robotics_MobileRobot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots and rovers exploring planets need to autonomously navigate to specified locations. Advanced Scientific Concepts, Inc. (ASC) and the University of Minnesota...

  13. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is presented for a humanoid robot navigation scenario in which the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  14. Laboratory experiments in mobile robot navigation

    International Nuclear Information System (INIS)

    Kar, Asim; Pal, Prabir K.

    1997-01-01

    Mobile robots have potential applications in remote surveillance and operation in hazardous areas. To be effective, they must have the ability to navigate on their own to desired locations. Several experimental navigation runs of the developed mobile robot have been conducted. The robot has three wheels, of which the front wheel is steered and the hind wheels are driven. The robot is equipped with an ultrasonic range sensor, which is turned around to get range data in all directions. The range data are fed to the input of a neural net, whose output steers the robot towards the goal. The robot is powered by batteries (12 V, 10 Ah). It has an onboard stepper motor controller for driving the wheels and the ultrasonic setup. It also has an onboard computer which runs the navigation program NAV. This program sends the range data and configuration parameters to the operator's console program OCP, running on a stationary PC, through radio communication on a serial line. Through OCP, an operator can monitor the progress of the robot from a distant control room and intervene if necessary. In this paper, the control modules of the mobile robot, its modes of operation and the results of some of the recorded experimental runs are reported. The trained net guides the mobile robot through gaps of 1 m and above to its destination with about 84% success, measured over a small sample of 38 runs.

  15. Building a grid-semantic map for the navigation of service robots through human–robot interaction

    Directory of Open Access Journals (Sweden)

    Cheng Zhao

    2015-11-01

    This paper presents an interactive approach to the construction of a grid-semantic map for the navigation of service robots in an indoor environment. It is based on the Robot Operating System (ROS) framework and contains four modules, namely the Interactive Module, Control Module, Navigation Module and Mapping Module. Three challenging issues have been the focus of its development: (i) how human voice and robot visual information can be effectively deployed in the mapping and navigation process; (ii) how semantic names can be combined with coordinate data in an online grid-semantic map; and (iii) how a localization-evaluate-relocalization method can be used in global localization based on the modified maximum particle weight of the particle swarm. A number of experiments were carried out in both simulated and real environments, such as corridors and offices, to verify its feasibility and performance.

  16. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

    As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for position determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) a high-level system for management of lower-level functions to achieve specific functional goals; (2) an intermediate level of goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) other supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication.

  17. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  18. Robot navigation in unknown terrains: Introductory survey of non-heuristic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (US); Kareti, S.; Shi, Weimin [Old Dominion Univ., Norfolk, VA (US). Dept. of Computer Science; Iyengar, S.S. [Louisiana State Univ., Baton Rouge, LA (US). Dept. of Computer Science

    1993-07-01

    A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed if the terrain model is known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.

  19. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Directory of Open Access Journals (Sweden)

    Lobo Pereira Fernando

    2010-02-01

    Background: The combination of robotic tools with assistive technology represents a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods: A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results: The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how ...
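
    The paper's sequential EKF-SLAM with line and corner features cannot be reproduced from the abstract; as a hedged, reduced sketch, the snippet below implements one EKF predict/update cycle for a unicycle robot localizing against a single known landmark with a range-bearing measurement. This is localization rather than SLAM: the landmark position, noise levels and inputs are invented, and the state holds only the robot pose.

```python
import numpy as np

def ekf_step(x, P, v, w, dt, z, landmark, Q, R):
    """One EKF predict/update for state x = [px, py, theta].

    v, w      : linear and angular velocity commands
    z         : measurement [range, bearing] to a known landmark
    landmark  : landmark position [lx, ly]
    Q, R      : process and measurement noise covariances
    """
    px, py, th = x
    # --- predict (unicycle motion model) ---
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q

    # --- update (range-bearing to the known landmark) ---
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [dy / q, -dx / q, -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi      # wrap the bearing innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([1e-3, 1e-3, 1e-4]), np.diag([0.05, 0.02])
x, P = ekf_step(x, P, v=0.5, w=0.1, dt=0.1, z=np.array([4.0, 0.3]),
                landmark=np.array([3.0, 2.0]), Q=Q, R=R)
print(x)
```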

  20. Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments.

    Science.gov (United States)

    Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting

    2015-09-01

    This paper presents a method that allows two wheeled mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FCs) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot performing OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms on the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.

  1. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    Science.gov (United States)

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without orientation aids, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, tests verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in typical applications and can be extended to other similar multi-link robots.
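
    The paper's constraint formulation is only summarized in the abstract; as a hedged sketch of the general idea of feeding motion constraints to an EKF, the snippet below applies a zero-lateral-velocity pseudo-measurement to a simple body-frame velocity state. The state layout, noise values and the choice of constraint are assumptions for illustration, not the paper's model.

```python
import numpy as np

def constraint_update(x, P, R_constraint=1e-4):
    """EKF update using the pseudo-measurement 'lateral body velocity = 0'.

    x : state [v_forward, v_lateral] in the robot body frame
    P : state covariance
    The constraint is treated as a measurement z = 0 of the lateral velocity,
    which is one common way gait or non-holonomic constraints are injected
    into inertial navigation filters.
    """
    H = np.array([[0.0, 1.0]])          # measures the lateral component only
    z = np.array([0.0])                 # the constraint says it should be zero
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R_constraint
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Drifted velocity estimate from integrating noisy IMU data (invented numbers).
x = np.array([0.30, 0.07])
P = np.diag([0.02, 0.02])
print(constraint_update(x, P)[0])   # the lateral component is pulled towards zero
```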

  2. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS and laser sensors, suffer from several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  3. Navigation strategies for multiple autonomous mobile robots moving in formation

    Science.gov (United States)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition is developed for the stability of a desired formation pattern for a fleet of robots, each equipped with the navigation strategy based on nearest-neighbor tracking. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.

  4. Efficient Reactive Navigation with Exact Collision Determination for 3D Robot Shapes

    Directory of Open Access Journals (Sweden)

    Mariano Jaimez

    2015-05-01

    This paper presents a reactive navigator for wheeled mobile robots moving on a flat surface which takes into account both the actual 3D shape of the robot and the 3D surrounding obstacles. The robot volume is modelled by a number of prisms consecutive in height, and the detected obstacles, which can be provided by different kinds of range sensor, are segmented into these heights. Then, the reactive navigation problem is tackled by a number of concurrent 2D navigators, one for each prism, which are consistently and efficiently combined to yield an overall solution. Our proposal for each 2D navigator is based on the concept of the “Parameterized Trajectory Generator” which models the robot shape as a polygon and embeds its kinematic constraints into different motion models. Extensive testing has been conducted in office-like and real house environments, covering a total distance of 18.5 km, to demonstrate the reliability and effectiveness of the proposed method. Moreover, additional experiments are performed to highlight the advantages of a 3D-aware reactive navigator. The implemented code is available under an open-source licence.

  5. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Science.gov (United States)

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without orientation aids, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, tests verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in typical applications and can be extended to other similar multi-link robots. PMID:29547515

  6. Markovian robots: Minimal navigation strategies for active particles

    Science.gov (United States)

    Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando

    2018-04-01

    We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain, ensuring the absence of fixed points in the dynamics, with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite these strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas into practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that it is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
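
    The paper's specific chain motifs are not given in the abstract; as a hedged one-dimensional sketch of the structure only, the snippet below simulates a particle whose one-bit controller switches state at rates that depend solely on the instantaneous local field value, with a direction reversal triggered by one of the transitions. Whether a given motif and rate choice actually produces gradient following is exactly what the paper analyses; the field shape, rates and step sizes here are invented, and the printed mean field value is just a diagnostic to compare rate choices.

```python
import math
import random

def field(x):
    """External scalar field with a maximum at x = 0 (assumed shape)."""
    return math.exp(-x * x / 10.0)

def simulate(rate_01, rate_10, steps=50000, dt=0.01, speed=1.0, seed=0):
    """1D robot with a one-bit navigation control system (NCS).

    The NCS state s flips 0->1 at rate rate_01(c) and 1->0 at rate rate_10(c),
    where c is the instantaneous local field value; the robot reverses its
    direction of motion whenever the 1->0 transition fires.  No past field
    values are stored, matching the constraint described in the abstract.
    """
    rng = random.Random(seed)
    x, direction, s = 5.0, -1, 0
    visited = 0.0
    for _ in range(steps):
        c = field(x)
        if s == 0:
            if rng.random() < rate_01(c) * dt:
                s = 1
        else:
            if rng.random() < rate_10(c) * dt:
                s = 0
                direction = -direction        # tumble triggered by the transition
        x += direction * speed * dt
        visited += field(x)
    return visited / steps                    # mean field value along the trajectory

# Two invented rate motifs; comparing such choices is what the paper's analysis is for.
print(simulate(rate_01=lambda c: 1.0 + 4.0 * c, rate_10=lambda c: 1.0 + 4.0 * c))
print(simulate(rate_01=lambda c: 5.0 - 4.0 * c, rate_10=lambda c: 5.0 - 4.0 * c))
```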

  7. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    Directory of Open Access Journals (Sweden)

    Xu Zhao

    2018-03-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without orientation aids, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, tests verify the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in typical applications and can be extended to other similar multi-link robots.

  8. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Science.gov (United States)

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning on various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  9. Practical indoor mobile robot navigation using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2011-01-01

    This paper presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as 2D occupancy grids by a range sensor to represent local information about partial areas. The global topological map is used to indicate the connectivity of the 'places-of-interest' in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization ... robot and evaluated in a hospital environment.
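
    The abstract describes global planning over a topological map whose nodes link local metric grids; as a hedged sketch of that split, the snippet below runs breadth-first search over an invented place graph to obtain the sequence of places (and hence local grid maps) the robot should traverse, leaving local grid navigation to a separate planner. The place names and connectivity are invented.

```python
from collections import deque

# Hypothetical topological map: place -> neighbouring places.
# Each place would also carry its own local 2D occupancy grid (omitted here).
PLACES = {
    "charging_dock": ["corridor_a"],
    "corridor_a": ["charging_dock", "ward_1", "corridor_b"],
    "corridor_b": ["corridor_a", "ward_2", "elevator"],
    "ward_1": ["corridor_a"],
    "ward_2": ["corridor_b"],
    "elevator": ["corridor_b"],
}

def global_route(start, goal):
    """Breadth-first search over the topological map (fewest place transitions)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in PLACES[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The robot would then navigate each consecutive pair of places using the
# corresponding local occupancy grid and a local planner.
print(global_route("charging_dock", "ward_2"))
```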

  10. Navigation control of a multi-functional eye robot

    International Nuclear Information System (INIS)

    Ali, F.A.M.; Hashmi, B.; Younas, A.; Abid, B.

    2016-01-01

    Advancement in the field of robotics has been rapid over the past few decades. Robots are being used in different fields of science as well as in warfare, and research shows that in the near future robots will be able to serve in fighting wars; different countries and their armies have already deployed several military robots. However, there exist some drawbacks, such as inefficiency and the inability to work under abnormal conditions, which advances in artificial intelligence may resolve in the coming future. The main focus of this paper is to provide a low-cost, long-range and efficient mechanical as well as software design for an Eye Robot. Using a blend of robotics and image processing with the addition of artificial-intelligence path navigation techniques, the robot (including its robotic arm and camera) is controlled manually through a 2.4 GHz RF module. The autonomous function of the robot is navigation along a path assigned to it; the path is drawn in a VB-based application and then transferred to the robot wirelessly or through a serial port. Wi-Fi based video streaming with Optical Character Recognition (OCR) can also be observed on remote devices such as laptops. (author)

  11. Interaction dynamics of multiple mobile robots with simple navigation strategies

    Science.gov (United States)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.

  12. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    Science.gov (United States)

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

    Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree-of-freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movements, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. Behaviour based Mobile Robot Navigation Technique using AI System: Experimental Investigation on Active Media Pioneer Robot

    Directory of Open Access Journals (Sweden)

    S. Parasuraman, V.Ganapathy

    2012-10-01

    A key issue in research on autonomous robots is the design and development of the navigation technique that enables the robot to navigate in a real-world environment. In this research, the issues investigated and methodologies established include (a) design of the individual behaviors and behavior rule selection using an alpha-level fuzzy logic system, (b) design of the controller, which maps the sensor inputs to the motor outputs through a model-based fuzzy logic inference system, and (c) formulation of the decision-making process using an alpha-level fuzzy logic system. The proposed method is applied to the Active Media Pioneer robot and the results are discussed and compared with widely accepted methods. This approach provides a formal methodology for representing and implementing the human expert's heuristic knowledge and perception-based action in mobile robot navigation. In this approach, the operational strategies of the human expert driver are transferred via fuzzy logic to the robot navigation in the form of a set of simple conditional statements composed of linguistic variables. Keywords: mobile robot, behavior-based control, fuzzy logic, alpha-level fuzzy logic, obstacle avoidance behavior, goal seek behavior

  14. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    Science.gov (United States)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot to reach the goal location safely, by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller trained with the conjugate gradient backpropagation algorithm gives a robust response to guide the mobile robot accurately to the goal position.

  15. Memristive device based learning for navigation in robots.

    Science.gov (United States)

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities are difficult to achieve in robots, especially if done in real time with ultra-low energy consumption. Here, we present a novel memristive device based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities to a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  16. Real Time Mapping and Dynamic Navigation for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Maki K. Habib

    2008-11-01

    This paper discusses the importance, the complexity and the challenges of mapping a mobile robot's unknown and dynamic environment, besides the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within unknown environments, and then introduces a solution for the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensor readings while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted and are illustrated to show the validity of the developed mapping and obstacle avoidance approach.
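
    The abstract mentions incremental occupancy-grid mapping from ultrasonic readings; as a hedged sketch of one common update rule (not necessarily the paper's), the snippet below marches along the sonar beam axis, lowering the occupancy log-odds of cells before the measured range and raising it at the cell containing the range return. The grid size, resolution and log-odds increments are invented, and the sonar cone width is ignored.

```python
import math

RESOLUTION = 0.1               # metres per cell (assumed)
GRID_SIZE = 100                # 10 m x 10 m map
L_FREE, L_OCC = -0.4, 0.85     # log-odds increments (assumed)

# Log-odds occupancy grid, 0.0 = unknown (probability 0.5).
grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def cell(value):
    """World coordinate (metres) to grid index."""
    return int(round(value / RESOLUTION))

def integrate_sonar(grid, rx, ry, heading, measured_range, max_range=5.0):
    """Update the grid along the beam axis of one ultrasonic reading.

    Cells closer than the measured range become freer; the cell at the
    measured range becomes more occupied.
    """
    steps = int(min(measured_range, max_range) / RESOLUTION)
    for k in range(steps + 1):
        d = k * RESOLUTION
        cx = cell(rx + d * math.cos(heading))
        cy = cell(ry + d * math.sin(heading))
        if not (0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE):
            break
        if d < measured_range - RESOLUTION / 2:
            grid[cy][cx] += L_FREE            # free space along the beam
        elif measured_range < max_range:
            grid[cy][cx] += L_OCC             # obstacle at the range return

def occupancy_probability(log_odds):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

integrate_sonar(grid, rx=5.0, ry=5.0, heading=0.0, measured_range=2.0)
print(occupancy_probability(grid[cell(5.0)][cell(7.0)]))   # cell 2 m ahead: likely occupied
```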

  17. Exploration and Navigation for Mobile Robots With Perceptual Limitations

    Directory of Open Access Journals (Sweden)

    Leonardo Romero

    2006-09-01

    To learn a map of an environment, a mobile robot has to explore its workspace using its sensors. Sensors are noisy and have perceptual limitations that must be considered while learning a map. This paper considers a mobile robot with sensor perceptual limitations and introduces a new method for exploring and navigating autonomously in indoor environments. To minimize the risk of collisions as well as to avoid exceeding the range of the sensors, we introduce the concept of a travel space as a way to associate costs with the grid cells of the map, based on distances to obstacles. During exploration, the mobile robot minimizes its movements, including rotations, to reach the nearest unexplored region of the environment, using a dynamic programming algorithm. Once the exploration ends, the travel space is used to form a roadmap, a net of safe roads that the mobile robot can use for navigation. These exploration and navigation methods are tested using a simulated and a real mobile robot with promising results.
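
    The exact cost definition of the travel space is not given in the abstract; as a hedged sketch, the snippet below computes a distance-to-obstacle transform on a small grid, turns it into traversal costs that penalize cells close to obstacles, and uses Dijkstra (a dynamic programming shortest-path recursion) to reach the nearest unexplored cell. The map, cost shape and cell symbols are invented.

```python
import heapq
from collections import deque

# Invented grid: '#' obstacle, '.' explored free space, '?' unexplored, 'S' start.
GRID = ["#######",
        "#..?..#",
        "#.##..#",
        "#..S.?#",
        "#######"]
ROWS, COLS = len(GRID), len(GRID[0])
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def obstacle_distance():
    """Multi-source BFS giving each cell its distance (in cells) to the nearest obstacle."""
    dist = [[None] * COLS for _ in range(ROWS)]
    queue = deque()
    for r in range(ROWS):
        for c in range(COLS):
            if GRID[r][c] == "#":
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def path_to_nearest_unexplored(start):
    """Dijkstra over the travel-space costs; stops at the first '?' cell reached."""
    dist = obstacle_distance()
    cost = lambda r, c: 1.0 + 4.0 / dist[r][c]      # near obstacles -> expensive (assumed)
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        d, (r, c), path = heapq.heappop(queue)
        if GRID[r][c] == "?":
            return path, d
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if GRID[nr][nc] != "#":
                nd = d + cost(nr, nc)
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc), path + [(nr, nc)]))
    return None, float("inf")

start = (3, 3)   # the 'S' cell
print(path_to_nearest_unexplored(start))
```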

  18. Exploration and Navigation for Mobile Robots With Perceptual Limitations

    Directory of Open Access Journals (Sweden)

    Eduardo F. Morales

    2008-11-01

    To learn a map of an environment, a mobile robot has to explore its workspace using its sensors. Sensors are noisy and have perceptual limitations that must be considered while learning a map. This paper considers a mobile robot with sensor perceptual limitations and introduces a new method for exploring and navigating autonomously in indoor environments. To minimize the risk of collisions as well as to avoid exceeding the range of the sensors, we introduce the concept of a travel space as a way to associate costs with the grid cells of the map, based on distances to obstacles. During exploration, the mobile robot minimizes its movements, including rotations, to reach the nearest unexplored region of the environment, using a dynamic programming algorithm. Once the exploration ends, the travel space is used to form a roadmap, a net of safe roads that the mobile robot can use for navigation. These exploration and navigation methods are tested using a simulated and a real mobile robot with promising results.

  19. Navigation of robotic system using cricket motes

    Science.gov (United States)

    Patil, Yogendra J.; Baine, Nicholas A.; Rattan, Kuldip S.

    2011-06-01

    This paper presents a novel algorithm for self-mapping of the cricket motes that can be used for indoor navigation of autonomous robotic systems. The cricket system is a wireless sensor network that can provide an indoor localization service to its users via acoustic ranging techniques. The behavior of the ultrasonic transducer on the cricket mote is studied, and the regions where satisfactory distance measurements can be obtained are recorded. Placing the motes in these regions results in fine-grained mapping of the cricket motes. Trilateration is used to obtain a rigid coordinate system, but is insufficient if the network is to be used for navigation. A modified SLAM algorithm is applied to overcome the shortcomings of trilateration. Finally, the self-mapped cricket motes can be used for navigation of autonomous robotic systems in an indoor location.
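
    Trilateration itself is standard; as a hedged sketch, the snippet below solves the linearized least-squares trilateration problem for a 2D position from ranges to three or more beacons at known positions. The beacon layout and ranges are invented, and the paper's self-mapping and SLAM refinement are not shown.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares 2D position from ranges to beacons at known positions.

    Subtracting the first beacon's range equation from the others removes the
    quadratic terms and leaves a linear system A p = b in the unknown p = (x, y).
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = beacons[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution

# Invented beacon (mote) positions and slightly noisy ranges to the point (2, 1).
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0), (5.0, 4.0)]
true_point = np.array([2.0, 1.0])
ranges = [np.linalg.norm(true_point - np.array(b)) + 0.01 for b in beacons]
print(trilaterate(beacons, ranges))
```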

  20. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    Science.gov (United States)

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

    Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have expanded as well. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority compared to traditional techniques prior to assimilation of their use amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors and utilizing exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  1. A biologically inspired meta-control navigation system for the Psikharpax rat robot

    International Nuclear Information System (INIS)

    Caluwaerts, K; Staffa, M; N’Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M

    2012-01-01

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations. But the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which was the optimal strategy in each situation, by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment—recognized as new contexts—and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics. (paper)
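
    The meta-controller is described as learning, through reinforcement learning, which navigation strategy works best in each situation and restoring those preferences when a known context reappears. As a hedged sketch of that idea (not the Psikharpax implementation), the snippet below keeps a small value table per detected context over two strategies and updates it from reward; the contexts, reward scheme and learning rate are invented.

```python
import random
from collections import defaultdict

STRATEGIES = ["place_planning", "cue_guided_taxon"]

class StrategySelector:
    """Context-indexed value learning over two navigation strategies (a sketch)."""

    def __init__(self, alpha=0.2, epsilon=0.1, seed=0):
        self.q = defaultdict(lambda: {s: 0.0 for s in STRATEGIES})
        self.alpha, self.epsilon = alpha, epsilon
        self.rng = random.Random(seed)

    def choose(self, context):
        if self.rng.random() < self.epsilon:          # occasional exploration
            return self.rng.choice(STRATEGIES)
        values = self.q[context]
        return max(values, key=values.get)

    def update(self, context, strategy, reward):
        # One-step (bandit-style) value update; keeping a separate table per
        # context is what lets earlier preferences be restored when a context reappears.
        q = self.q[context][strategy]
        self.q[context][strategy] = q + self.alpha * (reward - q)

selector = StrategySelector()
env_rng = random.Random(1)
for _ in range(200):
    # Invented environment: the taxon strategy pays off when a cue is visible,
    # the place-planning strategy pays off when it is hidden.
    context = env_rng.choice(["cue_visible", "cue_hidden"])
    strategy = selector.choose(context)
    good = (context == "cue_visible") == (strategy == "cue_guided_taxon")
    selector.update(context, strategy, reward=1.0 if good else 0.0)
print({c: max(v, key=v.get) for c, v in selector.q.items()})
```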

  2. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    In machine vision, the occlusion problem is always a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of an indoor environment. The map is assumed to be utilized by the mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for longer periods.

  3. A New Classification Technique in Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Bambang Tutuko

    2011-12-01

    This paper presents a novel pattern recognition algorithm that uses the weightless neural network (WNN) technique. This technique plays the role of a situation classifier, judging the situation around the mobile robot's environment and making control decisions in mobile robot navigation. The WNN technique is chosen due to significant advantages over conventional neural networks: it can be easily implemented in hardware using standard RAM, trains faster and works with small resources. Using a simple classification algorithm, similar data are grouped with each other, and similar data classes can be attached to specific local areas in the mobile robot environment. This strategy is demonstrated on a simple mobile robot powered by a low-cost microcontroller with 512 bytes of RAM and low-cost sensors. Experimental results show that, as the number of neurons increases, the average environmental recognition rate rises from 87.6% to 98.5%. The WNN technique allows the mobile robot to recognize many different environmental patterns and avoid obstacles in real time. Moreover, using the proposed WNN technique the mobile robot successfully reached the goal in a dynamic environment, compared to a fuzzy logic technique and a logic function, coping with uncertainty in sensor readings and achieving good performance in control actions with a 0.56% error rate in mobile robot speed.
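
    The abstract describes a RAM-implementable weightless neural network classifier; as a hedged sketch of that family of methods (a minimal WiSARD-style discriminator, not necessarily the paper's exact scheme), the snippet below trains one RAM-based discriminator per class on binary sensor patterns and classifies by counting RAM hits. The patterns, tuple size and class names are invented.

```python
import random

class Discriminator:
    """One WiSARD-style discriminator: a set of RAM nodes over bit n-tuples."""

    def __init__(self, input_bits, tuple_size, rng):
        order = list(range(input_bits))
        rng.shuffle(order)                      # fixed random input-to-RAM mapping
        self.tuples = [order[i:i + tuple_size] for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, bits):
        for tup, ram in zip(self.tuples, self.rams):
            yield ram, tuple(bits[i] for i in tup)

    def train(self, bits):
        for ram, addr in self._addresses(bits):
            ram.add(addr)                       # write a 1 at this RAM address

    def response(self, bits):
        return sum(addr in ram for ram, addr in self._addresses(bits))

rng = random.Random(0)
classes = {"open_space": [[0] * 12, [0] * 11 + [1]],
           "obstacle_ahead": [[1] * 6 + [0] * 6, [1] * 7 + [0] * 5]}
discriminators = {name: Discriminator(input_bits=12, tuple_size=3, rng=rng)
                  for name in classes}
for name, patterns in classes.items():
    for p in patterns:
        discriminators[name].train(p)

probe = [1] * 6 + [0] * 5 + [1]                 # a noisy "obstacle_ahead"-like pattern
scores = {name: d.response(probe) for name, d in discriminators.items()}
print(max(scores, key=scores.get), scores)
```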

  4. Performance comparison of novel WNN approach with RBFNN in navigation of autonomous mobile robotic agent

    Directory of Open Access Journals (Sweden)

    Ghosh Saradindu

    2016-01-01

    Full Text Available This paper addresses the performance comparison of a Radial Basis Function Neural Network (RBFNN) with a novel Wavelet Neural Network (WNN) for designing intelligent controllers for path planning of a mobile robot in an unknown environment. In the proposed WNN, different types of activation functions, such as Mexican Hat, Gaussian and Morlet wavelet functions, are used in the hidden nodes. The neural networks are trained by an intelligent supervised learning technique so that the robot makes a collision-free path in the unknown environment during navigation from different starting points to targets/goals. The efficiency of the two algorithms is compared using MATLAB simulations and an experimental setup with an Arduino Mega 2560 microcontroller, in terms of path length and time taken to reach the target as indicators of the accuracy of the network models.
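
    The three hidden-node activations named above are standard wavelet shapes. The sketch below simply defines them (in unnormalized form) together with the usual translation/dilation applied at each hidden node; the function and parameter names are illustrative, not taken from the paper.

        import numpy as np

        def mexican_hat(x):
            # Mexican-hat (Ricker) wavelet: second derivative of a Gaussian.
            return (1.0 - x**2) * np.exp(-x**2 / 2.0)

        def gaussian(x):
            return np.exp(-x**2 / 2.0)

        def morlet(x, w0=5.0):
            # Real-valued Morlet wavelet: cosine carrier under a Gaussian envelope.
            return np.cos(w0 * x) * np.exp(-x**2 / 2.0)

        def hidden_node(x, translation, dilation, wavelet=mexican_hat):
            # A WNN hidden node shifts and scales the input before applying the wavelet.
            return wavelet((x - translation) / dilation)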

  5. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of the terrain. A key achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  6. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of the terrain. A key achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  7. An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas

    Directory of Open Access Journals (Sweden)

    David Zapata

    2013-01-01

    Full Text Available There are many outdoor robotic applications where a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (like path planning) require the use of known maps or previous information about the environment. This work presents a system composed of a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real time by putting together the information from the camera and the positioning system of the ground robot. A set of experiments was carried out with the purpose of verifying the system's applicability. The experiments were performed in a simulation environment and outdoors with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments.

  8. Integrated navigation of aerial robot for GPS and GPS-denied environment

    International Nuclear Information System (INIS)

    Suzuki, Satoshi; Min, Hongkyu; Nonami, Kenzo; Wada, Tetsuya

    2016-01-01

    In this study, a novel robust navigation system for an aerial robot in GPS and GPS-denied environments is proposed. Generally, an aerial robot uses position and velocity information from the Global Positioning System (GPS) for guidance and control. However, GPS cannot be used in several environments; for example, GPS has large errors near buildings and trees, indoors, and so on. In such GPS-denied environments, Light Detection and Ranging (LIDAR) sensor-based navigation systems have generally been used. However, the LIDAR sensor also has a weakness: it cannot be used in open outdoor environments where GPS is available. Therefore, it is desirable to develop an integrated navigation system that applies seamlessly to both GPS and GPS-denied environments. In this paper, an integrated navigation system for an aerial robot using GPS and LIDAR is developed. The navigation system is designed based on an Extended Kalman Filter, and the effectiveness of the developed system is verified by numerical simulation and experiment. (paper)
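
    A minimal sketch of the kind of Kalman-filter fusion described: a constant-velocity state is predicted forward and corrected by whichever position fix is currently available (GPS outdoors, a LIDAR-based localiser otherwise). The state layout, noise values and class names are assumptions for illustration, not the paper's implementation; with the linear models used here the filter reduces to a plain Kalman filter rather than an EKF.

        import numpy as np

        class PositionFusionFilter:
            def __init__(self):
                self.x = np.zeros(4)            # state: [px, py, vx, vy]
                self.P = np.eye(4)

            def predict(self, dt, q=0.1):
                F = np.eye(4)
                F[0, 2] = F[1, 3] = dt          # position integrates velocity
                self.x = F @ self.x
                self.P = F @ self.P @ F.T + q * np.eye(4)

            def update_position(self, z, r):
                # z: measured [px, py] from GPS or the LIDAR localiser, r: its variance.
                H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
                R = r * np.eye(2)
                y = z - H @ self.x
                S = H @ self.P @ H.T + R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ H) @ self.P

        # Seamless switching amounts to feeding GPS fixes when available and LIDAR-derived
        # fixes otherwise, each with its own measurement noise r.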

  9. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system-prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  10. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  11. Mobile robot navigation in unknown static environments using ANFIS controller

    Directory of Open Access Journals (Sweden)

    Anish Pandey

    2016-09-01

    Full Text Available Navigation and obstacle avoidance are the most important tasks for any mobile robot. This article presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) controller for mobile robot navigation and obstacle avoidance in unknown static environments. Different sensors, such as an ultrasonic range finder and a Sharp infrared range sensor, are used to detect forward obstacles in the environment. The inputs of the ANFIS controller are the obstacle distances obtained from the sensors, and the controller output is the robot steering angle. The primary objective of the present work is to use the ANFIS controller to guide the mobile robot in the given environments. Computer simulations are conducted in MATLAB and implemented in real time using C/C++ on an Arduino microcontroller-based mobile robot. Moreover, the successful experimental results on the actual mobile robot demonstrate the effectiveness and efficiency of the proposed controller.

  12. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. This method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of space obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research helps address the problem of autonomous mobile robot navigation in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space exploration robots in dark, GPS-denied environments.

  13. Multi-Sensor SLAM Approach for Robot Navigation

    Directory of Open Access Journals (Sweden)

    Sid Ahmed BERRABAH

    2010-12-01

    Full Text Available To be able to operate and act successfully, the robot needs to know at any time where it is. This means the robot has to find out its location relative to the environment. This contribution addresses increasing the accuracy of mobile robot positioning in large outdoor environments based on data fusion from different sensors: camera, GPS, inertial navigation system (INS), and wheel encoders. The fusion is done in a Simultaneous Localization and Mapping (SLAM) approach. The paper gives an overview of the proposed algorithm and discusses the obtained results.

  14. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Features (SURF) is used to position a robot with respect to an environment and aid in vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may cause a robot to deviate from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back to the track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, with subsequent restoration by corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
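
    A rough sketch of the idea with OpenCV: keypoints in the current view are matched against a stored on-track reference view, and the median horizontal displacement of the matches serves as a deviation signal fed to a corrective proportional law. ORB is used here only as a freely available stand-in for SURF; the gains, sign convention and function names are illustrative assumptions, and the OpenMP parallelisation is not shown.

        import cv2
        import numpy as np

        def deviation_from_reference(reference_img, current_img):
            # Median horizontal shift of matched keypoints between the on-track
            # reference view and the current view; used as a deviation signal.
            orb = cv2.ORB_create(500)
            k1, d1 = orb.detectAndCompute(reference_img, None)
            k2, d2 = orb.detectAndCompute(current_img, None)
            if d1 is None or d2 is None:
                return None
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            if not matches:
                return None
            dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
            return float(np.median(dx))   # median is robust to outlier matches

        def corrective_wheel_speeds(deviation_px, base_speed=0.3, gain=0.001):
            # Dynamic auto-correction as a simple proportional law on the deviation
            # (sign convention is an assumption; calibrate on the actual platform).
            turn = gain * deviation_px
            return base_speed - turn, base_speed + turn   # (left, right) wheel speeds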

  15. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    Science.gov (United States)

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the characteristics of the animal in navigation. This paper proposes a new method for bio-robots' automatic navigation, combining a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal. Given a graded electrical reward, the animal, e.g. the rat, seeks the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route by co-learning. This work provides significant inspiration for the practical development of bio-robots' navigation with hybrid intelligence.

  16. Neurobiologically inspired mobile robot navigation and planning

    Directory of Open Access Journals (Sweden)

    Mathias Quoy

    2007-11-01

    Full Text Available After a short review of biologically inspired navigation architectures, mainly relying on modeling the hippocampal anatomy, or at least some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of the hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type “transition cells” that encompasses traditional “place cells”.

  17. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.

    Science.gov (United States)

    Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin

    2018-02-14

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.

  18. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots

    Directory of Open Access Journals (Sweden)

    Tyrone Sherwin

    2018-02-01

    Full Text Available Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications such as factory automation, indoor delivery, and even search and rescue scenarios require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach of dynamic target localisation using a single RF emitter, which will be used as the basis of allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with the distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
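
    The RSS-based emitter localisation with a particle filter can be sketched as follows: each particle hypothesises an emitter position, is weighted by how well it explains the distance implied by the latest RSS reading under a log-distance path-loss model, and the cloud is resampled when it degenerates. The path-loss parameters, noise levels and class names are illustrative assumptions, not the paper's values.

        import numpy as np

        def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
            # Log-distance path-loss model (tx_power: RSSI at 1 m, n: path-loss exponent).
            return 10.0 ** ((tx_power - rssi) / (10.0 * n))

        class EmitterParticleFilter:
            def __init__(self, n_particles=500, area=10.0):
                self.p = np.random.uniform(-area, area, size=(n_particles, 2))
                self.w = np.full(n_particles, 1.0 / n_particles)

            def update(self, robot_xy, rssi, sigma=1.0):
                # Weight each particle by how well it explains the measured RSS distance.
                d_meas = rssi_to_distance(rssi)
                d_part = np.linalg.norm(self.p - np.asarray(robot_xy), axis=1)
                self.w *= np.exp(-0.5 * ((d_part - d_meas) / sigma) ** 2)
                self.w /= self.w.sum()
                # Resample with jitter when the effective sample size collapses.
                if 1.0 / np.sum(self.w ** 2) < len(self.w) / 2:
                    idx = np.random.choice(len(self.w), len(self.w), p=self.w)
                    self.p = self.p[idx] + np.random.normal(0, 0.1, self.p.shape)
                    self.w[:] = 1.0 / len(self.w)

            def estimate(self):
                return np.average(self.p, axis=0, weights=self.w)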

  19. Navigation Algorithm Using Fuzzy Control Method in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Cviklovič Vladimír

    2016-03-01

    Full Text Available Navigation methods are being continuously developed worldwide. The aim of this article is to test a fuzzy control algorithm for track finding in mobile robotics. The autonomous mobile robot EN20 was designed to test its behaviour. The odometry navigation method was used. The benefits of fuzzy control are evident in the mobile robot's behaviour; they are obtained when several physical variables are controlled at the same time on the basis of several input variables. In our case, there are two input variables - heading angle and distance - and two output variables - the angular velocities of the left and right wheels. The autonomous mobile robot thus moves with human-like logic.
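
    A hand-rolled sketch of a controller with that input/output structure (heading error and distance in, left and right wheel angular velocities out), using triangular memberships and weighted-average defuzzification. The membership ranges, rule consequents and gains are purely illustrative assumptions and are not taken from the article.

        def tri(x, a, b, c):
            # Triangular membership function with peak at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_wheel_speeds(heading_error_deg, distance_m, base=2.0):
            left_err  = tri(heading_error_deg, -90.0, -45.0, 0.0)   # target lies to the left
            centred   = tri(heading_error_deg, -20.0, 0.0, 20.0)
            right_err = tri(heading_error_deg, 0.0, 45.0, 90.0)     # target lies to the right
            near_goal = max(0.0, min(1.0, 1.0 - distance_m))        # close to the target

            rules = [  # (firing strength, left wheel speed, right wheel speed)
                (left_err,  0.5 * base, 1.5 * base),   # turn left: slow the left wheel
                (centred,   base,       base),         # drive straight
                (right_err, 1.5 * base, 0.5 * base),   # turn right: slow the right wheel
                (near_goal, 0.3 * base, 0.3 * base),   # slow down near the goal
            ]
            total = sum(w for w, _, _ in rules) or 1.0
            left  = sum(w * l for w, l, _ in rules) / total   # weighted-average defuzzification
            right = sum(w * r for w, _, r in rules) / total
            return left, right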

  20. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation

    Directory of Open Access Journals (Sweden)

    Xin Yuan

    2017-05-01

    Full Text Available In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of SLAM-based navigation for underwater robotics with low computational costs. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Hereby, the prediction and update states (which exist as well in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. Concerning the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map updating on loop closure.
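
    The augmentation stage mentioned above corresponds, in the standard EKF-SLAM formulation, to appending a newly observed landmark to the state vector and expanding the covariance with the Jacobians of the initialisation function. The sketch below shows that textbook step (not necessarily the paper's exact AEKF variant); the state layout [x, y, heading, landmarks...] is an assumption.

        import numpy as np

        def augment_with_landmark(x, P, z_range, z_bearing, meas_cov):
            # Initialise the landmark position from the robot pose and the range/bearing measurement.
            px, py, theta = x[0], x[1], x[2]
            lx = px + z_range * np.cos(theta + z_bearing)
            ly = py + z_range * np.sin(theta + z_bearing)

            # Jacobians of the new landmark w.r.t. the state (Gx) and the measurement (Gz).
            Gx = np.zeros((2, len(x)))
            Gx[0, 0] = 1.0
            Gx[1, 1] = 1.0
            Gx[0, 2] = -z_range * np.sin(theta + z_bearing)
            Gx[1, 2] =  z_range * np.cos(theta + z_bearing)
            Gz = np.array([
                [np.cos(theta + z_bearing), -z_range * np.sin(theta + z_bearing)],
                [np.sin(theta + z_bearing),  z_range * np.cos(theta + z_bearing)],
            ])

            x_aug = np.concatenate([x, [lx, ly]])
            P_ll = Gx @ P @ Gx.T + Gz @ meas_cov @ Gz.T   # covariance of the new landmark
            P_lx = Gx @ P                                 # cross-covariance with the old state
            P_aug = np.block([[P, P_lx.T], [P_lx, P_ll]])
            return x_aug, P_aug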

  1. Evolutionary programming-based univector field navigation method for fast mobile robots.

    Science.gov (United States)

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot's orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation, which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and with obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out on a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.

  2. Autonomous navigation system for mobile robots of inspection

    International Nuclear Information System (INIS)

    Angulo S, P.; Segovia de los Rios, A.

    2005-01-01

    One of the goals in robotics is the protection of human personnel who work in dangerous or difficult-to-access areas. This is the case in the nuclear industry, where there are areas that by their very nature are inaccessible to human personnel, such as areas with high radiation levels or high temperatures; in these cases an inspection system is indispensable that can carry out a sampling of the area in order to determine whether it is accessible to human personnel. In such situations it is possible to use an inspection system based on a mobile robot, preferably with autonomous navigation, to carry out the inspection, thereby avoiding the exposure of human personnel. The present work proposes an autonomous navigation model for a mobile robot Pioneer 2-D Xe based on a wall-following algorithm using the paradigm of fuzzy logic. (Author)

  3. Mobile Robot Navigation Based on Q-Learning Technique

    Directory of Open Access Journals (Sweden)

    Lazhar Khriji

    2011-03-01

    Full Text Available This paper shows how the Q-learning approach can be used successfully to deal with the problem of mobile robot navigation. In real situations where a large number of obstacles are involved, the normal Q-learning approach encounters two major problems due to the excessively large state space. First, learning the Q-values in tabular form may be infeasible because of the excessive amount of memory needed to store the table. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. In this paper, we propose a navigation approach for mobile robots in which prior knowledge is used within Q-learning. We address the issue of individual behavior design using fuzzy logic. The behavior-based navigation strategy reduces the complexity of the navigation problem by dividing it into small actions that are easier to design and implement. The Q-learning algorithm is applied to coordinate between these behaviors, which greatly reduces learning convergence time. Simulation and experimental results confirm the convergence to the desired results in terms of saved time and computational resources.
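
    The coordination idea can be illustrated with a tabular Q-learning update in which the actions are the behaviours themselves, which keeps the table small. This is a generic sketch under assumed class and parameter names, not the authors' implementation.

        import random
        from collections import defaultdict

        class BehaviourCoordinator:
            # Tabular Q-learning over a discretised state; actions are whole behaviours
            # (e.g. goal seeking, obstacle avoidance, wall following).
            def __init__(self, behaviours, alpha=0.1, gamma=0.9, epsilon=0.1):
                self.q = defaultdict(float)
                self.behaviours = behaviours
                self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

            def choose(self, state):
                if random.random() < self.epsilon:               # exploration
                    return random.choice(self.behaviours)
                return max(self.behaviours, key=lambda b: self.q[(state, b)])

            def learn(self, state, behaviour, reward, next_state):
                # Standard Q-learning temporal-difference update.
                best_next = max(self.q[(next_state, b)] for b in self.behaviours)
                td_target = reward + self.gamma * best_next
                self.q[(state, behaviour)] += self.alpha * (td_target - self.q[(state, behaviour)])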

  4. Integrated navigation and control software system for MRI-guided robotic prostate interventions.

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S; DiMaio, Simon P; Gobbi, David G; Csoma, Csaba; Mewes, Philip W; Fichtinger, Gabor; Tempany, Clare M; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called "workphases" that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location is specified. Copyright 2009 Elsevier Ltd. All rights reserved.

  5. Integrated navigation and control software system for MRI-guided robotic prostate interventions

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; DiMaio, Simon P.; Gobbi, David G.; Csoma, Csaba; Mewes, Philip W.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called “workphases” that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location is specified. PMID:19699057

  6. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  7. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
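
    "Navigation via classification" means a small CNN maps one image to a distribution over discrete heading directions. A minimal sketch of such a classifier in PyTorch is shown below; the layer sizes, number of heading classes and training setup are illustrative assumptions, not the architecture from the paper.

        import torch
        import torch.nn as nn

        class HeadingClassifier(nn.Module):
            # Maps one (uncalibrated) spherical image to logits over discrete heading directions.
            def __init__(self, n_headings=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_headings)

            def forward(self, x):                      # x: (batch, 3, H, W)
                f = self.features(x).flatten(1)
                return self.classifier(f)              # logits over heading classes

        # Training sketch: cross-entropy on (image, heading label) pairs.
        model = HeadingClassifier()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()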

  8. Empirical evaluation of a practical indoor mobile robot navigation method using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2010-01-01

    This video presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as occupancy grids by a laser range finder to represent local information about partial areas. The global topological map is used to indicate the connectivity of the ‘places-of-interest’ in the environment and the interconnectivity of the local maps. Visual tags on the ceiling detected by the robot provide valuable information and contribute to reliable localization. … The method is implemented successfully on a physical robot in a hospital environment, which provides a practical solution for indoor navigation.

  9. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    Science.gov (United States)

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they are moving anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability of each robot to detect other moving objects in the same free navigation space and to determine the perceived size, velocity and direction of those objects. Based on these assumptions, a traffic priority language is needed for each robot, making it able to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a set of primitive traffic priority symbols (an alphabet) and rules which compose patterns of corridors for the application of the traffic priority rules.

  10. Neural Network Based Reactive Navigation for Mobile Robot in Dynamic Environment

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.; Ripel, T.

    2013-01-01

    Roč. 198, č. 2013 (2013), s. 108-113 ISSN 1012-0394 Institutional research plan: CEZ:AV0Z20760514 Institutional support: RVO:61388998 Keywords : mobile robot * reactive navigation * artificial neural networks Subject RIV: JD - Computer Applications, Robotics

  11. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    Science.gov (United States)

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robot to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  12. FroboMind, proposing a conceptual architecture for agricultural field robot navigation

    DEFF Research Database (Denmark)

    Jensen, Kjeld; Bøgild, Anders; Nielsen, Søren Hundevadt

    2011-01-01

    The aim of this work is to propose a conceptual system architecture, the Field Robot Cognitive System Architecture (FroboMind), which can provide the flexibility and extensibility required for further research and development within cognition-based navigation of plant nursing robots.

  13. Image-based navigation for a robotized flexible endoscope

    NARCIS (Netherlands)

    van der Stap, N.; Slump, Cornelis H.; Broeders, Ivo Adriaan Maria Johannes; van der Heijden, Ferdinand; Luo, Xiongbiao; Reichl, Tobias; Mirota, Daniel; Soper, Timothy

    2014-01-01

    Robotizing flexible endoscopy enables image-based control of endoscopes. Especially during high-throughput procedures, such as a colonoscopy, navigation support algorithms could improve procedure turnaround and ergonomics for the endoscopist. In this study, we have developed and implemented a

  14. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401

  15. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
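
    The heading-direction navigation step described above reduces to steering from the deviation between the heading measured by the compass and the heading demanded by the odor/sound localization strategy. A minimal sketch of such a correction law is given below; the gains and the speed-reduction rule are illustrative assumptions, not the authors' controller.

        import math

        def heading_correction(current_deg, expected_deg, v_max=0.4, k_turn=0.02):
            # Wrap the heading error to [-180, 180) degrees.
            error = (expected_deg - current_deg + 180.0) % 360.0 - 180.0
            angular_velocity = k_turn * error
            # Reduce forward speed while the deviation is large, drive at v_max when aligned.
            linear_velocity = v_max * max(0.0, math.cos(math.radians(error)))
            return linear_velocity, angular_velocity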

  16. Particle Filter for Fault Diagnosis and Robust Navigation of Underwater Robot

    DEFF Research Database (Denmark)

    Zhao, Bo; Skjetne, Roger; Blanke, Mogens

    2014-01-01

    A particle filter based robust navigation with fault diagnosis is designed for an underwater robot, where 10 failure modes of sensors and thrusters are considered. The nominal underwater robot and its anomaly are described by a switching-mode hidden Markov model. By extensively running a particle filter on the model, the fault diagnosis and robust navigation are achieved. Closed-loop full-scale experimental results show that the proposed method is robust, can diagnose faults effectively, and can provide good state estimation even in cases where multiple faults occur. Comparing with other methods...

  17. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, joint research between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems. The inner structure of the latter, complete with their functionalities, is also given in detail.

  18. Dynamic Parameter Update for Robot Navigation Systems through Unsupervised Environmental Situational Analysis

    OpenAIRE

    Shantia, Amirhossein; Bidoia, Francesco; Schomaker, Lambert; Wiering, Marco

    2017-01-01

    A robot’s local navigation is often done through forward simulation of robot velocities and measuring the possible trajectories against safety, distance to the final goal and the generated path of a global path planner. Then, the computed velocities vector for the winning trajectory is executed on the robot. This process is done continuously through the whole navigation process and requires an extensive amount of processing. This only allows for a very limited sampling space. In this paper, w...

  19. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique

    Directory of Open Access Journals (Sweden)

    Jun-Qiang Wang

    2017-01-01

    Conclusions: Accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible.

  20. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    Directory of Open Access Journals (Sweden)

    Xingguang Duan

    2018-01-01

    Full Text Available In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.

  1. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    Science.gov (United States)

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948
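
    The registration between subsystems described in the two records above ultimately rests on estimating rigid transforms from corresponding fiducial points and chaining them (robot frame to optical tracker to image). The sketch below shows the standard closed-form least-squares rigid transform via SVD, which is also the core step of each ICP iteration; it is a generic illustration, not the quaternion-based formulation used in the paper, and the frame names are assumptions.

        import numpy as np

        def rigid_transform(src, dst):
            # Rotation R and translation t that best map corresponding 3-D points src -> dst.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against a reflection solution
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        def to_image_frame(p_robot, R_rt, t_rt, R_ti, t_ti):
            # Chain the subsystem transforms: robot frame -> tracker frame -> image frame.
            p_tracker = R_rt @ p_robot + t_rt
            return R_ti @ p_tracker + t_ti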

  2. Real-Time Motion Planning and Safe Navigation in Dynamic Multi-Robot Environments

    National Research Council Canada - National Science Library

    Bruce, James R

    2006-01-01

    .... While motion planning has been used for high level robot navigation, or limited to semi-static or single-robot domains, it has often been dismissed for the real-time low-level control of agents due...

  3. Maps managing interface design for a mobile robot navigation governed by a BCI

    International Nuclear Information System (INIS)

    Auat Cheein, Fernando A; Carelli, Ricardo; Celeste, Wanderley Cardoso; Freire Bastos, Teodiano; Di Sciascio, Fernando

    2007-01-01

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs the mobile robot's movements. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is run in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While the SLAM is running, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.

  4. Image-Based Particle Filtering For Robot Navigation In A Maize Field

    NARCIS (Netherlands)

    Hiremath, S.; Evert, van F.K.; Heijden, van der G.W.A.M.; Braak, ter C.J.F.; Stein, A.

    2012-01-01

    Autonomous navigation of a robot in an agricultural field is a challenge as the robot is in an environment with many sources of noise. This includes noise due to uneven terrain, varying shapes, sizes and colors of the plants, imprecise sensor measurements and effects due to wheel-slippage. The

  5. Reactive, Safe Navigation for Lunar and Planetary Robots

    Science.gov (United States)

    Utz, Hans; Ruland, Thomas

    2008-01-01

    When humans return to the Moon, astronauts will be accompanied by robotic helpers. Enabling robots to safely operate near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts on the lunar surface requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow it to run terrain analysis and obstacle avoidance at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module, enforcing the safety of the chosen actions. The key components of the system are discussed and test results are presented.

  6. Maps managing interface design for a mobile robot navigation governed by a BCI

    Energy Technology Data Exchange (ETDEWEB)

    Auat Cheein, Fernando A [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Carelli, Ricardo [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Celeste, Wanderley Cardoso [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Freire Bastos, Teodiano [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Di Sciascio, Fernando [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina)

    2007-11-15

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs the mobile robot's movements. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is run in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While the SLAM is running, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.

  7. An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2018-03-01

    Full Text Available There are many tasks that require clear and easily recognizable images in the field of underwater robotics and marine science, such as underwater target detection and identification for robot navigation and obstacle avoidance. However, water turbidity makes the underwater image quality too low for recognition. This paper proposes the use of the dark channel prior model for underwater environment recognition, in which underwater reflection models are used to obtain enhanced images. The proposed approach achieves very good performance and multi-scene robustness by combining the dark channel prior model with the underwater diffuse model. Experimental results are given to show the effectiveness of the dark channel prior model in underwater scenarios.
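
    For reference, the basic dark channel prior computation (per-pixel minimum over colour channels followed by a local minimum filter, a transmission estimate, and inversion of the scattering model) can be sketched as follows. This is the generic He et al. style formulation with assumed patch size and omega values, not the paper's underwater-specific variant.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(image, patch=15):
            # image: float RGB array in [0, 1], shape (H, W, 3).
            min_rgb = image.min(axis=2)
            return minimum_filter(min_rgb, size=patch)

        def estimate_A_and_t(image, patch=15, omega=0.95):
            # Back-scatter light A from the brightest dark-channel pixels,
            # then transmission t(x) = 1 - omega * dark_channel(I / A).
            dc = dark_channel(image, patch)
            flat = dc.ravel()
            idx = np.argsort(flat)[-max(1, flat.size // 1000):]
            A = image.reshape(-1, 3)[idx].max(axis=0) + 1e-6
            t = 1.0 - omega * dark_channel(image / A, patch)
            return A, t

        def recover(image, A, t, t0=0.1):
            # Invert the scattering model J = (I - A) / max(t, t0) + A.
            t = np.clip(t, t0, 1.0)[..., None]
            return np.clip((image - A) / t + A, 0.0, 1.0)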

  8. Volunteers Oriented Interface Design for the Remote Navigation of Rescue Robots at Large-Scale Disaster Sites

    Science.gov (United States)

    Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi

    This paper aims at constructing an efficient interface, similar to those widely used in daily life, to fulfill the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force feedback steering control and a wall of six monitors. It provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor. It provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Experimental results show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface achieves high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface is suited to exact navigation in complex structures, while imposing little tension on operators. The two interfaces are designed so that the operator can switch between them at any time, providing a combined, efficient navigation method.

  9. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  10. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

    Full Text Available Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.

  11. Posture estimation for autonomous weeding robots navigation in nursery tree plantations

    DEFF Research Database (Denmark)

    Khot, Law Ramchandra; Tang, Lie; Blackmore, Simon

    2005-01-01

    errors of the system, in the x and y directions, for all four lines. Further, the errors were observed to be larger in the robot's direction of travel. When the robot was navigated through the poles, the positioning accuracy of the system increased after filtering. The accuracy

  12. Path Planning and Navigation for Mobile Robots in a Hybrid Sensor Network without Prior Location Information

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-03-01

    Full Text Available In a hybrid wireless sensor network with mobile and static nodes, which have no prior geographical knowledge, successful navigation for mobile robots is one of the main challenges. In this paper, we propose two novel navigation algorithms for outdoor environments, which permit robots to travel from one static node to another along a planned path in the sensor field, namely the RAC and the IMAP algorithms. Using this, the robot can navigate without the help of a map, GPS or extra sensor modules, only using the received signal strength indication (RSSI and odometry. Therefore, our algorithms have the advantage of being cost-effective. In addition, a path planning algorithm to schedule mobile robots' travelling paths is presented, which focuses on shorter distances and robust paths for robots by considering the RSSI-Distance characteristics. The simulations and experiments conducted with an autonomous mobile robot show the effectiveness of the proposed algorithms in an outdoor environment.
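
    The record above describes navigating between static nodes using only RSSI and odometry. The fragment below sketches the two ingredients in their simplest form: a log-distance path-loss conversion from RSSI to range, and a greedy heading update that keeps the current heading while the signal strengthens and turns otherwise. The path-loss parameters and the turning step are illustrative assumptions; this is not the RAC or IMAP algorithm itself.

        # Sketch: homing on a static node using only RSSI and odometry (illustrative).
        import math

        def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
            # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d).
            return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

        def steer_towards_node(prev_rssi, curr_rssi, heading, step=math.radians(20)):
            # Greedy rule: keep heading while the signal gets stronger, otherwise turn.
            return heading if curr_rssi >= prev_rssi else heading + step

        d1 = rssi_to_distance(-60.0)
        d2 = rssi_to_distance(-58.0)
        print(f"estimated range: {d1:.1f} m -> {d2:.1f} m")
        print("new heading [rad]:", steer_towards_node(-60.0, -58.0, heading=0.0))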

  13. Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Li Wang

    2018-01-01

    Full Text Available In order to improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, including a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model is used to recognize different regions in rooms and corridors, the second one is used to determine where the robot should be rotated, and the third one is used to decide the walking side of corridors or aisles in the room. Furthermore, the “side” recognition model can also correct the motion of robots in real time, according to which accurate arrival to the specific target is guaranteed. Moreover, semantic navigation is accomplished using only one sensor (a camera. Several experiments are conducted in a real indoor environment, demonstrating the effectiveness and robustness of the proposed perception framework.

  14. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    Science.gov (United States)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  15. Autonomous navigation system for mobile robots of inspection; Sistema de navegacion autonoma para robots moviles de inspeccion

    Energy Technology Data Exchange (ETDEWEB)

    Angulo S, P. [ITT, Metepec, Estado de Mexico (Mexico); Segovia de los Rios, A. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: pedrynteam@hotmail.com

    2005-07-01

    One of the goals in robotics is to protect the human personnel who work in dangerous or difficult-to-access areas. Such is the case in the nuclear industry, where there are areas that, by their very nature, are inaccessible to human personnel, such as areas with high radiation levels or high temperatures. In these cases an inspection system capable of sampling the area is indispensable in order to determine whether the area can be made accessible to human personnel. In such a situation an inspection system based on a mobile robot, preferably with autonomous navigation, can be used to carry out the inspection, thereby avoiding the exposure of human personnel. The present work proposes an autonomous navigation model for a Pioneer 2-DXe mobile robot based on a wall-following algorithm using the fuzzy logic paradigm. (Author)

  16. Exterior navigation of an inspection robot by means of global orientation feedback

    International Nuclear Information System (INIS)

    Segovia de los R, A.; Bucio V, F.; Garduno G, M.

    2008-01-01

    The objective of this article is to present an inspection system in which a mobile robot navigates outdoors using feedback of its instantaneous orientation with respect to a global reference throughout the displacement. The robot follows commands from a teleoperator, who indicates the desired headings through the operation console; the robot executes them using information provided by an electronic compass. The mobile robot used in the experiments is a Pioneer 3-AT, which is equipped with the series of sensors required for more autonomous operation. The electronic compass provides geographical information coded in SPI format, so an inexpensive general-purpose microcontroller (μC) was used to convert the information to the RS-232 format natively used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program generates the commands for the robot navigation control and displays a graphical user interface used to receive the operator's orders. This research is part of a more ambitious project intended to provide an inspection and monitoring system for sites where high radiation levels may exist, for which an outdoor navigation system could be very useful. The complete system will include, in addition to the robot's own sensors, a number of sensors appropriate to the variables to be monitored. The resulting measurements will be displayed in real time in the graphical user interface, thanks to bidirectional wireless communication between the operation station and the mobile robot. (Author)

  17. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct the robot's erroneous coordinates with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation with this control system are presented.
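
    The abstract above combines drifting visual/wheel odometry with absolute corrections from artificial landmarks. The snippet below sketches only the correction step for a planar robot: when a marker with known map coordinates is seen at a measured range and bearing, the accumulated position estimate is replaced by the position implied by that observation. The marker table and measurement model are hypothetical.

        # Sketch: resetting drifting odometry when a known artificial landmark is seen.
        import math

        LANDMARKS = {7: (2.0, 3.0)}   # hypothetical marker id -> known (x, y) in the map frame

        def correct_pose(odom_pose, marker_id, rng, bearing):
            """odom_pose = (x, y, theta); rng/bearing = camera observation of the marker.
            Returns a corrected (x, y, theta), keeping the odometric heading."""
            x, y, theta = odom_pose
            lx, ly = LANDMARKS[marker_id]
            # The robot must sit where the marker appears at the measured range/bearing.
            x_corr = lx - rng * math.cos(theta + bearing)
            y_corr = ly - rng * math.sin(theta + bearing)
            return (x_corr, y_corr, theta)

        # Robot believes it is at (1.6, 2.9) and sees marker 7 one metre away, 45 deg to its left.
        print(correct_pose((1.6, 2.9, 0.0), 7, rng=1.0, bearing=math.radians(45)))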

  18. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  19. Model-based visual navigation of a mobile robot

    International Nuclear Information System (INIS)

    Roening, J.

    1992-08-01

    The thesis considers the problems of visual guidance of a mobile robot. A visual navigation system is formalized, consisting of four basic components: world modelling, navigation sensing, navigation and action. According to this formalization an experimental system was designed and realized, enabling real-world navigation experiments. A priori knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements. Two world models were developed. The first was a map-based model especially designed for low-level description of indoor environments. The other was a higher-level, more symbolic representation of the surroundings utilizing the spatial graph concept. Two passive vision approaches were developed to extract navigation information. With passive three-camera stereovision a sparse depth map of the scene was produced. Another approach employed a fish-eye lens to map the entire scene of the surroundings without camera scanning. The local path planning of the system is supported by a three-dimensional scene interpreter providing a partial understanding of scene contents. The interpreter consists of data-driven low-level stages and a model-driven high-level stage. Experiments were carried out in a simulator and on a test vehicle constructed in the laboratory. The test vehicle successfully navigated indoors.

  20. Implementation of a Mobile Robot Platform Navigating in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Belaidi Hadjira

    2017-01-01

    Full Text Available Currently, the problems faced by autonomous wheeled mobile robots in unknown environments present a great challenge. Obstacle avoidance and path planning are the backbone of autonomous control, as they enable the robot to reach its destination without collision. Dodging obstacles in dynamic and uncertain environments is the most complex part of the obstacle avoidance and path planning tasks. This work deals with the implementation of a simple approach to static and dynamic obstacle avoidance. The robot starts by executing a collision-free optimal path loaded into its controller; it then uses its sensors to avoid unexpected obstacles which may appear on that path during navigation.

  1. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  2. ANFIS -Based Navigation for HVAC Service Robot with Image Processing

    International Nuclear Information System (INIS)

    Salleh, Mohd Zoolfadli Md; Rashid, Nahrul Khair Alang Md; Mustafah, Yasir Mohd

    2013-01-01

    In this paper, we present ongoing work on the autonomous navigation of a mobile service robot for Heating, Ventilation and Air Conditioning (HVAC) ducting. A CCD camera mounted on the front end of our robot is used to analyze the duct openings (blob analysis) in order to differentiate them from other landmarks (blower fans, air outlets, etc.). The distance between the robot and the duct openings is measured using an ultrasonic sensor. The chosen controller is ANFIS, whose architecture accepts three inputs (recognition of duct openings, robot position and distance), while the output is the maneuver direction (left or right). Forty-five membership functions are created, producing 46 training epochs. In order to demonstrate the functionality of the system, a working prototype was developed and tested inside HVAC ducting in the ROBOCON Lab, IIUM.

  3. Sensor fusion for mobile robot navigation

    International Nuclear Information System (INIS)

    Kam, M.; Zhu, X.; Kalata, P.

    1997-01-01

    The authors review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. Integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. The review describes integration techniques in two categories: low-level fusion is used for direct integration of sensory data, resulting in parameter and state estimates; high-level fusion is used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules. The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior based algorithms and approaches that borrow from information theory, Dempster-Shafer reasoning, fuzzy logic and neural networks. It points to several further-research needs, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion

  4. 3-D world modeling based on combinatorial geometry for autonomous robot navigation

    International Nuclear Information System (INIS)

    Goldstein, M.; Pin, F.G.; De Saussure, G.; Weisbin, C.R.

    1987-01-01

    In applications of robotics to surveillance and mapping at nuclear facilities the scene to be described is three-dimensional. Using range data a 3-D model of the environment can be built. First, each measured point on the object surface is surrounded by a solid sphere with a radius determined by the range to that point. Then the 3-D shapes of the visible surfaces are obtained by taking the (Boolean) union of the spheres. Using this representation distances to boundary surfaces can be efficiently calculated. This feature is particularly useful for navigation purposes. The efficiency of the proposed approach is illustrated by a simulation of a spherical robot navigating in a 3-D room with static obstacles
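
    A convenient property of the union-of-spheres representation described above is that the distance from any query point to the modelled surfaces is simply the minimum of the per-sphere signed distances, which is what makes the clearance checks for navigation cheap. A minimal sketch with made-up sphere data:

        # Sketch: clearance query against a union-of-spheres world model (made-up data).
        import math

        spheres = [((1.0, 0.0, 0.5), 0.30),    # ((cx, cy, cz), radius), one per measured point
                   ((2.0, 1.0, 0.4), 0.25),
                   ((0.5, 2.0, 0.6), 0.35)]

        def distance_to_model(p):
            # Signed distance to the union of solid spheres (negative means inside an obstacle).
            return min(math.dist(p, c) - r for c, r in spheres)

        print(distance_to_model((0.0, 0.0, 0.5)))   # clearance available for path checking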

  5. Navigation system for robot-assisted intra-articular lower-limb fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja

    2016-10-01

    In the surgical treatment of lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of bone fragments. However, optimal correction in a 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction of the fractures is then performed by our robotic system based on the virtual reduction. 3D models of the bone fragments are segmented from the CT scan. Fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction of the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker enabling real-time 3D guidance. The motion commands for the robot connected to the bone fragment are generated, and the fracture is physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each one of them was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). Corresponding physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text] when the robot reduced the fracture. The experimental outcome demonstrates the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.

  6. Performance Improvement of Inertial Navigation System by Using Magnetometer with Vehicle Dynamic Constraints

    Directory of Open Access Journals (Sweden)

    Daehee Won

    2015-01-01

    Full Text Available A navigation algorithm is proposed to increase the inertial navigation performance of a ground vehicle using magnetic measurements and dynamic constraints. The navigation solutions are estimated based on inertial measurements such as acceleration and angular velocity measurements. To improve the inertial navigation performance, a three-axis magnetometer is used to provide the heading angle, and nonholonomic constraints (NHCs are introduced to increase the correlation between the velocity and the attitude equation. The NHCs provide a velocity feedback to the attitude, which makes the navigation solution more robust. Additionally, an acceleration-based roll and pitch estimation is applied to decrease the drift when the acceleration is within certain boundaries. The magnetometer and NHCs are combined with an extended Kalman filter. An experimental test was conducted to verify the proposed method, and a comprehensive analysis of the performance in terms of the position, velocity, and attitude showed that the navigation performance could be improved by using the magnetometer and NHCs. Moreover, the proposed method could improve the estimation performance for the position, velocity, and attitude without any additional hardware except an inertial sensor and magnetometer. Therefore, this method would be effective for ground vehicles, indoor navigation, mobile robots, vehicle navigation in urban canyons, or navigation in any global navigation satellite system-denied environment.
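
    The two aids described in this abstract, magnetometer heading and nonholonomic constraints, reduce to very small pieces of code in a planar example. The sketch below shows a heading computed from a level magnetometer and the NHC pseudo-measurement (lateral body velocity should be near zero) that would be fed to the Kalman filter as an innovation. Axis and sign conventions depend on the sensor frame and are assumed here; tilt compensation and the full EKF are omitted.

        # Sketch: magnetometer heading and a nonholonomic-constraint (NHC) innovation.
        import numpy as np

        def mag_heading(mx, my):
            # Heading from a level two-axis magnetometer reading (frame convention assumed).
            return np.arctan2(my, mx)

        def nhc_innovation(v_nav, heading):
            # Rotate the navigation-frame velocity into the body frame; the lateral
            # component should be ~0 for a non-slipping ground vehicle.
            c, s = np.cos(heading), np.sin(heading)
            v_body = np.array([[c, s], [-s, c]]) @ np.asarray(v_nav, dtype=float)
            return 0.0 - v_body[1]

        h = mag_heading(0.2, 0.1)
        print("heading [deg]:", np.degrees(h))
        print("NHC innovation [m/s]:", nhc_innovation([1.0, 0.3], h))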

  7. Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network

    Directory of Open Access Journals (Sweden)

    Guan-Chun Luh

    2007-04-01

    Full Text Available This paper proposes a potential field immune network (PFIN) for dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collisions for a robot moving in the time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.

  8. A PSO-Optimized Reciprocal Velocity Obstacles Algorithm for Navigation of Multiple Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ziyad Allawi

    2015-03-01

    Full Text Available In this paper, a new optimization method for the Reciprocal Velocity Obstacles (RVO) is proposed. It uses the well-known Particle Swarm Optimization (PSO) for navigation control of multiple mobile robots with kinematic constraints. The RVO is used for collision avoidance between the robots, while PSO is used to choose the best path for the robot maneuver to avoid colliding with other robots and to get to its goal faster. This method was applied on 24 mobile robots facing each other. Simulation results have shown that this method outperforms the ordinary RVO when the path is heuristically chosen.
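
    The combination described above, PSO searching over candidate velocities while a velocity-obstacle test rules out colliding ones, can be sketched compactly. The code below uses a simplified constant-velocity collision check rather than the full reciprocal formulation, and the swarm size, weights, horizon and penalty are illustrative assumptions.

        # Sketch: picking a velocity with a small PSO, penalising predicted collisions.
        import random

        def collides(v, p_rel, v_other, radius=0.5, horizon=3.0, dt=0.1):
            # Constant-velocity prediction: does the neighbour come within the safety radius?
            t = 0.0
            while t < horizon:
                dx = p_rel[0] + (v_other[0] - v[0]) * t
                dy = p_rel[1] + (v_other[1] - v[1]) * t
                if dx * dx + dy * dy < radius * radius:
                    return True
                t += dt
            return False

        def cost(v, v_pref, p_rel, v_other):
            # Stay close to the preferred (goal-directed) velocity, avoid collisions.
            goal_term = (v[0] - v_pref[0]) ** 2 + (v[1] - v_pref[1]) ** 2
            return goal_term + (100.0 if collides(v, p_rel, v_other) else 0.0)

        def pso_velocity(v_pref, p_rel, v_other, n=20, iters=30, v_max=1.0):
            parts = [[random.uniform(-v_max, v_max), random.uniform(-v_max, v_max)] for _ in range(n)]
            vels = [[0.0, 0.0] for _ in range(n)]
            pbest = [p[:] for p in parts]
            gbest = min(pbest, key=lambda p: cost(p, v_pref, p_rel, v_other))[:]
            for _ in range(iters):
                for i, p in enumerate(parts):
                    for d in range(2):   # standard PSO update: inertia + cognitive + social terms
                        vels[i][d] = (0.7 * vels[i][d]
                                      + 1.5 * random.random() * (pbest[i][d] - p[d])
                                      + 1.5 * random.random() * (gbest[d] - p[d]))
                        p[d] = max(-v_max, min(v_max, p[d] + vels[i][d]))
                    if cost(p, v_pref, p_rel, v_other) < cost(pbest[i], v_pref, p_rel, v_other):
                        pbest[i] = p[:]
                gbest = min(pbest, key=lambda p: cost(p, v_pref, p_rel, v_other))[:]
            return gbest

        # Preferred velocity heads straight for the goal; a neighbour approaches head-on.
        print(pso_velocity(v_pref=(1.0, 0.0), p_rel=(3.0, 0.0), v_other=(-1.0, 0.0)))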

  9. People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Tasaki

    2011-01-01

    Full Text Available Navigation robots must single out partners requiring navigation and move in cluttered environments where people walk around. Developing such robots requires two different kinds of people detection: detecting partners and detecting all moving people around the robot. For detecting partners, we design divided spaces based on spatial relationships and sensing ranges. By mapping the friendliness of each divided space based on the stimuli from multiple sensors, so as to detect people who actively call the robot, the robot detects partners in the space with the highest friendliness. For detecting moving people, we regard objects’ floor boundary points in an omnidirectional image as obstacles. We classify obstacles as moving people by comparing the movement of each point with the robot's movement using odometry data, dynamically changing the detection thresholds. Our robot detected 95.0% of partners while standing by and interacting with people, and detected 85.0% of moving people while moving, which was four times higher than previous methods achieved.

  10. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  11. Learning probabilistic features for robotic navigation using laser sensors.

    Directory of Open Access Journals (Sweden)

    Fidel Aznar

    Full Text Available SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  12. Learning probabilistic features for robotic navigation using laser sensors.

    Science.gov (United States)

    Aznar, Fidel; Pujol, Francisco A; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used.

  13. Navigation neuro-floue d'un robot mobile dans un environment ...

    African Journals Online (AJOL)

    Neuro-fuzzy navigation of a mobile robot in an unknown environment with reinforcement learning; Navigation neuro-floue d'un robot mobile dans un environnement inconnu avec un apprentissage par renforcement. W Nouibat, Z A Foitih, F A Haouari. No abstract available. Technologies Avancées Vol. 16, 2003: pp. 19-30.

  14. Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors

    Science.gov (United States)

    Calero, D.; Fernandez, E.; Parés, M. E.

    2017-11-01

    This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics to speed up their daily research in the kinematic geodesy, indoor navigation and indoor positioning fields. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in the CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible. This means that it is possible to incorporate new navigation algorithms or sensors at no maintenance cost; only the development effort required to create such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. The system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, the system may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.

  15. Nonparametric Online Learning Control for Soft Continuum Robot: An Enabling Technique for Effective Endoscopic Navigation

    Science.gov (United States)

    Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong

    2017-01-01

    Abstract Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation can be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions about the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot’s structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot’s workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567

  16. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    Science.gov (United States)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the

  17. A Sensor Based Navigation Algorithm for a Mobile Robot using the DVFF Approach

    Directory of Open Access Journals (Sweden)

    A. OUALID DJEKOUNE

    2009-06-01

    Full Text Available Autonomous mobile robots often operate in environments for which prior maps are incomplete or inaccurate. They require the safe execution of collision-free motion to a goal position. This paper addresses a complete navigation method for a mobile robot that moves in an unknown environment. A novel method called DVFF is proposed, combining the Virtual Force Field (VFF) obstacle avoidance approach with global path planning based on the D* algorithm. While D* generates global path information towards a goal position, the VFF local controller generates the admissible trajectories that ensure safe robot motion. Results and analysis from a battery of experiments with this new method implemented on an ATRV2 mobile robot are shown.
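
    The local VFF step of the DVFF method can be pictured as summing an attractive pull towards the next waypoint of the global D* path with repulsive pushes from the range readings. The sketch below uses that classical formulation with made-up gains; obstacle bearings are assumed to be already expressed in the world frame for brevity.

        # Sketch of a virtual force field (VFF) step towards a global-path waypoint.
        import math

        def vff_heading(robot, waypoint, readings, k_att=1.0, k_rep=0.4):
            """readings: list of (bearing_rad, range_m) obstacle detections (world frame).
            Returns the commanded heading (rad)."""
            fx = k_att * (waypoint[0] - robot[0])      # attraction towards the waypoint
            fy = k_att * (waypoint[1] - robot[1])
            for bearing, rng in readings:
                if rng <= 0.0:
                    continue
                fx -= k_rep / rng ** 2 * math.cos(bearing)   # repulsion decays with distance
                fy -= k_rep / rng ** 2 * math.sin(bearing)
            return math.atan2(fy, fx)

        # Waypoint 2 m ahead, obstacle 0.8 m away slightly to the left: heading deflects right.
        print(math.degrees(vff_heading((0, 0), (2, 0), [(math.radians(10), 0.8)])))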

  18. 2D navigation and pilotage of an autonomous mobile robot

    International Nuclear Information System (INIS)

    Favre, Patrick

    1989-01-01

    This thesis deals with the navigation and piloting of an autonomous robot in a known or weakly known, unconstrained two-dimensional environment. This involves generating an optimal path to a given goal and then computing the commands needed to follow this path. Several constraints are taken into account (obstacles, geometry and kinematics of the robot, dynamic effects). The first part defines the problem and presents the state of the art. The three following parts present a set of complementary solutions according to the level of knowledge of the environment and the space constraints: - Case of a known environment: generation and following of a trajectory through given path points. - Case of a weakly known environment: coupling of a command module, interacting with the environment perception, with a path planner, allowing fast motion of the robot. - Case of a constrained environment: a planner able to take into account many constraints such as the robot's shape, turning radius limitation, backward motion and orientation. (author) [fr

  19. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversability

    Science.gov (United States)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.

  20. Dynamic Parameter Update for Robot Navigation Systems through Unsupervised Environmental Situational Analysis

    NARCIS (Netherlands)

    Shantia, Amirhossein; Bidoia, Francesco; Schomaker, Lambert; Wiering, Marco

    2017-01-01

    A robot’s local navigation is often done through forward simulation of robot velocities and measuring the possible trajectories against safety, distance to the final goal and the generated path of a global path planner. Then, the computed velocities vector for the winning trajectory is executed on

  1. Exterior navigation of an inspection robot by means of global orientation feedback; Navegacion exterior de un robot de inspeccion mediante retroalimentacion de la orientacion global

    Energy Technology Data Exchange (ETDEWEB)

    Segovia de los R, A.; Bucio V, F. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico); Garduno G, M. [Instituto Tecnologico de Toluca, Av. Instituto Tecnologico s/n, Metepec, Estado de Mexico 52140 (Mexico)]. e-mail: asegovia@nuclear.inin.mx

    2008-07-01

    The objective of this article is to present an inspection system in which a mobile robot navigates outdoors using feedback of its instantaneous orientation with respect to a global reference throughout the displacement. The robot follows commands from a teleoperator, who indicates the desired headings through the operation console; the robot executes them using information provided by an electronic compass. The mobile robot used in the experiments is a Pioneer 3-AT, which is equipped with the series of sensors required for more autonomous operation. The electronic compass provides geographical information coded in SPI format, so an inexpensive general-purpose microcontroller ({mu}C) was used to convert the information to the RS-232 format natively used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program generates the commands for the robot navigation control and displays a graphical user interface used to receive the operator's orders. This research is part of a more ambitious project intended to provide an inspection and monitoring system for sites where high radiation levels may exist, for which an outdoor navigation system could be very useful. The complete system will include, in addition to the robot's own sensors, a number of sensors appropriate to the variables to be monitored. The resulting measurements will be displayed in real time in the graphical user interface, thanks to bidirectional wireless communication between the operation station and the mobile robot. (Author)

  2. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a steering direction in a supervised manner. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can accurately follow the runway centerline outdoors and avoid obstacles in the room. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
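
    As a rough illustration of the end-to-end idea, the sketch below defines a much smaller convolutional network than the 15-layer system described above and runs one supervised training step that maps camera frames to a discrete steering class. The layer sizes, input resolution and three-class label set are assumptions made for the example; it requires PyTorch.

        # Sketch (PyTorch): a small stand-in for an end-to-end image-to-steering network.
        import torch
        import torch.nn as nn

        class SteeringNet(nn.Module):
            def __init__(self, n_classes=3):                  # left / straight / right
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),                  # global pooling avoids size bookkeeping
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):                             # x: (batch, 3, H, W) RGB images
                return self.classifier(self.features(x).flatten(1))

        net = SteeringNet()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        images = torch.randn(8, 3, 120, 160)                  # stand-in for augmented camera frames
        labels = torch.randint(0, 3, (8,))                    # stand-in steering labels
        loss = nn.CrossEntropyLoss()(net(images), labels)
        opt.zero_grad(); loss.backward(); opt.step()          # one end-to-end training step
        print("training loss:", loss.item())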

  3. An Integrated Assessment of Progress in Robotic Perception and Semantic Navigation

    Science.gov (United States)

    2015-09-01

    Craig Lennon, Barry Bodt, Marshal Childers (...Directorate, ARL); Jean Oh, Arne Suppe, Luis Navarro-Serment (National Robotics Engineering Center, Pittsburgh, PA); Robert Dean, Terrence Keegan, Chip Diberardino

  4. Adaptive Control for Autonomous Navigation of Mobile Robots Considering Time Delay and Uncertainty

    Science.gov (United States)

    Armah, Stephen Kofi

    Autonomous control of mobile robots has attracted considerable attention from researchers in the areas of robotics and autonomous systems during the past decades. One of the goals in the field of mobile robotics is development of platforms that robustly operate in given, partially unknown, or unpredictable environments and offer desired services to humans. Autonomous mobile robots need to be equipped with effective, robust and/or adaptive navigation control systems. In spite of enormous reported work on autonomous navigation control systems for mobile robots, achieving the goal above is still an open problem. Robustness and reliability of the controlled system can always be improved. The fundamental issues affecting the stability of the control systems include the undesired nonlinear effects introduced by actuator saturation, time delay in the controlled system, and uncertainty in the model. This research work develops robustly stabilizing control systems by investigating and addressing such nonlinear effects through analysis, simulations, and experiments. The control systems are designed to meet specified transient and steady-state specifications. The systems used for this research are ground (Dr Robot X80SV) and aerial (Parrot AR.Drone 2.0) mobile robots. Firstly, an effective autonomous navigation control system is developed for X80SV using logic control by combining 'go-to-goal', 'avoid-obstacle', and 'follow-wall' controllers. A MATLAB robot simulator is developed to implement this control algorithm and experiments are conducted in a typical office environment. The next stage of the research develops autonomous position (x, y, and z) and attitude (roll, pitch, and yaw) controllers for a quadrotor, and PD-feedback control is used to achieve stabilization. The quadrotor's nonlinear dynamics and kinematics are implemented using a MATLAB S-function to generate the state output. Secondly, the white-box and black-box approaches are used to obtain a linearized

  5. Autonomous Integrated Navigation for Indoor Robots Utilizing On-Line Iterated Extended Rauch-Tung-Striebel Smoothing

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2013-11-01

    Full Text Available In order to reduce the estimated errors of the inertial navigation system (INS)/wireless sensor network (WSN)-integrated navigation for mobile robots indoors, this work proposes an on-line iterated extended Rauch-Tung-Striebel smoothing (IERTSS) utilizing inertial measuring units (IMUs) and an ultrasonic positioning system. In this mode, an iterated Extended Kalman filter (IEKF) is used in forward data processing of the Extended Rauch-Tung-Striebel smoothing (ERTSS) to improve the accuracy of the filtering output for the smoother. Furthermore, in order to achieve the on-line smoothing, IERTSS is embedded into the average filter. For verification, a real indoor test has been done to assess the performance of the proposed method. The results show that the proposed method is effective in reducing the errors compared with the conventional schemes.
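
    The backward pass that the smoothing in this record builds on is short. The sketch below shows the standard linear Rauch-Tung-Striebel recursion over stored filter outputs; in the paper this is the extended/iterated variant run on-line, so the code is only a structural illustration with a made-up one-dimensional example.

        # Sketch: linear Rauch-Tung-Striebel (RTS) backward smoothing pass.
        import numpy as np

        def rts_smoother(xf, Pf, xp, Pp, F):
            """xf[k], Pf[k]: filtered state/covariance at step k.
            xp[k], Pp[k]: one-step predictions used at step k (xp[0]/Pp[0] unused).
            Returns smoothed states and covariances."""
            n = len(xf)
            xs, Ps = [None] * n, [None] * n
            xs[-1], Ps[-1] = xf[-1], Pf[-1]
            for k in range(n - 2, -1, -1):
                C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])              # smoother gain
                xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
                Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
            return xs, Ps

        # Tiny 1-D random-walk demo with hand-made forward-filter outputs.
        F = np.array([[1.0]])
        xf = [np.array([0.0]), np.array([1.1]), np.array([1.9])]
        Pf = [np.eye(1) * 0.5 for _ in range(3)]
        xp = [None, np.array([0.0]), np.array([1.1])]                   # xp[k] = F @ xf[k-1]
        Pp = [None, np.eye(1) * 1.0, np.eye(1) * 1.0]
        xs, _ = rts_smoother(xf, Pf, xp, Pp, F)
        print([x.item() for x in xs])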

  6. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  7. Hierarchical HMM based learning of navigation primitives for cooperative robotic endovascular catheterization.

    Science.gov (United States)

    Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong

    2014-01-01

    Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.

  8. Preliminary study on magnetic tracking-based planar shape sensing and navigation for flexible surgical robots in transoral surgery: methods and phantom experiments.

    Science.gov (United States)

    Song, Shuang; Zhang, Changchun; Liu, Li; Meng, Max Q-H

    2018-02-01

    Flexible surgical robots can work in confined and complex environments, which makes them a good option for minimally invasive surgery. In order to utilize flexible manipulators in complicated and constrained surgical environments, it is of great significance to monitor the position and shape of the curvilinear manipulator in real time during the procedures. In this paper, we propose a magnetic tracking-based planar shape sensing and navigation system for flexible surgical robots in transoral surgery. The system can provide the real-time tip position and shape information of the robot during the operation. We use a wire-driven flexible robot with three degrees of freedom as the manipulator. A permanent magnet is mounted at the distal end of the robot, and its magnetic field can be sensed with a magnetic sensor array. Therefore, the position and orientation of the tip can be estimated using a tracking method. A shape sensing algorithm is then carried out to estimate the real-time shape based on the tip pose. With the tip pose and shape displayed in the 3D reconstructed CT model, navigation can be achieved. Using the proposed system, we carried out planar navigation experiments on a skull phantom, touching three different target positions under the guidance of the skull display interface. During the experiments, the real-time shape was well monitored and the distance errors between the robot tip and the targets in the skull were recorded. The mean navigation error is [Formula: see text] mm, while the maximum error is 3.2 mm. The proposed method has the advantages that no sensors need to be mounted on the robot and that there is no line-of-sight problem. Experimental results verified the feasibility of the proposed method.

  9. Highly dexterous 2-module soft robot for intra-organ navigation in minimally invasive surgery.

    Science.gov (United States)

    Abidi, Haider; Gerboni, Giada; Brancadoro, Margherita; Fras, Jan; Diodato, Alessandro; Cianchetti, Matteo; Wurdemann, Helge; Althoefer, Kaspar; Menciassi, Arianna

    2018-02-01

    For some surgical interventions, like the Total Mesorectal Excision (TME), traditional laparoscopes lack the flexibility to safely maneuver and reach difficult surgical targets. This paper answers this need through designing, fabricating and modelling a highly dexterous 2-module soft robot for minimally invasive surgery (MIS). A soft robotic approach is proposed that uses flexible fluidic actuators (FFAs) allowing highly dexterous and inherently safe navigation. Dexterity is provided by an optimized design of fluid chambers within the robot modules. Safe physical interaction is ensured by fabricating the entire structure by soft and compliant elastomers, resulting in a squeezable 2-module robot. An inner free lumen/chamber along the central axis serves as a guide of flexible endoscopic tools. A constant curvature based inverse kinematics model is also proposed, providing insight into the robot capabilities. Experimental tests in a surgical scenario using a cadaver model are reported, demonstrating the robot advantages over standard systems in a realistic MIS environment. Simulations and experiments show the efficacy of the proposed soft robot. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Development of a Multi-functional Soft Robot (SNUMAX) and Performance in RoboSoft Grand Challenge

    Directory of Open Access Journals (Sweden)

    Jun-Young Lee

    2016-10-01

    Full Text Available This paper introduces SNUMAX, the grand winner of the RoboSoft Grand Challenge. SNUMAX was built to complete all the tasks of the challenge. Completing these tasks required robotic compliant components that could adapt to variable situations and environments and generate enough stiffness to maintain performance. SNUMAX has three key components: transformable origami wheels, a polymer-based variable stiffness manipulator, and an adaptive caging gripper. This paper describes the design of these components and how they worked together to allow the robot to perform the contest’s navigation and manipulation tasks.

  11. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  12. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  13. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Directory of Open Access Journals (Sweden)

    Emmanuele eTidoni

    2014-06-01

    Full Text Available Advancement in brain computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid’s actual walking reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve the motor decisions of the BCI user and enhance the feeling of control over the robot. Our results shed light on the possibility of increasing control over the robot through the combination of multisensory feedback provided to a BCI user.

  14. Development of a self-navigating mobile interior robot application as a security guard/sentry

    International Nuclear Information System (INIS)

    Klarer, P.R.; Harrington, J.J.

    1986-07-01

    This paper describes a mobile robot system designed to function as part of an overall security system at a high security facility. The features of this robot system include specialized software and sensors for navigation without the need for external locator beacons or signposts, sensors for remote imaging and intruder detection, and the ability to communicate information either directly to the electronic portion of the security system or to a manned central control center. Other desirable features of the robot system include low weight, compact size, and low power consumption. The robot system can be operated either by remote manual control, or it can operate autonomously where direct human control can be limited to the global command level. The robot can act as a mobile remote sensing platform for alarm assessment or roving patrol, as a point sensor (sentry) in routine security applications, or as an exploratory device in situations potentially hazardous to humans. This robot system may also be used to "walk-test" intrusion detection sensors as part of a routine test and maintenance program for an interior intrusion detection system. The hardware, software, and operation of this robot system will be briefly described herein.

  15. A spatial registration method for navigation system combining O-arm with spinal surgery robot

    Science.gov (United States)

    Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.

    2018-05-01

    Minimally invasive spinal surgery has become increasingly popular in recent years as it reduces the chance of post-operative complications. However, the procedure is complicated and the surgical view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist the surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasingly common. One of the major problems in a surgical navigation system is associating the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, which uses the O-arm to scan a calibration phantom with metal calibration spheres. First, the metal artifacts in the CT slices are reduced, and then the circles in the images are identified based on invariant moments. Further, the position of each calibration sphere in the image space is obtained. Moreover, the registration matrix is obtained based on the ICP algorithm. Finally, the position error is calculated to verify the feasibility and accuracy of the registration method.
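
    With the calibration-sphere centres detected in both the image space and the robot space, the registration reduces to a rigid point-set alignment; the SVD (Kabsch) step below is the closed form that ICP iterates, shown here applied directly because the correspondences are known. The coordinates are made-up stand-ins for detected sphere centres.

        # Sketch: closed-form rigid registration of corresponding sphere centres.
        import numpy as np

        def rigid_register(src, dst):
            """Find R, t such that R @ src_i + t ~= dst_i in the least-squares sense."""
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)                         # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflections
            R = Vt.T @ D @ U.T
            t = cd - R @ cs
            return R, t

        image_pts = [(0, 0, 0), (50, 0, 0), (0, 50, 0), (0, 0, 50)]         # centres in image space (mm)
        robot_pts = [(10, 20, 5), (10, 70, 5), (-40, 20, 5), (10, 20, 55)]  # same spheres in robot space
        R, t = rigid_register(image_pts, robot_pts)
        print(np.round(R, 3))
        print(np.round(t, 3))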

  16. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot equipped with a pan-tilt-zoom camera in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  17. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    Science.gov (United States)

    Tunstel, Edward

    2000-01-01

    This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
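
    The threshold-activation idea can be sketched in a few lines of Python (the behaviour names, membership shapes and threshold values below are illustrative, not those of the paper): each fuzzy behaviour reports an applicability degree, and its steering recommendation is blended into the final command only when that degree exceeds the behaviour's activation threshold.

        def triangular(x, a, b, c):
            """Simple triangular membership function."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def blend_behaviours(behaviours, thresholds):
            """Weight each behaviour's steering command by its activation degree,
            but only when that degree exceeds the behaviour's threshold."""
            num, den = 0.0, 0.0
            for name, (degree, command) in behaviours.items():
                if degree >= thresholds.get(name, 0.0):    # threshold gate
                    num += degree * command
                    den += degree
            return num / den if den > 0 else 0.0

        # Illustrative use: obstacle avoidance contributes only when clearly applicable.
        front_range = 0.8    # metres to nearest obstacle (assumed sensor reading)
        behaviours = {
            "seek_goal":      (0.9, +5.0),                                     # degree, steering (deg)
            "avoid_obstacle": (triangular(front_range, 0.0, 0.5, 1.5), -20.0),
        }
        print(blend_behaviours(behaviours, thresholds={"avoid_obstacle": 0.3}))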

  18. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation with a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free plane surface to track the specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. In addition, an improved control method based on a canonical piecewise-linear function (PLF) is proposed. In order to restrain the noise disturbance from the camera sensor, a band-width control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper band-width control value.
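
    The band-width control idea can be illustrated with a small Python sketch (parameter values invented): a steering correction is issued only when the measured deviation from the desired direction field leaves a tolerance band, so small camera-noise fluctuations produce no control action.

        def banded_correction(deviation, band=0.05, gain=1.0):
            """Return a steering correction only when the deviation exceeds the
            tolerance band; smaller deviations are treated as sensor noise."""
            if abs(deviation) <= band:
                return 0.0
            sign = 1.0 if deviation > 0 else -1.0
            return gain * (deviation - sign * band)   # act on the part outside the band

        # Small noisy deviations produce no correction; larger ones do.
        for d in (0.02, -0.04, 0.20, -0.30):
            print(d, banded_correction(d))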

  19. Expert robots in nuclear plants

    International Nuclear Information System (INIS)

    Byrd, J.S.; Fisher, J.J.; DeVries, K.R.; Martin, T.P.

    1987-01-01

    Expert robots enhance safety and operations in nuclear plants. E.I. du Pont de Nemours and Company, Savannah River Laboratory, is developing expert mobile robots for deployment in nuclear applications at the Savannah River Plant. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation and manipulation functions, and to analyze sensory information. Development work using two research vehicles is underway to demonstrate semiautonomous, intelligent, expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being used to allow mobile robots to autonomously navigate and perform tasks in known environments without the need for large computer systems

  20. Expert robots in nuclear plants

    International Nuclear Information System (INIS)

    Byrd, J.S.; Fisher, J.J.; DeVries, K.R.; Martin, T.P.

    1987-01-01

    Expert robots will enhance safety and operations in nuclear plants. E. I. du Pont de Nemours and Company, Savannah River Laboratory, is developing expert mobile robots for deployment in nuclear applications at the Savannah River Plant. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation and manipulation functions, and to analyze sensory information. Development work using two research vehicles is underway to demonstrate semiautonomous, intelligent, expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being used to allow mobile robots to autonomously navigate and perform tasks in known environments without the need for large computer systems

  1. A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2009-09-01

    Full Text Available This paper presents a novel two-step autonomous navigation method for a search and rescue robot. A vision-based algorithm is proposed for terrain identification, giving a prediction of the safest path using a support vector regression machine (SVRM) trained off-line on texture and color features. A correction algorithm for the prediction, based on vibration information gathered while the robot travels, is developed using the judgment function given in the paper. Regions with faulty predictions are corrected with the real traversability value and used to update the SVRM. The experiment demonstrates that this method helps the robot find the optimal path and protects it from traps caused by the discrepancy between the prediction and the real environment.

  2. Navigating the pathway to robotic competency in general thoracic surgery.

    Science.gov (United States)

    Seder, Christopher W; Cassivi, Stephen D; Wigle, Dennis A

    2013-01-01

    Although robotic technology has addressed many of the limitations of traditional videoscopic surgery, robotic surgery has not gained widespread acceptance in the general thoracic community. We report our initial robotic surgery experience and propose a structured, competency-based pathway for the development of robotic skills. Between December 2008 and February 2012, a total of 79 robot-assisted pulmonary, mediastinal, benign esophageal, or diaphragmatic procedures were performed. Data on patient characteristics and perioperative outcomes were retrospectively collected and analyzed. During the study period, one surgeon and three residents participated in a triphasic, competency-based pathway designed to teach robotic skills. The pathway consisted of individual preclinical learning followed by mentored preclinical exercises and progressive clinical responsibility. The robot-assisted procedures performed included lung resection (n = 38), mediastinal mass resection (n = 19), hiatal or paraesophageal hernia repair (n = 12), and Heller myotomy (n = 7), among others (n = 3). There were no perioperative mortalities, with a 20% complication rate and a 3% readmission rate. Conversion to a thoracoscopic or open approach was required in eight pulmonary resections to facilitate dissection (six) or to control hemorrhage (two). Fewer major perioperative complications were observed in the latter half of the experience. All residents who participated in the thoracic surgery robotic pathway perform robot-assisted procedures as part of their clinical practice. Robot-assisted thoracic surgery can be safely learned when skill acquisition is guided by a structured, competency-based pathway.

  3. From Self-Assessment to Frustration, A Small Step Towards Autonomy in Robotic Navigation.

    Directory of Open Access Journals (Sweden)

    Adrien eJauffret

    2013-10-01

    Full Text Available Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on the behavior's quality, from a given fitness system, in order to take correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interactions to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty detection algorithm, that may be able to self-evaluate any sensory-motor strategy. This architecture learns contingencies between sensations and actions, giving the expected sensation from the previous perception. The prediction error, coming from surprising events, provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on the prediction progress allows the system to regulate its behavior to solve complex navigation tasks, and also succeeds in asking for help if it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We made several experiments that can account for such properties for two different strategies (road following and place-cell-based navigation) in different situations.
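
    A minimal Python sketch of such a prediction-error-based self-assessment signal (the thresholds, window sizes and names are assumptions, not the neural architecture of the paper): a running prediction error flags surprising situations, and a sustained lack of prediction progress raises a "frustration" flag that could trigger a request for help.

        import numpy as np

        class NoveltyMonitor:
            """Tracks a smoothed prediction error and how long it has been
            since that error last improved."""

            def __init__(self, alpha=0.1, surprise_thr=0.5, stall_steps=50):
                self.alpha = alpha            # smoothing factor for the error estimate
                self.err = 0.0                # running mean prediction error
                self.best = float("inf")      # best (lowest) error seen so far
                self.stalled = 0              # steps without prediction progress
                self.surprise_thr = surprise_thr
                self.stall_steps = stall_steps

            def update(self, predicted, observed):
                e = float(np.linalg.norm(np.asarray(predicted) - np.asarray(observed)))
                self.err = (1 - self.alpha) * self.err + self.alpha * e
                if self.err < self.best - 1e-6:
                    self.best, self.stalled = self.err, 0     # prediction progress
                else:
                    self.stalled += 1
                surprised = self.err > self.surprise_thr      # novelty detected
                frustrated = self.stalled > self.stall_steps  # ask for help
                return surprised, frustrated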

  4. Mapping, Navigation, and Learning for Off-Road Traversal

    DEFF Research Database (Denmark)

    Konolige, Kurt; Agrawal, Motilal; Blas, Morten Rufus

    2009-01-01

    The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments, using stereo vision, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.

  5. Adding memory processing behaviors to the fuzzy behaviorist-based navigation of mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Pin, F.G.; Bender, S.R.

    1996-05-01

    Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.
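
    A minimal Python sketch of one such memory behaviour (the distances and counts are illustrative, not those of the paper): visited poses are remembered, and a probable limit cycle is flagged when the robot keeps returning to places it has already been.

        import math

        class PoseMemory:
            """Remembers visited (x, y) poses and flags likely limit cycles."""

            def __init__(self, radius=0.3, revisit_limit=2):
                self.visited = []              # list of (x, y) poses
                self.radius = radius           # how close counts as "the same place"
                self.revisit_limit = revisit_limit

            def update(self, x, y):
                revisits = sum(1 for (px, py) in self.visited
                               if math.hypot(px - x, py - y) < self.radius)
                self.visited.append((x, y))
                return revisits >= self.revisit_limit   # True => likely limit cycle

        memory = PoseMemory()
        for pose in [(0, 0), (1, 0), (1, 1), (0, 0.1), (1, 0.05), (0.05, 0)]:
            if memory.update(*pose):
                print("limit cycle suspected at", pose)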

  6. Robotic Oncological Surgery: Technology That's Here to Stay?

    Directory of Open Access Journals (Sweden)

    HRH Patel

    2009-09-01

    Full Text Available A robot functioning in an environment may exhibit various forms of behavior that emerge from the interaction with its environment through sense, control and plan activities. Hence, this paper introduces a behaviour-selection-based navigation and obstacle avoidance algorithm with an effective method for adapting robotic behavior according to the environment conditions and the navigated terrain. The developed algorithm enables the robot to select the suitable behavior in real time to avoid obstacles, based on sensory information from visual and ultrasonic sensors, utilizing the robot's ability to step over obstacles and move between surfaces of different heights. In addition, it allows the robot to react in an appropriate manner to changing conditions, either by fine-tuning of behaviors or by selecting a different set of behaviors to increase the efficiency of the robot over time. The presented approach has been demonstrated on a quadruped robot in several different experimental environments and the paper provides an analysis of its performance.

  7. Iconic memory-based omnidirectional route panorama navigation.

    Science.gov (United States)

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon while the robot moves to its goal. While the robot is navigating to the goal point, the input is matched against the memorized spatio-temporal route pattern using dual active contour models, and the exact robot position and orientation are estimated from the converged shape of the active contour models.

  8. ARIES: A mobile robot inspector

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1995-01-01

    ARIES (Autonomous Robotic Inspection Experimental System) is a mobile robot inspection system being developed for the Department of Energy (DOE) to survey and inspect drums containing mixed and low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an autonomous inspection operation, typically performed by a human operator. It will make real-time decisions about the condition of the drums, maintain a database of pertinent information about each drum, and generate reports

  9. An optimized field coverage planning approach for navigation of agricultural robots in fields involving obstacle areas

    DEFF Research Database (Denmark)

    Hameed, Ibahim; Bochtis, D.; Sørensen, C.A.

    2013-01-01

    Technological advances combined with the demand for cost efficiency and environmental considerations lead farmers to review their practices towards the adoption of new managerial approaches, including enhanced automation. The application of field robots is one of the most promising advances among them. The approach presented involves the handling of in-field obstacle areas, the generation of headland paths for the field and for each obstacle area, the implementation of a genetic algorithm to optimize the sequence in which the field robot vehicle visits the blocks, and an algorithmic generation of the task sequences derived from the farmer's practices. This approach has proven that it is possible to capture the practices of farmers and embed these practices in an algorithmic description providing a complete field-area coverage plan in a form prepared for execution by the navigation system of a field robot.

  10. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    Science.gov (United States)

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing for making them effective remote telepresence systems for home care assistance are capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.

  11. Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology

    OpenAIRE

    Barnaud , Marie-Lou; Morgado , Nicolas; Palluel-Germain , Richard; Diard , Julien; Spalanzani , Anne

    2014-01-01

    International audience; In order to navigate in a social environment, a robot must be aware of social spaces, which include proximity and interaction-based constraints. Previous models of interaction and personal spaces have been inspired by studies in social psychology but not systematically grounded and validated with respect to experimental data. We propose to implement personal and interaction space models in order to replicate a classical psychology experiment. Our robotic simulations ca...

  12. Optical angular constancy is maintained as a navigational control strategy when pursuing robots moving along complex pathways.

    Science.gov (United States)

    Wang, Wei; McBeath, Michael K; Sugar, Thomas G

    2015-03-24

    The optical navigational control strategy used to intercept moving targets was explored using a real-world object that travels along complex, evasive pathways. Fielders ran across a gymnasium attempting to catch a moving robot that varied in speed and direction, while ongoing position was measured using an infrared motion-capture system. Fielder running paths were compared with the predictions of three lateral control models, each based on maintaining a particular optical angle relative to the robotic target: (a) constant alignment angle (CAA), (b) constant eccentricity angle (CEA), and (c) linear optical trajectory (LOT). Findings reveal that running pathways were most consistent with maintenance of LOT and least consistent with CEA. This supports that fielders use the same optical control strategy of maintaining angular constancy using a LOT when navigating toward targets moving along complex pathways as when intercepting simple ballistic trajectories. In those cases in which a target dramatically deviates from its optical path, fielders appear to simply reset LOT parameters using a new constant angle value. Maintenance of such optical angular constancy has now been shown to work well with ballistic, complex, and evasive moving targets, confirming the LOT strategy as a robust, general-purpose optical control mechanism for navigating to intercept catchable targets, both airborne and ground based. © 2015 ARVO.

  13. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the bias of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.

  14. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    Full Text Available This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focused on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries of similar colour to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) could then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets, wooden, and concrete floors but had difficulty in separating colours in multicoloured floor types such as patterned carpets.
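
    A minimal OpenCV sketch of the segmentation step (the floor colour range and thresholds below are invented for illustration and would have to be chosen for the actual floor):

        import cv2
        import numpy as np

        def traversable_map(bgr_image, floor_lo=(0, 0, 60), floor_hi=(40, 80, 220)):
            """Mark pixels in an assumed floor colour range as free space (white)
            and everything else as obstacle (black); Canny edges then cut off
            similarly coloured regions across boundaries."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            floor = cv2.inRange(hsv, np.array(floor_lo), np.array(floor_hi))   # 255 = floor
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            floor[edges > 0] = 0        # edges break up falsely merged floor regions
            return floor                # binary map: white free, black obstacle

        # Usage sketch (assumes frame.png is a camera image of the floor ahead):
        # binary = traversable_map(cv2.imread("frame.png"))
        # cv2.imwrite("binary_map.png", binary)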

  15. OPTIMAL TOUR CONSTRUCTIONS FOR MULTIPLE MOBILE ROBOTS

    Directory of Open Access Journals (Sweden)

    AMIR A. SHAFIE

    2011-04-01

    Full Text Available Attempts to use mobile robots in a variety of environments are currently limited by their navigational capability, so a set of robots must be configured for one specific environment. Navigating an environment is the fundamental problem in mobile robotics, for which various methods, including exact and heuristic approaches, have been proposed. This paper proposes a solution to the navigation problem via the use of multiple robots to explore the environment, employing heuristic methods based on a variant of the Traveling Salesman Problem (TSP) known as the Multiple Traveling Salesman Problem (M-TSP).

  16. Smart Material-Actuated Flexible Tendon-Based Snake Robot

    Directory of Open Access Journals (Sweden)

    Mohiuddin Ahmed

    2016-05-01

    Full Text Available A flexible snake robot has better navigation ability compared with existing electrical-motor-based rigid snake robots, due to its excellent bending capability when navigating inside a narrow maze. This paper discusses the modelling, simulation and experimental testing of a flexible snake robot. The modelling consists of the kinematic and dynamic analysis of the snake robot. A platform based on the Incompletely Restrained Positioning Mechanism (IRPM) is proposed, which uses the external force provided by a compliant flexible beam in each of the actuators. The compliant central column allows the configuration to achieve three degrees of freedom (3DOFs) with three tendons. The proposed flexible snake robot has been built using smart materials, such as electroactive polymers (EAPs), which can be activated by applying power. Finally, a physical prototype of the snake robot has been built, and an experiment has been performed to validate the proposed model.

  17. Upload of Dead Reckoning Measurements for Improved Navigational Efficiency on Embedded Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Tickle, Andrew J; Harvey, Paul K, E-mail: prouction_leader@hotmail.com [School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3GJ (United Kingdom)

    2011-08-17

    The process behind Dead Reckoning (DR) is simple: a robot can know its current location via a record of its starting position, direction and speed, without the need to look for landmarks or follow lines. This allows a robot to drive around a known environment such as indoors or heavy urban areas where traditional GPS navigation would not be an option. Discussed in this paper is an improvement of a previously designed DR mechanism in DSP Builder, where the user now enters the DR measurements and commands as a sequence via a keypad. This replaces the need for the user to programme the details into the system by altering numerous value tags within the design one by one, making it more user-independent and easier to adapt to different environments. The paper shows updated simulations for repeatability, how the keypad links to the system and where this work will lead.
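
    The dead-reckoning update itself is straightforward; the following Python sketch (with illustrative units and an invented keypad command sequence) integrates commanded speed and turn rate to track the pose without landmarks:

        import math

        def dead_reckon(x, y, heading_deg, speed, turn_rate_deg, dt):
            """One dead-reckoning step: integrate speed and turn rate to update
            the pose. Units are metres, degrees and seconds (Euler integration)."""
            heading_deg += turn_rate_deg * dt
            x += speed * math.cos(math.radians(heading_deg)) * dt
            y += speed * math.sin(math.radians(heading_deg)) * dt
            return x, y, heading_deg

        # Keypad-style command sequence: (speed m/s, turn rate deg/s, duration s)
        commands = [(0.2, 0.0, 5.0), (0.0, 90.0, 1.0), (0.2, 0.0, 3.0)]
        pose = (0.0, 0.0, 0.0)
        for speed, turn, duration in commands:
            t = 0.0
            while t < duration:
                pose = dead_reckon(*pose, speed, turn, 0.1)
                t += 0.1
        print("estimated pose:", pose)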

  18. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.

  19. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.

    Science.gov (United States)

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2016-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.

  20. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    ... lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our everyday environments, they need to become more intelligent and behave safely and naturally towards the humans in the environment. This thesis deals with making intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specifically, the emphasis is on safe and natural motion and navigation issues. The first part of the work focuses on developing a robotic system which estimates human interest in interacting with it. As well as being able to navigate safely around one person, the robots must also be able to navigate in environments with more people, such as pedestrian streets, hospital corridors, train stations or airports. The developed human-aware navigation strategy is enhanced to formulate...

  1. Autonomous mobile robot teams

    Science.gov (United States)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  2. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach to collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of robots in implementing their tasks, especially when they navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods treat the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this causes the people and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction. The person notifies the robot about the avoidance direction via interaction, and the robot searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects the navigation path autonomously, choosing the path that is closest to the goal location. The humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify the gestures: the first model is a Back-Propagation Neural Network (BPNN), and the second is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding obstacles is implemented and integrated with the HRI system. The system was tested on an H20 robot from DrRobot Company (Canada) and a set of experiments was implemented to report the performance of the system in interacting with the human and avoiding

  3. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

    Full Text Available This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller provides a 'connect-and-play' service for the user to control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robot. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into the corresponding control commands according to their characteristic curves. A vibration motor provides vibrotactile feedback to the user when an improper operation is performed. The performance of the proposed hand motion-based interface has been compared with that of a traditional keyboard-and-mouse interface in robot navigation experiments. The experimental results show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller, its precision is 15.4% higher, and its execution time is 24.7% shorter. This means that the proposed hand motion-based interface is more efficient and flexible.

  4. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation.

    Science.gov (United States)

    Socas, Rafael; Dormido, Raquel; Dormido, Sebastián

    2018-01-18

    In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies are proposed to measure the consumption of different resources. The use of bandwidth, computational load and energy resources has been investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system has been compared with its equivalent discrete-time solution. In the experiments, an NCS application for mobile robot navigation has been set up and its resource usage efficiency has been analysed.
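
    A minimal send-on-delta sketch illustrates the event-based idea (the threshold and data are invented): a sample is transmitted over the network only when it differs from the last transmitted value by more than a threshold, in contrast to a discrete-time scheme that transmits every sample.

        def event_triggered_updates(measurements, threshold=0.5):
            """Return the (index, value) pairs that would actually be transmitted."""
            sent, last = [], None
            for k, y in enumerate(measurements):
                if last is None or abs(y - last) > threshold:
                    sent.append((k, y))     # transmit and update the controller
                    last = y
            return sent

        samples = [0.0, 0.1, 0.2, 0.9, 1.0, 1.7, 1.8, 1.85]
        print(event_triggered_updates(samples))   # only a few samples are sent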

  5. Determining navigability of terrain using point cloud data.

    Science.gov (United States)

    Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt

    2013-06-01

    This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.

  6. Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Andersen, Jens Christian

    2007-01-01

    the current position to a desired destination. This thesis presents and experimentally validates solutions for road classification, obstacle avoidance and mission execution. The road classification is based on laser scanner measurements and supported at longer ranges by vision. It is sufficiently sensitive to separate the road from flat roadsides and to distinguish asphalt roads from gravelled roads. The vision-based road detection uses a combination of chromaticity and edge detection to outline the traversable part of the road based on a laser-scanner-classified sample area. The perceptions of these two sensors are utilised by a path planner to allow a number of drive modes, and especially the ability to follow road edges is investigated. The navigation mission is controlled by a script language. The navigation script controls route sequencing, junction detection, junction crossing...

  7. Intraoperative navigation of an optically tracked surgical robot.

    Science.gov (United States)

    Cornellà, Jordi; Elle, Ole Jakob; Ali, Wajid; Samset, Eigil

    2008-01-01

    This paper presents an adaptive control scheme for improving the performance of a surgical robot when it executes tasks autonomously. A commercial tracking system is used to correlate the robot with the preoperative plan as well as to correct the position of the robot when errors between the real and planned positions are detected. Due to the noisy signals provided by the tracking system, a Kalman filter is proposed to smooth the variations and to increase the stability of the system. The efficiency of the approach has been validated using rigid and flexible endoscopic tools; in both cases the target points could be reached with an error of less than 1 mm. These results make the approach suitable for a range of abdominal procedures, such as autonomous repositioning of endoscopic tools or of probes for percutaneous procedures.
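
    A minimal Python sketch of the smoothing role a Kalman filter plays here, using a constant-position model with illustrative noise variances (the abstract does not give the actual filter design or tuning):

        import numpy as np

        def kalman_smooth(measurements, process_var=1e-4, meas_var=4e-2):
            """Filter jittery tracker positions with a constant-position model."""
            x = np.asarray(measurements[0], dtype=float)    # state: tracked position
            P = np.eye(x.size)                              # state covariance
            Q = process_var * np.eye(x.size)                # process noise
            R = meas_var * np.eye(x.size)                   # measurement noise
            estimates = [x.copy()]
            for z in measurements[1:]:
                P = P + Q                                   # predict (position assumed constant)
                K = P @ np.linalg.inv(P + R)                # Kalman gain
                x = x + K @ (np.asarray(z, dtype=float) - x)
                P = (np.eye(x.size) - K) @ P
                estimates.append(x.copy())
            return estimates

        # Usage sketch with jittery 3-D tracker readings around a fixed target (mm):
        noisy = [(10.2, 5.1, 0.9), (9.8, 4.9, 1.1), (10.1, 5.2, 1.0), (9.9, 5.0, 0.95)]
        print(kalman_smooth(noisy)[-1])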

  8. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique

    Science.gov (United States)

    Wang, Jun-Qiang; Wang, Yu; Feng, Yun; Han, Wei; Su, Yong-Gang; Liu, Wen-Yong; Zhang, Wei-Jun; Wu, Xin-Bao; Wang, Man-Yi; Fan, Yu-Bo

    2017-01-01

    wire attempts in the robot-assisted group was significantly less than that in the freehand group (median [Q1, Q3]: 1.0 [1.0, 1.0] vs. 7.0 [1.0, 9.0] attempts; χ2 = 15.771, P < 0.001). The instrumented SI levels did not differ between the two groups (from S1 to S2, χ2 = 4.760, P = 0.093). Conclusions: The accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible. PMID:29067950

  9. New Control Paradigms for Resources Saving: An Approach for Mobile Robots Navigation

    Directory of Open Access Journals (Sweden)

    Rafael Socas

    2018-01-01

    Full Text Available In this work, an event-based control scheme is presented. The proposed system has been developed to solve control problems appearing in the field of Networked Control Systems (NCS). Several models and methodologies are proposed to measure the consumption of different resources. The use of bandwidth, computational load and energy resources has been investigated. This analysis shows how the parameters of the system impact resource efficiency. Moreover, the proposed system has been compared with its equivalent discrete-time solution. In the experiments, an NCS application for mobile robot navigation has been set up and its resource usage efficiency has been analysed.

  10. The KCLBOT: Exploiting RGB-D Sensor Inputs for Navigation Environment Building and Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Evangelos Georgiou

    2011-09-01

    Full Text Available This paper presents an alternative approach to implementing a stereo camera configuration for SLAM. The suggested approach implements a simplified method using a single RGB-D camera sensor, mounted on a maneuverable non-holonomic mobile robot, the KCLBOT, for extracting image feature depth information while maneuvering. Using a defined quadratic equation, based on the calibration of the camera, a depth computation model is derived based on the HSV color space map. Using this methodology it is possible to build navigation environment maps and carry out autonomous mobile robot path following and obstacle avoidance. This paper presents a calculation model which enables distance estimation using the RGB-D sensor from a Microsoft .NET Micro Framework device. Experimental results are presented to validate the distance estimation methodology.
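
    A minimal sketch of the quadratic depth-calibration idea in Python (the sample readings and reference depths below are invented; the paper's calibration data and coefficients are not reproduced here):

        import numpy as np

        raw_readings = np.array([420, 530, 610, 680, 740], dtype=float)   # sensor units
        true_depth_m = np.array([0.8, 1.2, 1.6, 2.0, 2.4])                # reference distances

        coeffs = np.polyfit(raw_readings, true_depth_m, deg=2)            # quadratic fit

        def estimate_depth(raw_value):
            """Convert a raw sensor reading to an approximate depth in metres."""
            return float(np.polyval(coeffs, raw_value))

        print(estimate_depth(575))   # depth estimate for an intermediate reading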

  11. Automatic Operation For A Robot Lawn Mower

    Science.gov (United States)

    Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.

    1987-02-01

    A domestic mobile robot, a lawn mower that performs in an automatic operation mode, has been built at the Center of Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work using a region-filling operation, a new kind of path planning for mobile robots. Several strategies for region-filling path planning have been developed for partly known or unknown environments. An advanced omnidirectional navigation system and a multisensor-based control system are also used in the automatic operation. Research on the robot lawn mower, especially on region-filling path planning, is significant for industrial and agricultural applications.
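
    Region filling can be illustrated with a simple boustrophedon sweep over a rectangular area (a Python sketch only; the strategies in the paper additionally handle partly known or unknown environments and obstacles):

        def region_filling_path(width, height, spacing):
            """Back-and-forth sweep: parallel rows separated by the cutting width."""
            path, x, going_up = [], 0.0, True
            while x <= width:
                ys = (0.0, height) if going_up else (height, 0.0)
                path.append((x, ys[0]))
                path.append((x, ys[1]))
                x += spacing
                going_up = not going_up
            return path

        # A 10 m x 6 m lawn mowed with a 0.5 m cutting width:
        waypoints = region_filling_path(10.0, 6.0, 0.5)
        print(len(waypoints), "waypoints, first four:", waypoints[:4])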

  12. Performance Evaluation Methods for Assistive Robotic Technology

    Science.gov (United States)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  13. Robotic platform for traveling on vertical piping network

    Science.gov (United States)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D

    2015-02-03

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  14. Virtual modeling of robot-assisted manipulations in abdominal surgery.

    Science.gov (United States)

    Berelavichus, Stanislav V; Karmazanovsky, Grigory G; Shirokov, Vadim S; Kubyshkin, Valeriy A; Kriger, Andrey G; Kondratyev, Evgeny V; Zakharova, Olga P

    2012-06-27

    To determine the effectiveness of using multidetector computed tomography (MDCT) data in preoperative planning of robot-assisted surgery, fourteen patients indicated for surgery underwent MDCT on 64- and 256-slice scanners. Before the examination, a specially constructed navigation net was placed on the patient's anterior abdominal wall. Processing of the MDCT data was performed on a Brilliance Workspace 4 (Philips). Virtual vectors imitating the robotic and assistant ports were placed on the anterior abdominal wall of the 3D model of the patient, considering the individual anatomy of the patient and the technical capabilities of the robotic arms. Sites for the ports were located by projection onto the roentgen-positive tags of the navigation net. There were no complications observed during surgery or in the post-operative period. We were able to reduce robotic arm interference during surgery. The surgical area was optimal for the robotic and assistant manipulators without any need for reinstallation of the trocars. This method allows modeling of the main steps of a robot-assisted intervention, optimizing operation of the manipulator and lowering the risk of injury to internal organs.

  15. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  16. Adaptive Human aware Navigation based on Motion Pattern Analysis

    DEFF Research Database (Denmark)

    Tranberg, Søren; Svenstrup, Mikael; Andersen, Hans Jørgen

    2009-01-01

    Respecting people’s social spaces is an important prerequisite for acceptable and natural robot navigation in human environments. In this paper, we describe an adaptive system for mobile robot navigation based on estimates of whether a person seeks to interact with the robot or not. The estimates...... are based on run-time motion pattern analysis compared to stored experience in a database. Using a potential field centered around the person, the robot positions itself at the most appropriate place relative to the person and the interaction status. The system is validated through qualitative tests...

  17. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available The Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
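
    A minimal ROS (rospy) sketch of the kind of node such a navigation system builds on; the /cmd_vel topic name and the constant forward speed are assumptions for illustration, not details taken from the paper:

        #!/usr/bin/env python
        import rospy
        from geometry_msgs.msg import Twist

        def drive_forward():
            rospy.init_node("hftr_drive_sketch")
            pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
            rate = rospy.Rate(10)          # publish at 10 Hz
            cmd = Twist()
            cmd.linear.x = 0.2             # m/s; a differential drive moves straight ahead
            while not rospy.is_shutdown():
                pub.publish(cmd)
                rate.sleep()

        if __name__ == "__main__":
            try:
                drive_forward()
            except rospy.ROSInterruptException:
                pass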

  18. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    Science.gov (United States)

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots in the same way as mechanical robots, with no consideration of the bio-robot's own innate living abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By utilizing the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work might lay a solid foundation for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  19. Justification of the technical requirements of a fully functional modular robot

    Directory of Open Access Journals (Sweden)

    Shlyakhov Nikita

    2017-01-01

    Full Text Available Modular robots are characterized by the limited built-in resources necessary for communication, connection and movement of modules when performing reconfiguration tasks on rigidly interconnected elements. In developing the technological fundamentals of designing modular robots with pairwise connection mechanisms, we analysed modern hardware and model algorithms typical of a fully functional robot, which provide independent locomotion, communication, navigation, decentralized power and control. A survey of actuators, batteries, sensors and communication means suitable for modular robotics is presented.

  20. Calibration and control for range imaging in mobile robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Dorum, O.H. [Norges Tekniske Hoegskole, Trondheim (Norway). Div. of Computer Systems and Telematics; Hoover, A. [University of South Florida, Tampa, FL (United States). Dept. of Computer Science and Engineering; Jones, J.P. [Oak Ridge National Lab., TN (United States)

    1994-06-01

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.

  1. Navigasi Berbasis Behavior dan Fuzzy Logic pada Simulasi Robot Bergerak Otonom

    Directory of Open Access Journals (Sweden)

    Rendyansyah

    2016-03-01

    Full Text Available A mobile robot is a robotic mechanism that is able to move automatically. Automatic movement of the robot requires a navigation system; navigation is the method used to determine the robot's motion. In this study, a robot navigation method is developed that combines behaviors with fuzzy logic. The behavior of the robot is divided into several modules, such as walking, avoiding obstacles, following walls, and handling corridors and U-shaped conditions. A mobile robot simulation is designed in a visual programming environment. The robot is equipped with seven distance sensors, divided into several groups to test the designed behaviors, so that the behavior of the robot generates speed and steering control. Experiments show that the mobile robot simulation runs smoothly under many conditions. This proves that the implementation of the behavior design and the fuzzy logic techniques on the robot works well.

  2. Robotics Potential Fields

    Directory of Open Access Journals (Sweden)

    Jordi Lucero

    2009-01-01

    Full Text Available This problem was to calculate the path a robot would take to navigate an obstacle field and reach its goal. Three obstacles were given as negative potential fields, which the robot avoided, and the goal was given as a positive potential field that attracted the robot. The robot decided each step based on its distance from, angle to, and influence from every object. After each step, the robot recalculated and determined its next step until it reached its goal. The robot's calculations and steps were simulated with Microsoft Excel.
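
    A minimal Python sketch of one step of such a potential-field navigator, with illustrative gains and obstacle positions (the original was implemented in a spreadsheet; this is not that implementation):

        import numpy as np

        def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0, influence=3.0, step=0.1):
            """One gradient step: the goal attracts, nearby obstacles repel."""
            pos, goal = np.asarray(pos, float), np.asarray(goal, float)
            force = k_att * (goal - pos)                        # attractive component
            for obs in obstacles:
                diff = pos - np.asarray(obs, float)
                d = np.linalg.norm(diff)
                if 1e-6 < d < influence:                        # repel only within range
                    force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
            return pos + step * force / (np.linalg.norm(force) + 1e-9)

        # Step toward the goal while skirting three obstacles.
        pos, goal = (0.0, 0.0), (10.0, 10.0)
        obstacles = [(3.0, 3.5), (5.0, 6.0), (7.5, 7.0)]
        for _ in range(200):
            pos = potential_step(pos, goal, obstacles)
        print("final position:", pos)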

  3. Open core control software for surgical robots.

    Science.gov (United States)

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    techniques for this purpose were introduced. The virtual fixture is a well-known technique that provides a "force guide" to support operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The extension of the virtual fixture is not a part of the Open Core Control system; however, a function such as the virtual fixture cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot to a 3D position sensor through the OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration work. The Open Core Control software is developed to be a widely used platform for surgical robots. Safety issues are essential for the control software of these complex medical devices. It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. For

  4. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    Science.gov (United States)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  5. Heuristic Decision-Making for Human-aware Navigation in Domestic Environments

    OpenAIRE

    Kirsch , Alexandra

    2016-01-01

    International audience; Robot navigation in domestic environments is still a challenge. This paper introduces a cognitively inspired decision-making method and an instantiation of it for (local) robot navigation in spatially constrained environments. We compare the method to two existing local planners with respect to efficiency, safety and legibility.

  6. Control of free-flying space robot manipulator systems

    Science.gov (United States)

    Cannon, Robert H., Jr.

    1990-01-01

    New control techniques for self contained, autonomous free flying space robots were developed and tested experimentally. Free flying robots are envisioned as a key element of any successful long term presence in space. These robots must be capable of performing the assembly, maintenance, and inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects were developed and carried out using lab models of satellite robots and a flexible manipulator. The second generation space robot models use air cushion vehicle (ACV) technology to simulate in 2-D the drag free, zero g conditions of space. The current work is divided into 5 major projects: Global Navigation and Control of a Free Floating Robot, Cooperative Manipulation from a Free Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.

  7. Biologically-Inspired Control Architecture for Musical Performance Robots

    Directory of Open Access Journals (Sweden)

    Jorge Solis

    2014-10-01

    At Waseda University, since 1990, the authors have been developing anthropomorphic musical performance robots as a means for understanding human control, introducing novel ways of interaction between musical partners and robots, and proposing applications for humanoid robots. In this paper, the design of a biologically-inspired control architecture for both an anthropomorphic flutist robot and a saxophone-playing robot is described. For the flutist robot, the authors have focused on implementing an auditory feedback system to improve the robot's calibration procedure so that all notes are played correctly during a performance. In particular, the proposed auditory feedback system is composed of three main modules: an Expressive Music Generator, a Feed-Forward Air Pressure Control System and a Pitch Evaluation System. For the saxophone-playing robot, a pressure-pitch controller (based on feedback error learning) was proposed and implemented to improve the sound produced by the robot during a musical performance. In both cases, a set of experiments is described to verify the improvements achieved with the biologically-inspired control approaches.

  8. University of Michigan workscope for 1991 DOE University program in robotics for advanced reactors

    International Nuclear Information System (INIS)

    Wehe, D.K.

    1990-01-01

    The University of Michigan (UM) is a member of a team of researchers, including the universities of Florida, Texas, and Tennessee, along with Oak Ridge National Laboratory, developing robotics for hazardous environments. The goal of this research is to develop intelligent and capable robots that can perform useful functions in the new generation of nuclear reactors currently under development. By augmenting human capabilities through remote robotics, increased safety, functionality, and reliability can be achieved. In accordance with the established lines of research responsibilities, our primary efforts during 1991 will continue to focus on the following areas: radiation imaging; mobile robot navigation; three-dimensional vision capabilities for navigation; and machine intelligence. This report discusses work that has been and will be done in these areas.

  9. Mapping of unknown industrial plant using ROS-based navigation mobile robot

    Science.gov (United States)

    Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.

    2017-10-01

    This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, producing a 2D/3D map for further critical evaluation. The experiment focuses on two parts: how the human and robot carry out remote interactions using a robust method, and how the robot perceives its surroundings as a 2D/3D perspective map. ROS (Robot Operating System) was used as the development and implementation tool, providing a robust data communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot successfully extends the human perspective for remote surveillance of a large industrial plant area. It was concluded that the proposed work is a robust solution for large-scale mapping within an unknown construction building.
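
    As an illustration of the kind of ROS message/topic communication the record refers to, the following minimal sketch publishes velocity commands to a teleoperated base. The topic name /cmd_vel, the 10 Hz rate and the speed values are illustrative assumptions, not details taken from the paper.

        #!/usr/bin/env python
        # Minimal rospy teleoperation sketch. Topic name, publish rate and speed
        # values are illustrative assumptions, not the paper's settings.
        import rospy
        from geometry_msgs.msg import Twist

        def drive_forward(speed=0.2, duration=5.0):
            """Publish a constant forward velocity for a fixed duration, then stop."""
            rospy.init_node('simple_teleop', anonymous=True)
            pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
            rate = rospy.Rate(10)                      # 10 Hz command rate
            cmd = Twist()
            cmd.linear.x = speed                       # forward speed in m/s
            end_time = rospy.Time.now() + rospy.Duration(duration)
            while not rospy.is_shutdown() and rospy.Time.now() < end_time:
                pub.publish(cmd)
                rate.sleep()
            pub.publish(Twist())                       # zero command stops the base

        if __name__ == '__main__':
            drive_forward()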

  10. A Novel Randomized Search Technique for Multiple Mobile Robot Paths Planning In Repetitive Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Vahid Behravesh

    2012-08-01

    The article studies the problem of path planning for multiple robots. The presented approach is based on priorities combined with a robust method for path finding in repetitive, dynamic environments. The model is generally applicable: no restriction is assumed on the number of degrees of freedom of the robots, and robots of different kinds can be used at the same time. A randomized method with hill-climbing over priority (precedence) plans is proposed, which is used to find a solution to a given trajectory-planning problem and to reduce the total track length. The method plans trajectories for individual robots in the configuration-time space, taking into account the intervals occupied by static objects as well as by the other robots, and the lengths of the tracks already traversed. To measure the risk of robots colliding with each other, a method based on the probability of the robots' movements is applied. The algorithm was applied to real robots with successful results; the proposed method was evaluated both on real robots and in simulation. A sequence of 100 tests with 8 robots was performed for comparison with a coordination method, and the current performance is effective, although further optimization is still possible. The performance estimates were obtained on a Windows system with a 3 GHz Intel Pentium IV, compiled with GCC 3.4. A PCGA robot was used for all experiments. For a large environment of 19 × 15 m2, where 40 tests were carried out, the model is able to plan high-quality paths in a very short time (less than a second). Moreover, lookup tables are used to store the costs incurred by previously planned robots, so that increasing the number of robots does not increase the computation time.
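
    A rough sketch of randomized hill-climbing over priority (precedence) plans is given below. The planner plan_with_priorities, which plans robots one by one in the given order and returns the summed track length, is an assumed black box standing in for the paper's planner.

        import random

        def hill_climb_priorities(robots, plan_with_priorities, restarts=10, iters=100):
            """Randomized hill-climbing over robot priority orderings.

            plan_with_priorities(order) is an assumed planner: it plans the robots
            one by one in the given order (later robots treat earlier ones as moving
            obstacles) and returns the summed track length, or float('inf') when no
            solution is found.
            """
            best_order, best_cost = None, float('inf')
            for _ in range(restarts):
                order = list(robots)
                random.shuffle(order)                    # random initial priority plan
                cost = plan_with_priorities(order)
                for _ in range(iters):
                    i, j = random.sample(range(len(order)), 2)
                    neighbour = list(order)
                    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]  # swap two priorities
                    c = plan_with_priorities(neighbour)
                    if c < cost:                         # keep only improving swaps
                        order, cost = neighbour, c
                if cost < best_cost:
                    best_order, best_cost = order, cost
            return best_order, best_cost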

  11. Optical Flow based Robot Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Kahlouche Souhila

    2008-11-01

    In this paper we develop an algorithm for visual obstacle avoidance for an autonomous mobile robot. The input to the algorithm is an image sequence grabbed by a camera embedded on the B21r robot in motion. The optical flow information is then extracted from the image sequence to be used in the navigation algorithm. The optical flow provides very important information about the robot's environment, such as the arrangement of obstacles, the robot heading, the time to collision and the depth. The strategy consists in balancing the amount of left-side and right-side flow to avoid obstacles; this technique allows the robot to navigate without colliding with obstacles. The robustness of the algorithm is shown through several examples.
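
    The left/right flow-balancing strategy can be sketched as follows. The Farneback dense optical flow and the proportional steering gain are assumptions made for illustration; the paper does not prescribe this exact flow computation.

        import cv2
        import numpy as np

        def steer_from_flow(prev_gray, cur_gray, gain=1.0):
            """Turn command from the left/right optical-flow imbalance.

            A positive output means 'turn right' (more flow, hence closer obstacles,
            on the left). Farneback flow and the proportional gain are illustrative
            choices, not the exact method used in the paper.
            """
            flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)           # per-pixel flow magnitude
            half = mag.shape[1] // 2
            left, right = mag[:, :half].mean(), mag[:, half:].mean()
            return gain * (left - right) / (left + right + 1e-6)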

  12. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is one of the best-known gaits for snake robots in disaster-recovery missions that require navigating narrow spaces. Other gaits for snake navigation, such as concertina or rectilinear motion, may be suitable for narrow spaces but become highly inefficient if the same gait is also used in open spaces, since the reduced friction makes movement difficult for the snake. A novel locomotion algorithm is proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. With this locomotion algorithm, the snake robot is able to navigate in narrow spaces. The developed algorithm overcomes the limitations of the other gaits in narrow-space navigation.

  13. Biologically based neural network for mobile robot navigation

    Science.gov (United States)

    Torres Muniz, Raul E.

    1999-01-01

    The new tendency in mobile robots is to create non-Cartesian systems based on reactions to their environment. This emerging technology is known as evolutionary robotics, which is combined with the biorobotics field. The new approach brings cost-effective solutions, flexibility, robustness, and dynamism to the design of mobile robots. It also provides fast reactions to sensory inputs and new interpretations of the environment or surroundings of the mobile robot. The subsumption architecture (SA) and the action selection dynamics, developed by Brooks and Maes respectively, have successfully produced autonomous mobile robots, initiating this new trend in evolutionary robotics. Their design keeps mobile robot control simple. This work presents a biologically inspired modification of these schemes. The hippocampal-CA3-based neural network (HCA3) developed by Williams Levy is used to implement the SA, while the action selection dynamics emerge from iterations of the levels of competence implemented with the HCA3. This replacement by the HCA3 results in a model that is biologically closer than the SA, combining behavior-based intelligence theory with neuroscience. The design is kept simple and is implemented on the Khepera miniature mobile robot. The control scheme yields an autonomous mobile robot that can be used for mail delivery and surveillance tasks on a building floor.

  14. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    Science.gov (United States)

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and thanks to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  15. ANALYSIS OF FREE ROUTE AIRSPACE AND PERFORMANCE BASED NAVIGATION IMPLEMENTATION IN THE EUROPEAN AIR NAVIGATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Svetlana Pavlova

    2014-12-01

    The European Air Traffic Management system requires continuous improvement as air traffic increases day by day. For this purpose, international organizations have developed the Free Route Airspace and Performance-Based Navigation concepts, which offer the required level of safety, capacity and environmental performance along with cost-effectiveness. The aim of the article is to provide a detailed analysis of the implementation status of Free Route Airspace and Performance-Based Navigation within the European region, including the Ukrainian air navigation system.

  16. Current status of endovascular catheter robotics.

    Science.gov (United States)

    Lumsden, Alan B; Bismuth, Jean

    2018-06-01

    In this review, we will detail the evolution of endovascular therapy as the basis for the development of catheter-based robotics. In parallel, we will outline the evolution of robotics in the surgical space and how the convergence of technology and the entrepreneurs who push this evolution have led to the development of endovascular robots. The current state-of-the-art and future directions and potential are summarized for the reader. Information in this review has been drawn primarily from our personal clinical and preclinical experience in use of catheter robotics, coupled with some ground-breaking work reported from a few other major centers who have embraced the technology's capabilities and opportunities. Several case studies demonstrating the unique capabilities of a precisely controlled catheter are presented. Most of the preclinical work was performed in the advanced imaging and navigation laboratory. In this unique facility, the interface of advanced imaging techniques and robotic guidance is being explored. Although this procedure employs a very high-tech approach to navigation inside the endovascular space, we have conveyed the kind of opportunities that this technology affords to integrate 3D imaging and 3D control. Further, we present the opportunity of semi-autonomous motion of these devices to a target. For the interventionist, enhanced precision can be achieved in a nearly radiation-free environment.

  17. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Gesture and speech semantically based spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and the minimal training required to perform gesture commands.

  18. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the
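
    The recursive Bayesian update over hidden user goals can be sketched as below. The likelihood function, which would encode the navigation heuristics applied to user input and gaze, is treated here as an assumed input rather than the paper's actual model.

        def update_goal_beliefs(beliefs, evidence, likelihood):
            """One recursive Bayesian update over candidate navigation goals.

            beliefs    : dict goal -> prior probability (sums to 1)
            evidence   : latest observation, e.g. a user command or a gaze sample
            likelihood : assumed function (evidence, goal) -> P(evidence | goal)
                         standing in for the paper's navigation heuristics
            """
            posterior = {g: p * likelihood(evidence, g) for g, p in beliefs.items()}
            total = sum(posterior.values())
            if total == 0.0:              # evidence explains no goal: keep the prior
                return dict(beliefs)
            return {g: p / total for g, p in posterior.items()}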

  19. Sistema de navegación para un robot limpiador de piscinas

    Directory of Open Access Journals (Sweden)

    Lorena Cardona Rendón

    2014-01-01

    This paper presents the development of a navigation system to estimate the position, velocity and orientation of a pool-cleaning robot that is to be automated. We employ the weighted least-squares technique for the design of the navigation system, which combines the noisy measurements of a tri-axial accelerometer and a gyroscope with the solution of the differential equations that describe the robot's movement. The navigation system was tested using a Simulink-based model of the robot obtained from a three-dimensional representation built with CAD software (Autodesk Inventor). The final part of the paper presents the results and draws some conclusions about the feasibility of implementing the navigation system in the automation of a swimming-pool cleaning robot.
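
    A minimal sketch of a weighted least-squares combination of noisy estimates of the same state is shown below, assuming the weights are chosen as inverse variances; the paper's full formulation over the robot's differential equations is more elaborate.

        import numpy as np

        def wls_estimate(measurements, variances):
            """Weighted least-squares fusion of several noisy estimates of one state.

            The weights are taken as inverse variances, an assumption about how the
            weighting in the paper is chosen.
            """
            z = np.asarray(measurements, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)
            estimate = np.sum(w * z) / np.sum(w)
            variance = 1.0 / np.sum(w)    # variance of the fused estimate
            return estimate, variance

        # Example: fuse a model-predicted roll angle (0.50 rad, variance 0.04) with a
        # sensor-derived value (0.46 rad, variance 0.01); the result is pulled
        # towards the more reliable source.
        est, var = wls_estimate([0.50, 0.46], [0.04, 0.01])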

  20. PERFORMANCE CHARACTERISTIC MEMS-BASED IMUs FOR UAVs NAVIGATION

    Directory of Open Access Journals (Sweden)

    H. A. Mohamed

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or low cost navigation sensors for various UAV applications is important research. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.

  1. Performance Characteristic Mems-Based IMUs for UAVs Navigation

    Science.gov (United States)

    Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or low cost navigation sensors for various UAV applications is important research. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.

  2. Human-Agent Teaming for Multi-Robot Control: A Literature Review

    Science.gov (United States)

    2013-02-01

    advent of the Google driverless car, autonomous farm equipment, and unmanned commercial aircraft (Mosher, 2012). The inexorable trend towards ... because a robot cannot be automated to navigate in difficult terrain. However, this high ratio will not be sustainable if large numbers of autonomous ... (Parasuraman et al., 2007). 3.5 RoboLeader: Past research indicates that autonomous cooperation between robots can improve the performance of the human

  3. Optimum path planning of mobile robot in unknown static and dynamic environments using Fuzzy-Wind Driven Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Anish Pandey

    2017-02-01

    This article introduces a singleton type-1 fuzzy logic system (T1-SFLS) controller and a Fuzzy-WDO hybrid for autonomous mobile robot navigation and collision avoidance in unknown static and dynamic environments. The Wind Driven Optimization (WDO) algorithm is used to optimize and tune the input/output membership function parameters of the fuzzy controller. The WDO algorithm works on the basis of the atmospheric motion of infinitesimally small air parcels navigating over an N-dimensional search domain. The performance of the proposed technique has been compared through many computer simulations and real-time experiments using the Khepera-III mobile robot. Compared to the T1-SFLS controller, the Fuzzy-WDO algorithm shows good agreement for mobile robot navigation.

  4. Path Planning and Replanning for Mobile Robot Navigation on 3D Terrain: An Approach Based on Geodesic

    Directory of Open Access Journals (Sweden)

    Kun-Lin Wu

    2016-01-01

    In this paper, mobile robot navigation on a 3D terrain with a single obstacle is addressed. The terrain is modelled as a smooth, complete manifold with well-defined tangent planes, and the hazardous region is modelled as an enclosing circle, with a radius tuned by the hazard grade, representing the obstacle projected onto the terrain to allow efficient path-obstacle intersection checking. To resolve the intersections along the initial geodesic, drawing on geodesic ideas from the differential geometry of surfaces and manifolds, we present a geodesic-based planning and replanning algorithm as a new method for obstacle avoidance on a 3D terrain without using boundary following on the obstacle surface. The replanning algorithm generates two new paths, each a composition of two geodesics, connected via critical points whose locations are found to rely heavily on the exploration of the terrain via directional scanning on the tangent plane at the first intersection point of the initial geodesic with the circle. An advantage of this geodesic path replanning procedure is that the traversability of the terrain crossed by the detour path can be explored at the planning stage using the local Gauss-Bonnet Theorem for the geodesic triangle. A simulation demonstrates the practicality of the analytical geodesic replanning procedure for navigating a constant-speed point robot on a 3D hill-like terrain.

  5. Mobile autonomous robot for radiological surveys

    International Nuclear Information System (INIS)

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1992-01-01

    The robotics development group at the Savannah River Laboratory (SRL) is developing a mobile autonomous robot that performs radiological surveys of potentially contaminated floors. The robot is called SIMON, which stands for Semi-Intelligent Mobile Observing Navigator. Certain areas of SRL are classified as radiologically controlled areas (RCAs). In an RCA, radioactive materials are frequently handled by workers, and thus the potential for contamination is ever present. Current methods for radiological floor surveying include labor-intensive manual scanning or random smearing of certain floor locations. An autonomous robot such as SIMON performs the surveying task much more efficiently and will track down contamination before it is contacted by humans. SIMON scans floors at a speed of 1 in./s and stops and alarms upon encountering contamination. Its environment is well defined, consisting of smooth building floors with wide corridors. The kinds of contamination SIMON is capable of detecting are alpha and beta-gamma. The contamination levels of interest are low to moderate.

  6. Sensor Fusion for Autonomous Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Plascencia, Alfredo

    Multi-sensor data fusion is a broad area of ongoing research that is applied to a wide variety of fields, such as mobile robotics. Mobile robots are complex systems in which the design and implementation of sensor fusion is a demanding task, but research applications are explored constantly. The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.

  7. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, using low-power, low-weight and low-volume Microelectromechanical Systems (MEMS) sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation, but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body such as Mars or the Moon. In this paper, an algorithm called EASI (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and an inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.

  8. Combining Hector SLAM and Artificial Potential Field for Autonomous Navigation Inside a Greenhouse

    Directory of Open Access Journals (Sweden)

    El Houssein Chouaib Harik

    2018-05-01

    The key factor for autonomous navigation is efficient perception of the surroundings while being able to move safely from an initial to a final point. We deal in this paper with a wheeled mobile robot working in a GPS-denied environment typical of a greenhouse. The Hector Simultaneous Localization and Mapping (SLAM) approach is used to estimate the robot's pose using a LIght Detection And Ranging (LIDAR) sensor. Waypoint following and obstacle avoidance are ensured by means of a new artificial potential field (APF) controller presented in this paper. The combination of Hector SLAM and the APF controller allows the mobile robot to perform periodic tasks that require autonomous navigation between predefined waypoints. It also provides the mobile robot with robustness to changing conditions that may occur inside the greenhouse, caused by the dynamics of plant development through the season. In this study, we show that the robot is safe to operate autonomously with a human present, and that, in contrast to classical odometry methods, no calibration is needed for repositioning the robot over repetitive runs. We include both hardware and software descriptions, as well as simulation and experimental results.
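
    A minimal sketch of a classical artificial-potential-field velocity command, of the kind combined here with the Hector SLAM pose estimate, is given below. The gains, the obstacle influence distance and the speed cap are illustrative assumptions; the paper proposes a modified APF controller whose details differ.

        import numpy as np

        def apf_velocity(pose, waypoint, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
            """Classical artificial-potential-field velocity command.

            pose, waypoint : 2D positions (e.g. the SLAM pose and the next waypoint)
            obstacles      : list of 2D obstacle points (e.g. LIDAR returns)
            Gains, influence distance d0 and the speed cap are illustrative values.
            """
            pose, waypoint = np.asarray(pose, float), np.asarray(waypoint, float)
            force = k_att * (waypoint - pose)                 # attractive term
            for obs in obstacles:
                diff = pose - np.asarray(obs, float)
                d = np.linalg.norm(diff)
                if 1e-6 < d < d0:                             # only nearby obstacles repel
                    force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
            speed = np.linalg.norm(force)
            return force / speed if speed > 1.0 else force    # cap the commanded speed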

  9. Wavefront Propagation and Fuzzy Based Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Adel Al-Jumaily

    2005-06-01

    Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path, while obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to navigate autonomously to its destination. This paper presents the approach and results of implementing an autonomous navigation system for an indoor mobile robot. The system developed is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot's environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and to cater for dynamic environments.
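
    The wavefront propagation planner can be sketched as a breadth-first expansion from the goal over a grid world model, followed by descent on the resulting cost map. The grid representation and 4-connectivity are assumptions made for illustration.

        from collections import deque

        def wavefront(grid, goal):
            """Breadth-first wavefront expansion from the goal over an occupancy grid.

            grid[r][c] is True for occupied cells; the returned cost map holds the
            wave number (steps to the goal) of each reachable free cell, else None.
            """
            rows, cols = len(grid), len(grid[0])
            cost = [[None] * cols for _ in range(rows)]
            cost[goal[0]][goal[1]] = 0
            queue = deque([goal])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not grid[nr][nc] and cost[nr][nc] is None):
                        cost[nr][nc] = cost[r][c] + 1
                        queue.append((nr, nc))
            return cost

        def descend(cost, start):
            """Extract a path by stepping to a strictly cheaper neighbour each time."""
            if cost[start[0]][start[1]] is None:
                return []                                 # start cannot reach the goal
            path, (r, c) = [start], start
            while cost[r][c] != 0:
                r, c = min(((r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < len(cost) and 0 <= c + dc < len(cost[0])
                            and cost[r + dr][c + dc] is not None),
                           key=lambda p: cost[p[0]][p[1]])
                path.append((r, c))
            return path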

  10. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS adds a real-time obstacle avoidance capability to the Belarus-132N mobile robot through cooperation with a Phantom-4 quadcopter. QMRS combines the GPS used by the mobile robot, vision and image-processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain and transmits the processed image to the terrestrial robot. The main objective of the paper is the full coordination between robot and quadcopter, achieved by designing efficient wireless communication using WiFi. In addition, it describes the method involving the use of the vision and image-processing systems of both robot and quadcopter, analyzing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing and obstacle avoidance, thanks to the cooperation among the different parts of the system.

  11. Navigation through unknown and dynamic open spaces using topological notions

    Science.gov (United States)

    Miguel-Tomé, Sergio

    2018-04-01

    Until now, most algorithms used for navigation have had the purpose of directing a system towards one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans develop their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by their sensors, and to interpret and express spatial relations. Furthermore, when a person is asked to perform a task in an environment, the task is communicated in terms of a category of goals, so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations, and how to accomplish this without supervision. In this article, a new architecture addressing these two problems is presented, called the topological qualitative navigation architecture. In previous works, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) was developed to establish and identify spatial relations. However, that heuristic only allows establishing one spatial relation with a specific object, whereas navigation requires a temporal sequence of goals involving different objects. The new architecture attains continuous generation of goals and resolves them using HTQS. Thus, the new architecture achieves autonomous navigation in dynamic or unknown open environments.

  12. 4D Dynamic Required Navigation Performance Final Report

    Science.gov (United States)

    Finkelsztein, Daniel M.; Sturdy, James L.; Alaverdi, Omeed; Hochwarth, Joachim K.

    2011-01-01

    New advanced four dimensional trajectory (4DT) procedures under consideration for the Next Generation Air Transportation System (NextGen) require an aircraft to precisely navigate relative to a moving reference such as another aircraft. Examples are Self-Separation for enroute operations and Interval Management for in-trail and merging operations. The current construct of Required Navigation Performance (RNP), defined for fixed-reference-frame navigation, is not sufficiently specified to be applicable to defining performance levels of such air-to-air procedures. An extension of RNP to air-to-air navigation would enable these advanced procedures to be implemented with a specified level of performance. The objective of this research effort was to propose new 4D Dynamic RNP constructs that account for the dynamic spatial and temporal nature of Interval Management and Self-Separation, develop mathematical models of the Dynamic RNP constructs, "Required Self-Separation Performance" and "Required Interval Management Performance," and to analyze the performance characteristics of these air-to-air procedures using the newly developed models. This final report summarizes the activities led by Raytheon, in collaboration with GE Aviation and SAIC, and presents the results from this research effort to expand the RNP concept to a dynamic 4D frame of reference.

  13. Autonomous Robot Navigation In Public Nature Park

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2005-01-01

    This extended abstract describes a project to make a robot travel autonomously across a public nature park. The challenge is to detect and follow the right path across junctions and open squares avoiding people and obstacles. The robot is equipped with a laser scanner, a (low accuracy) GPS, wheel...

  14. Embedded mobile farm robot for identification of diseased plants

    Science.gov (United States)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for the identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image-processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot's mechanical assembly, a camera and infrared sensors has been used. A Mini2440 microcontroller board has been used, on which an embedded Linux OS (operating system) is implemented.

  15. Robotic digital subtraction angiography systems within the hybrid operating room.

    Science.gov (United States)

    Murayama, Yuichi; Irie, Koreaki; Saguchi, Takayuki; Ishibashi, Toshihiro; Ebara, Masaki; Nagashima, Hiroyasu; Isoshima, Akira; Arakawa, Hideki; Takao, Hiroyuki; Ohashi, Hiroki; Joki, Tatsuhiro; Kato, Masataka; Tani, Satoshi; Ikeuchi, Satoshi; Abe, Toshiaki

    2011-05-01

    Fully equipped high-end digital subtraction angiography (DSA) within the operating room (OR) environment has emerged as a new trend in the fields of neurosurgery and vascular surgery. To describe initial clinical experience with a robotic DSA system in the hybrid OR. A newly designed robotic DSA system (Artis zeego; Siemens AG, Forchheim, Germany) was installed in the hybrid OR. The system consists of a multiaxis robotic C arm and surgical OR table. In addition to conventional neuroendovascular procedures, the system was used as an intraoperative imaging tool for various neurosurgical procedures such as aneurysm clipping and spine instrumentation. Five hundred one neurosurgical procedures were successfully conducted in the hybrid OR with the robotic DSA. During surgical procedures such as aneurysm clipping and arteriovenous fistula treatment, intraoperative 2-/3-dimensional angiography and C-arm-based computed tomographic images (DynaCT) were easily performed without moving the OR table. Newly developed virtual navigation software (syngo iGuide; Siemens AG) can be used in frameless navigation and in access to deep-seated intracranial lesions or needle placement. This newly developed robotic DSA system provides safe and precise treatment in the fields of endovascular treatment and neurosurgery.

  16. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    This paper introduces a new method to determine a person's pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot ...'s pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments ...

  17. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    Science.gov (United States)

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF-beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments. ©2013 Published by ISA. All rights reserved.

  18. Design of robust robotic proxemic behaviour

    NARCIS (Netherlands)

    Torta, E.; Cuijpers, R.H.; Juola, J.F.; Pol, van der D.; Mutlu, B.; Bartneck, C.; Ham, J.R.C.; Evers, V.; Kanda, T.

    2011-01-01

    Personal robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behaviour-based robotic architecture that, (1) allows the robot to navigate safely in a cluttered and

  19. Composite Configuration Interventional Therapy Robot for the Microwave Ablation of Liver Tumors

    Science.gov (United States)

    Cao, Ying-Yu; Xue, Long; Qi, Bo-Jin; Jiang, Li-Pei; Deng, Shuang-Cheng; Liang, Ping; Liu, Jia

    2017-11-01

    The existing interventional therapy robots for the microwave ablation of liver tumors have poor clinical applicability due to their large volume, low positioning speed and complex automatic navigation control. To solve the above problems, a composite-configuration interventional therapy robot with passive and active joints is developed. The composite configuration reduces the size of the robot while preserving a wide range of movement, and the robot can achieve rapid positioning with operational safety. The cumulative positioning error is eliminated and the control complexity is reduced by decoupling the active parts. Navigation algorithms for the robot are proposed based on the solution of the inverse kinematics and on geometric analysis. A simulated clinical test method is designed for the robot, and the functions of the robot and the navigation algorithms are verified by this test method. The mean navigation error is 1.488 mm, the maximum error is 2.056 mm, and the positioning time for the ablation needle is within 10 s. The experimental results show that the designed robot can meet the clinical requirements for the microwave ablation of liver tumors. The composite configuration proposed in the development of this interventional therapy robot provides a new idea for the structural design of medical robots.

  20. Medical robotics.

    Science.gov (United States)

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics as well as of other technologies, as surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role both for surgical training and planning and for safe rehabilitation in the first stage of recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it likely that, in the near future, their presence will increase dramatically, taking advantage of the generational change of the end users and the increasing demand for quality in health-care delivery and management.

  1. SIMULATION OF LANDMARK APPROACH FOR WALL FOLLOWING ALGORITHM ON FIRE-FIGHTING ROBOT USING V-REP

    Directory of Open Access Journals (Sweden)

    Sumarsih Condroayu Purbarani

    2015-08-01

    Autonomous mobile robots can be deployed to assist humans in their daily activities, and autonomous robots have also contributed significantly to human safety. An example of an autonomous robot in the human-safety sector is the fire-fighting robot, which is the main topic of this paper. As an autonomous robot, the fire-fighting robot needs robust navigation ability to execute a given task in the shortest time interval. The wall-following algorithm is one of several navigation algorithms that simplify this autonomous navigation problem. As a contribution, we propose two methods that can be combined to make the existing wall-following algorithm more robust. The combined wall-following algorithm is compared to the original wall-following algorithm. By doing so, we can determine which method has more impact on the robustness of the robot's navigation. Our goal is to see which method is more effective when combined with the wall-following algorithm.
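
    A minimal sketch of one step of a basic right-wall-following controller is given below; the target wall distance, gains and speed limits are illustrative assumptions and do not reproduce the two robustness methods proposed in the paper.

        def wall_follow_step(front_dist, right_dist, target=0.4, k=2.0,
                             front_stop=0.5, forward_speed=0.2):
            """One control step of a basic right-wall follower.

            Returns (linear_velocity, angular_velocity); positive angular velocity
            turns left. Distances are in metres and all constants are illustrative.
            """
            if front_dist < front_stop:                # wall ahead: turn left in place
                return 0.0, 0.8
            error = right_dist - target                # positive = drifting away from the wall
            angular = max(-1.0, min(1.0, -k * error))  # steer back towards the wall, clamped
            return forward_speed, angular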

  2. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation

    Directory of Open Access Journals (Sweden)

    Jisun Lee

    2015-07-01

    In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than, or at a level similar to, the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available.

  3. Dynamaid, an Anthropomorphic Robot for Research on Domestic Service Applications

    OpenAIRE

    Stückler, Jörg; Behnke, Sven

    2011-01-01

    Domestic tasks require three main skills from autonomous robots: robust navigation, object manipulation, and intuitive communication with the users. Most robot platforms, however, support only one or two of the above skills. In this paper we present Dynamaid, a robot platform for research on domestic service applications. For robust navigation, Dynamaid has a base with four individually steerable differential wheel pairs, which allow omnidirectional motion. For mobile manipulation, Dynamaid i...

  4. Fuzzy Logic Controller Design for Intelligent Robots

    Directory of Open Access Journals (Sweden)

    Ching-Han Chen

    2017-01-01

    This paper presents a fuzzy logic controller by which a robot can imitate biological behaviors such as avoiding obstacles or following walls. The proposed structure is implemented by integrating multiple ultrasonic sensors into a robot to collect data from a real-world environment. The decisions that govern the robot's behavior and autopilot navigation are driven by a field-programmable gate array (FPGA)-based fuzzy logic controller. The validity of the proposed controller was demonstrated by simulating three real-world scenarios to test the bionic behavior of a custom-built robot. The results revealed satisfactorily intelligent performance of the proposed fuzzy logic controller. The controller enabled the robot to demonstrate intelligent behaviors in complex environments. Furthermore, the robot's bionic functions satisfied its design objectives.
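
    To illustrate the flavour of such a fuzzy controller, the tiny two-rule sketch below steers away from the nearer of two ultrasonic readings using weighted-average defuzzification. The membership shape and rule base are illustrative assumptions; the paper's FPGA-hosted controller uses a richer rule set.

        def near(dist, limit=1.0):
            """Degree (0..1) to which an obstacle at 'dist' metres counts as near."""
            return max(0.0, min(1.0, 1.0 - dist / limit))

        def fuzzy_turn(left_dist, right_dist):
            """Two-rule fuzzy steering: turn away from the nearer side.

            Rule 1: IF left is near THEN turn right.  Rule 2: IF right is near THEN
            turn left.  The output in [-1, 1] (positive = turn left) comes from
            weighted-average defuzzification; memberships and rules are illustrative.
            """
            w_left, w_right = near(left_dist), near(right_dist)
            total = w_left + w_right
            return (w_right - w_left) / total if total > 0 else 0.0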

  5. Performance of the Improvements of the CAESAR Robot

    Directory of Open Access Journals (Sweden)

    Riaan Stopforth

    2010-09-01

    Robots are able to enter concealed and unstable environments inaccessible to rescuers. Previous Urban Search And Rescue (USAR) robots have experienced problems with malfunction of communication systems, traction systems, control and telemetry. These problems were assessed and addressed in developing a prototype robot called CAESAR, an acronym for Contractible Arms Elevating Search And Rescue. Problems encountered with previous USAR robots are discussed. The mechanical, sensory and communication systems used on CAESAR are briefly explained. Each system was tested separately in dedicated experiments. Results of the field tests and the robot's performance during a staged disaster scenario are discussed. The capabilities of CAESAR are examined in these tests to determine whether some of the previously experienced problems have been solved.

  6. Topological mapping and navigation in indoor environment with invisible barcode

    International Nuclear Information System (INIS)

    Huh, Jin Wook; Chung, Woong Sik; Chung, Wan Kyun

    2006-01-01

    This paper addresses the localization and navigation problem using invisible two-dimensional barcodes on the floor. Compared with other methods using natural/artificial landmarks, the proposed localization method has great advantages in cost and appearance, since the location of the robot is perfectly known from the barcode information once mapping is finished. We also propose a navigation algorithm that uses the topological structure. For the topological information, we define nodes and edges that are suitable for indoor navigation, especially for large areas with multiple rooms, many walls and many static obstacles. The proposed algorithm also has the advantage that errors occurring at each node are mutually independent and can be compensated exactly after some navigation using the barcodes. Simulations and experiments were performed to verify the algorithm in the barcode environment, and the results showed excellent performance. After mapping, it is also possible to solve the kidnapped-robot case and to generate paths using the topological information.
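
    Planning over such a topological map of nodes and edges can be sketched with Dijkstra's algorithm, as below; the edge-cost model (e.g., the corridor length between two barcode nodes) is an assumption made for illustration.

        import heapq

        def plan_route(edges, start, goal):
            """Shortest node sequence over a topological map (Dijkstra's algorithm).

            edges: dict node -> list of (neighbour, cost) pairs; using the corridor
            length between two barcode nodes as the cost is an assumed model.
            """
            frontier = [(0.0, start, [start])]
            visited = set()
            while frontier:
                cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path, cost
                if node in visited:
                    continue
                visited.add(node)
                for nxt, c in edges.get(node, []):
                    if nxt not in visited:
                        heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
            return None, float('inf')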

  7. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

    Humanoid robotics has made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book. ...

  8. Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method

    Directory of Open Access Journals (Sweden)

    Weihao Li

    2017-01-01

    In this paper, the potential field method has been used to navigate a mobile robot with three omnidirectional wheels and to avoid obstacles. The potential field method is used in a way that overcomes the local-minima problem and the problem of goals non-reachable with obstacles nearby (GNRON). For further consideration, model predictive control (MPC) has been used to incorporate motion constraints and make the velocity more realistic and flexible. The proposed method is employed based on the kinematic and dynamic models of the mobile robot given in this paper. To show the performance of the proposed control scheme, simulation studies have been carried out covering the motion of the mobile robot in a specific workplace.
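
    As an illustration of the kinematic model such a controller relies on, the sketch below computes wheel speeds for a three-omnidirectional-wheel base from a desired body twist. The wheel placement angles, body radius and wheel radius are assumed values, not those of the robot in the paper.

        import numpy as np

        def omni3_wheel_speeds(vx, vy, omega, wheel_angles_deg=(90, 210, 330),
                               robot_radius=0.15, wheel_radius=0.03):
            """Inverse kinematics of a three-omnidirectional-wheel base.

            Returns wheel angular velocities (rad/s) realising the body twist
            (vx, vy, omega) in the robot frame. The wheel placement angles and the
            two radii are assumed values, not the paper's robot parameters.
            """
            speeds = []
            for a in np.radians(wheel_angles_deg):
                # rolling speed = body velocity projected on the wheel drive direction
                # plus the contribution of the body rotation
                v = -np.sin(a) * vx + np.cos(a) * vy + robot_radius * omega
                speeds.append(v / wheel_radius)
            return speeds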

  9. Robotic security vehicle for exterior environments

    International Nuclear Information System (INIS)

    Klarer, P.R.; Workhoven, R.M.

    1988-01-01

    This paper describes a current effort at Sandia National Labs to develop an outdoor robotic vehicle capable of performing limited security functions autonomously in a structured environment. The present stage of development entails application of algorithms originally developed for the SIR vehicle to a testbed vehicle more appropriate to an outdoor environment. The current effort will culminate in a full scale demonstration of autonomous navigation capabilities on routine patrol and teleoperation by a human operator for alarm assessment and response. Various schemes for implementation of the robot system are discussed, as are plans for further development of the system

  10. Obstacle Avoidance of a Mobile Robot with Hierarchical Structure

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chan Gyu [Yeungnam College of Science and Technolgy, Taegu (Korea)

    2001-06-01

    This paper proposes a new hierarchical fuzzy-neural network algorithm for the navigation of a mobile robot within an unknown dynamic environment. The proposed navigation algorithm uses the learning ability of neural networks and the capability of fuzzy theory to control highly nonlinear systems. The proposed algorithm uses a fuzzy algorithm for goal approach and a fuzzy-neural network for effective collision avoidance. Computer simulation results for a mobile robot equipped with ultrasonic range sensors show that the suggested navigation algorithm is very effective for escaping from environments with stationary and moving obstacles. (author). 11 refs., 14 figs., 2 tabs.

  11. New Design of Mobile Robot Path Planning with Randomly Moving Obstacles

    Directory of Open Access Journals (Sweden)

    T. A. Salih

    2013-05-01

    The navigation of a mobile robot in an unknown environment has always been a very challenging task. In order to achieve safe and autonomous navigation, the mobile robot needs to sense the surrounding environment and plan a collision-free path. This paper focuses on designing and implementing a mobile robot that has the ability to navigate smoothly in an unknown environment, avoiding collisions without having to stop in front of obstacles, detecting leakage of combustible gases, and automatically transmitting a message with the detection results to the civil defense unit over the Internet by e-mail. The design uses an artificial neural network (ANN) implemented on a new technology, the Field Programmable Analog Array (FPAA), to control the motion of the robot. The robot with the proposed controller is tested and completes the required objectives successfully.

  12. Experiments in teleoperator and autonomous control of space robotic vehicles

    Science.gov (United States)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  13. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    Science.gov (United States)

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth data, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of location recognition accuracy and computing speed.
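
    As a rough illustration of the 2.5D representation described above (not the authors' code), the Python sketch below projects 3D points onto a horizontal grid and keeps the maximum height seen in each cell; the cell size, map extent and synthetic point cloud are made-up values.

      import numpy as np

      def build_elevation_map(points, cell_size=0.05, extent=5.0):
          """Project 3D points (x, y, z) onto a 2.5D elevation grid.

          Each cell stores the maximum height observed inside it, attaching
          obstacle-height information to an ordinary 2D grid map.
          points : (N, 3) array in the map frame [m]
          """
          n_cells = int(2 * extent / cell_size)
          elevation = np.full((n_cells, n_cells), -np.inf)
          ix = ((points[:, 0] + extent) / cell_size).astype(int)
          iy = ((points[:, 1] + extent) / cell_size).astype(int)
          valid = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)
          for i, j, z in zip(ix[valid], iy[valid], points[valid, 2]):
              elevation[i, j] = max(elevation[i, j], z)
          return elevation

      # Synthetic cloud: a flat floor plus a 0.4 m tall box one metre ahead.
      floor = np.column_stack([np.random.uniform(-2, 2, 500),
                               np.random.uniform(-2, 2, 500),
                               np.zeros(500)])
      box = np.column_stack([np.random.uniform(0.9, 1.1, 100),
                             np.random.uniform(-0.1, 0.1, 100),
                             np.random.uniform(0.0, 0.4, 100)])
      emap = build_elevation_map(np.vstack([floor, box]))
      print(emap.max())   # roughly 0.4 m, the height of the box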

  14. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    Directory of Open Access Journals (Sweden)

    Tae Hyeon Nam

    2017-11-01

    Full Text Available Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth data, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of location recognition accuracy and computing speed.

  15. Evolving earth-based and in-situ satellite network architectures for Mars communications and navigation support

    Science.gov (United States)

    Hastrup, Rolf; Weinberg, Aaron; McOmber, Robert

    1991-09-01

    Results of on-going studies to develop navigation/telecommunications network concepts to support future robotic and human missions to Mars are presented. The performance and connectivity improvements provided by the relay network will permit use of simpler, lower performance, and less costly telecom subsystems for the in-situ mission exploration elements. Orbiting relay satellites can serve as effective navigation aids by supporting earth-based tracking as well as providing Mars-centered radiometric data for mission elements approaching, in orbit, or on the surface of Mars. The relay satellite orbits may be selected to optimize navigation aid support and communication coverage for specific mission sets.

  16. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
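
    The abstract does not give the map update rule, so the Python sketch below uses a standard log-odds occupancy grid as a stand-in: each robot accumulates evidence from its stereo range data and the grids are merged additively, which is one simple way to realize the conservative shared map described above. The grid size, update constants and cell indices are assumptions.

      import numpy as np

      class OccupancyGrid:
          """Minimal log-odds occupancy grid shared between cooperating robots."""

          def __init__(self, size=200, l_occ=0.85, l_free=-0.4):
              self.logodds = np.zeros((size, size))
              self.l_occ, self.l_free = l_occ, l_free

          def update(self, free_cells, hit_cells):
              """free_cells / hit_cells: iterables of (i, j) grid indices."""
              for i, j in free_cells:
                  self.logodds[i, j] += self.l_free
              for i, j in hit_cells:
                  self.logodds[i, j] += self.l_occ

          def probability(self):
              return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

          def merge(self, other):
              """Conservative merge of another robot's map: add its log-odds evidence."""
              self.logodds += other.logodds

      grid_a, grid_b = OccupancyGrid(), OccupancyGrid()
      grid_a.update(free_cells=[(10, k) for k in range(10, 20)], hit_cells=[(10, 20)])
      grid_b.update(free_cells=[(10, k) for k in range(15, 25)], hit_cells=[(10, 25)])
      grid_a.merge(grid_b)
      print(grid_a.probability()[10, 20], grid_a.probability()[10, 15])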

  17. On Estimation Of The Orientation Of Mobile Robots Using Turning Functions And SONAR Information

    Directory of Open Access Journals (Sweden)

    Dorel AIORDACHIOAIE

    2003-12-01

    Full Text Available SONAR systems are widely used by some artificial objects, e.g. robots, and by animals, e.g. bats, for navigation and pattern recognition. The objective of this paper is to present a solution on the estimation of the orientation in the environment of mobile robots, in the context of navigation, using the turning function approach. The results are shown to be accurate and can be used further in the design of navigation strategies of mobile robots.

  18. Toward perception-based navigation using EgoSphere

    Science.gov (United States)

    Kawamura, Kazuhiko; Peters, R. Alan; Wilkes, Don M.; Koku, Ahmet B.; Sekman, Ali

    2002-02-01

    A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low data rate communications constraints. An example of such an exchange is given.
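
    As a toy illustration of the SES-to-LES comparison (not the authors' matching procedure), the Python sketch below scores how many expected landmark directions from an LES are found near the sensed directions currently on the SES; the landmark names, the 15-degree tolerance and the decision that a high score means "near the via-point" are assumptions.

      import math

      def egosphere_match(les, ses, tolerance=math.radians(15)):
          """Score how well sensed landmark directions match the expected ones.

          les : {landmark_name: expected_azimuth} built from the map at a via-point
          ses : {landmark_name: sensed_azimuth} currently written on the Sensory EgoSphere
          Returns (match_score in [0, 1], mean angular error of matched landmarks).
          """
          matched, errors = 0, []
          for name, expected in les.items():
              if name not in ses:
                  continue
              err = math.atan2(math.sin(ses[name] - expected), math.cos(ses[name] - expected))
              if abs(err) <= tolerance:
                  matched += 1
                  errors.append(abs(err))
          score = matched / len(les) if les else 0.0
          mean_err = sum(errors) / len(errors) if errors else float('nan')
          return score, mean_err

      les = {'door': 0.0, 'pillar': math.radians(90), 'window': math.radians(-120)}
      ses = {'door': math.radians(5), 'pillar': math.radians(97), 'plant': math.radians(30)}
      print(egosphere_match(les, ses))   # a high score indicates the robot is near the via-point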

  19. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    Directory of Open Access Journals (Sweden)

    Jesus M. de la Cruz

    2012-02-01

    Full Text Available This paper presents a state of the art of estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of delayed and OOS measurements provided by multiple sensors. Besides, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.

  20. Robotics for waste storage inspection: A user's perspective

    International Nuclear Information System (INIS)

    Hazen, F.B.

    1994-01-01

    Self-navigating robotic vehicles are now commercially available, and the technology supporting other important system components has also matured. Higher reliability and the obtainability of system support now make it practical to consider robotics as a way of addressing the growing operational requirement for the periodic inspection and maintenance of radioactive, hazardous, and mixed waste inventories. This paper describes preparations for the first field deployment of an autonomous container inspection robot at a Department of Energy (DOE) site. The Stored Waste Autonomous Mobile Inspector (SWAMI) is presently being completed by engineers at the Savannah River Technology Center (SRTC). It is a modified version of a commercially available robot. It has been outfitted with sensor suites and cognition that allow it to perform inspections of drum inventories and their storage facilities

  1. Cyclone: A laser scanner for mobile robot navigation

    Science.gov (United States)

    Singh, Sanjiv; West, Jay

    1991-09-01

    Researchers at Carnegie Mellon's Field Robotics Center have designed and implemented a scanning laser rangefinder. The device uses a commercially available time-of-flight ranging instrument that is capable of making up to 7200 measurements per second. The laser beam is reflected by a rotating mirror, producing up to a 360 degree view. Mounted on a robot vehicle, the scanner can be used to detect obstacles in the vehicle's path or to locate the robot on a map. This report discusses the motivation, design, and some applications of the scanner.
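
    A trivial Python sketch of how such a scan is typically consumed (not part of the report): each time-of-flight reading pairs a mirror angle with a range, and converting the pairs to Cartesian points yields obstacle locations in the sensor frame. The angular step of one reading per 1/7200 of a revolution and the sample data are assumptions.

      import math

      def scan_to_points(ranges, start_angle=0.0, angular_step=2 * math.pi / 7200):
          """Convert one sweep of rangefinder readings into 2D points in the sensor frame."""
          points = []
          for k, r in enumerate(ranges):
              a = start_angle + k * angular_step
              points.append((r * math.cos(a), r * math.sin(a)))
          return points

      print(scan_to_points([1.0, 1.2, 2.0]))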

  2. Hydraulically actuated hexapod robots design, implementation and control

    CERN Document Server

    Nonami, Kenzo; Irawan, Addie; Daud, Mohd Razali

    2014-01-01

    Legged robots are a promising locomotion system, capable of performing tasks that conventional vehicles cannot. Even more exciting is the fact that this is a rapidly developing field of study for researchers from a variety of disciplines. However, only a few books have been published on the subject of multi-legged robots. The main objective of this book is to describe some of the major control issues concerning walking robots that the authors have faced over the past 10 years. A second objective is to focus especially on the locomotion of very large hydraulically driven hexapod robots weighing more than 2,000 kg, making this the first specialized book on this topic. The 10 chapters of the book touch on diverse relevant topics such as design aspects, implementation issues, modeling for control, navigation and control, force and impedance control-based walking, fully autonomous walking, walking and working tasks of hexapod robots, and the future of walking robots. The construction machines of the future will very likel...

  3. Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor

    Science.gov (United States)

    Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang

    2017-01-01

    In order to improve the performance of the magnetic navigation system used by a substation inspection robot, the kinematic characteristics are analyzed based on a simplified magnetic guiding system model, and a simulation is then executed to verify the soundness of the whole analysis procedure. Finally, some suggestions are drawn that will help guide the design of the inspection robot system in the future.

  4. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop a concept for a new robot system, i.e., a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with the two essential technologies developed for the system. One is an autonomous navigation program with a human intervention function, which is indispensable for human-robot collaboration. The other is a position estimation method using a gyroscope and TV images to achieve much higher estimation accuracy for safe navigation. Feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)

  5. Tactile object exploration using cursor navigation sensors

    DEFF Research Database (Denmark)

    Kraft, Dirk; Bierbaum, Alexander; Kjaergaard, Morten

    2009-01-01

    In robotic applications tactile sensor systems serve the purpose of localizing a contact point and measuring contact forces. We have investigated the applicability of a sensorial device commonly used in cursor navigation technology for tactile sensing in robotics. We show the potential of this sensor for active haptic exploration. More specifically, we present experiments and results which demonstrate the extraction of relevant object properties such as local shape, weight and elasticity using this technology. Besides its low price due to mass production and its modularity, an interesting aspect of this sensor is that, in addition to localization of contact points and measurement of the contact normal force, shear forces can also be measured. This is relevant for many applications such as surface normal estimation and weight measurements. Scalable tactile sensor arrays have been developed...

  6. Attention-based navigation in mobile robots using a reconfigurable sensor

    NARCIS (Netherlands)

    Maris, M.

    2001-01-01

    In this paper, a method for visual attentional selection in mobile robots is proposed, based on amplification of the selected stimulus. Attention processing is performed on the vision sensor, which is integrated on a silicon chip and consists of a contrast sensitive retina with the ability to change

  7. Expected Navigation Flight Performance for the Magnetospheric Multiscale (MMS) Mission

    Science.gov (United States)

    Olson, Corwin; Wright, Cinnamon; Long, Anne

    2012-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four formation-flying spacecraft placed in highly eccentric elliptical orbits about the Earth. The primary scientific mission objective is to study magnetic reconnection within the Earth's magnetosphere. The baseline navigation concept is the independent estimation of each spacecraft state using GPS pseudorange measurements (referenced to an onboard Ultra Stable Oscillator) and accelerometer measurements during maneuvers. State estimation for the MMS spacecraft is performed onboard each vehicle using the Goddard Enhanced Onboard Navigation System, which is embedded in the Navigator GPS receiver. This paper describes the latest efforts to characterize expected navigation flight performance using upgraded simulation models derived from recent analyses.

  8. Underground mining robot: a CSIR project

    CSIR Research Space (South Africa)

    Green, JJ

    2012-11-01

    Full Text Available The Council for Scientific and Industrial Research (CSIR) in South Africa is currently developing a robot for the inspection of the ceiling (hanging-wall) in an underground gold mine. The robot autonomously navigates the 30 meter long by 3 meter...

  9. Software Strategy for Robotic Transperineal Prostate Therapy in Closed-Bore MRI

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; Csoma, Csaba; DiMaio, Simon P.; Gobbi, David G.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2009-01-01

    A software strategy to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and open-source navigation software are connected to one another via Ethernet to exchange commands, coordinates, and images. Six states of the system called “workphases” are defined based on the clinical scenario to synchronize the behaviors of all components. The wizard-style user interface allows the clinical workflow to be followed easily. On top of this framework, the software provides features for intuitive needle guidance: interactive target planning; 3D image visualization with the current needle position; and treatment monitoring through real-time MRI. These features are supported by calibration of robot and image coordinates through fiducial-based registration. The performance test shows that the registration error of the system was 2.6 mm in the prostate area, and that the real-time 2D image was displayed 1.7 s after the completion of image acquisition. PMID:18982666

  10. An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry

    Science.gov (United States)

    Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro

    This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning only by dead-reckoning. The 3D laser scanner has a wide field of view and uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive by utilizing a particle filter on a 2D grid map generated by projecting 3D points on to the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
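
    The platform's gyro-assisted odometry is not spelled out in the record; the Python sketch below shows the usual idea under simple assumptions (the gyro supplies the heading change, the wheel encoders supply the travelled distance), with all numerical values invented for the example.

      import math

      def gyro_assisted_odometry(pose, d_left, d_right, gyro_yaw_rate, dt):
          """One dead-reckoning step that trusts the gyro for heading change.

          Wheel encoders give the travelled distance; the z-axis gyro gives the
          yaw rate, which usually drifts far less than encoder-derived heading
          (wheel slip, uneven ground).
          pose : (x, y, heading) [m, m, rad]; gyro_yaw_rate in rad/s.
          """
          x, y, th = pose
          ds = 0.5 * (d_left + d_right)      # translation from the encoders
          dth = gyro_yaw_rate * dt           # rotation from the gyro
          x += ds * math.cos(th + 0.5 * dth)
          y += ds * math.sin(th + 0.5 * dth)
          return (x, y, th + dth)

      pose = (0.0, 0.0, 0.0)
      for _ in range(100):                   # a gentle 10 s arc
          pose = gyro_assisted_odometry(pose, 0.010, 0.012, 0.02, 0.1)
      print(pose)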

  11. Achievement report for fiscal 2000 on research and development of human cooperating and coexisting robot system. Research and development of rationalization in oil refining system; 2000 nendo ningen kyocho kyozongata robot system kenkyu kaihatsu seika hokokusho. Sekiyu seisei system gorika kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    The aim is to develop a human cooperating and coexisting robot system that can move around freely during operation and shutdown of an oil refining plant to perform various kinds of work. This paper describes the achievements in fiscal 2000. With regard to the navigation and maintenance work functions, the robot guidance system was designed, together with the concept for performing work once the robot has reached the work site. The specifications required for the robot-supporting agent were clarified, and its constituent modules were designed to exchange information with the robot. Specifications were compiled for a portable remote operation device intended for operating different vehicles. Investigations were carried out on such protection technologies as interference checking and shock-absorbing materials to protect the robot platform. A method was developed to acquire posture and motion patterns of a human demonstrator, using only the upper half of the body, from images captured by a head-mounted camera. Discussions were given on the specifications, systems and image processing algorithms required for vision-navigated autonomous walking, whose practicability was verified. Autonomous walking by means of map-based guidance, and hand operation technologies, were also discussed. (NEDO)

  12. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a 'differentially' driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and a PID controller to control the speed of the geared DC motor.
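
    The abstract mentions a PD controller for wall following but gives no gains; the Python sketch below is a generic PD wall-follower on one ultrasonic range reading, with the 0.4 m set distance, the gains and the sample readings all assumed for illustration.

      def wall_following_pd(side_range, prev_error, desired=0.4, kp=2.0, kd=0.2, dt=0.05):
          """PD steering command that holds a fixed distance to the followed wall.

          side_range : current ultrasonic range to the wall [m]
          prev_error : error from the previous control cycle [m]
          Returns (steering_command, current_error); the sign convention of the
          command depends on which side of the robot the wall is being followed.
          """
          error = desired - side_range
          derivative = (error - prev_error) / dt
          return kp * error + kd * derivative, error

      cmd, err = 0.0, 0.0
      for rng in [0.50, 0.47, 0.44, 0.42, 0.41, 0.40]:   # robot converging on 0.4 m
          cmd, err = wall_following_pd(rng, err)
          print(round(cmd, 3))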

  13. Evaluation of a completely robotized neurosurgical operating microscope.

    Science.gov (United States)

    Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf

    2013-01-01

    Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography-scanning module. The robotized microscope was tested on model systems. It could be freely positioned, without forcing the surgeon to take the hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded in 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within the 3-dimensional space. This constituted a second loop of navigation independent from conventional infrared reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.

  14. Eliminating drift of the head gesture reference to enhance Google Glass-based control of an NAO humanoid robot

    Directory of Open Access Journals (Sweden)

    Xiaoqian Mao

    2017-03-01

    Full Text Available This article presents a strategy for hands-free control of an NAO humanoid robot via head gestures detected by Google Glass-based multi-sensor fusion. First, we introduce a Google Glass-based robot system by integrating the Google Glass and the NAO humanoid robot, which is able to send robot commands through Wi-Fi communications between the Google Glass and the robot. Second, we detect the operator's head gestures by processing data from multiple sensors including accelerometers, geomagnetic sensors and gyroscopes. Next, we use a complementary filter to eliminate drift of the head gesture reference, which greatly improves the control performance; this corresponds to the high-pass filter component acting on the control signal. Finally, we conduct obstacle avoidance experiments while navigating the robot to validate the effectiveness and reliability of this system. The experimental results show that the robot is smoothly navigated from its initial position to its destination with obstacle avoidance via the Google Glass. This hands-free control system can benefit those with paralysed limbs.
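
    The article's filter parameters are not given in the record; the one-line complementary filter below (Python, with an assumed mixing constant of 0.98 and a made-up gyro bias) illustrates the principle it describes: the integrated gyro path is effectively high-passed and the drift-free absolute reference is low-passed, so the fused head angle stays bounded instead of drifting.

      def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
          """Fuse an integrated gyro rate with a drift-free but noisy absolute angle."""
          return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

      angle = 0.0
      for _ in range(200):        # stationary head, gyro with a 0.01 rad/s bias
          angle = complementary_filter(angle, gyro_rate=0.01, accel_angle=0.0, dt=0.02)
      print(round(angle, 4))      # stays bounded near 0.01 rad instead of drifting away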

  15. Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance.

    Science.gov (United States)

    Zihni, Ahmed; Gerull, William D; Cavallo, Jaime A; Ge, Tianjia; Ray, Shuddhadeb; Chiu, Jason; Brunt, L Michael; Awad, Michael M

    2018-03-01

    Robotic platforms have the potential advantage of providing additional dexterity and precision to surgeons while performing complex laparoscopic tasks, especially for those in training. Few quantitative evaluations of surgical task performance comparing laparoscopic and robotic platforms among surgeons of varying experience levels have been done. We compared measures of quality and efficiency of Fundamentals of Laparoscopic Surgery (FLS) task performance on these platforms in novices and experienced laparoscopic and robotic surgeons. Fourteen novices, 12 expert laparoscopic surgeons (>100 laparoscopic procedures performed, no robotics experience), and five expert robotic surgeons (>25 robotic procedures performed) performed three FLS tasks on both laparoscopic and robotic platforms: peg transfer (PT), pattern cutting (PC), and intracorporeal suturing. All tasks were repeated three times by each subject on each platform in a randomized order. Mean completion times and mean errors per trial (EPT) were calculated for each task on both platforms. Results were compared using Student's t-test (P < 0.05). Task performance was slower on the robotic platform compared with laparoscopy. In comparisons of expert laparoscopists performing tasks on the laparoscopic platform and expert robotic surgeons performing tasks on the robotic platform, expert robotic surgeons demonstrated fewer errors during the PC task (P = 0.009). Robotic assistance provided a reduction in errors at all experience levels for some laparoscopic tasks, but no benefit in the speed of task performance. Robotic assistance may provide some benefit in precision of surgical task performance. Copyright © 2017 Elsevier Inc. All rights reserved.
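
    The comparison itself is a straightforward two-sample test; the Python sketch below shows the kind of calculation involved using entirely fabricated completion times (the study's actual data are not reproduced here).

      import numpy as np
      from scipy import stats

      # Hypothetical completion times (seconds) for one FLS task on each platform.
      laparoscopic = np.array([48.2, 52.1, 45.9, 50.3, 47.8, 53.0])
      robotic = np.array([61.5, 58.9, 64.2, 60.1, 63.3, 59.7])

      t_stat, p_value = stats.ttest_ind(laparoscopic, robotic)
      print(f"lap mean {laparoscopic.mean():.1f} s, robot mean {robotic.mean():.1f} s, "
            f"p = {p_value:.4f}")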

  16. Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions.

    Science.gov (United States)

    Shahin, Osama; Beširević, Armin; Kleemann, Markus; Schlaefer, Alexander

    2014-05-01

    Image-guided navigation aims to provide better orientation and accuracy in laparoscopic interventions. However, the ability of the navigation system to reflect anatomical changes and maintain high accuracy during the procedure is crucial. This is particularly challenging in soft organs such as the liver, where surgical manipulation causes significant tumor movements. We propose a fast approach to obtain an accurate estimation of the tumor position throughout the procedure. Initially, a three-dimensional (3D) ultrasound image is reconstructed and the tumor is segmented. During surgery, the position of the tumor is updated based on newly acquired tracked ultrasound images. The initial segmentation of the tumor is used to automatically detect the tumor and update its position in the navigation system. Two experiments were conducted. First, a controlled phantom motion using a robot was performed to validate the tracking accuracy. Second, a needle navigation scenario based on pseudotumors injected into ex vivo porcine liver was studied. In the robot-based evaluation, the approach estimated the target location with an accuracy of 0.4 ± 0.3 mm. The mean navigation error in the needle experiment was 1.2 ± 0.6 mm, and the algorithm compensated for tumor shifts up to 38 mm in an average time of 1 s. We demonstrated a navigation approach based on tracked laparoscopic ultrasound (LUS), and focused on the neighborhood of the tumor. Our experimental results indicate that this approach can be used to quickly and accurately compensate for tumor movements caused by surgical manipulation during laparoscopic interventions. The proposed approach has the advantage of being based on the routinely used LUS; however, it upgrades its functionality to estimate the tumor position in 3D. Hence, the approach is repeatable throughout surgery, and enables high navigation accuracy to be maintained.

  17. An intelligent inspection and survey robot. Volume 1

    International Nuclear Information System (INIS)

    1995-01-01

    ARIES #1 (Autonomous Robotic Inspection Experimental System) has been developed for the Department of Energy to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum. A new version of the Cybermotion series of mobile robots is the base mobile vehicle for ARIES. The new Model K3A consists of an improved and enhanced mobile platform and a new turret that will permit turning around in a three-foot aisle. Advanced sonar and lidar systems were added to improve navigation in the narrow drum aisles. Onboard computer enhancements include a VMEbus computer system running the VxWorks real-time operating system. A graphical offboard supervisory UNIX workstation is used for high-level planning, control, monitoring, and reporting. A camera positioning system (CPS) includes primitive instructions for the robot to use in referencing and positioning the payload. The CPS retracts to a more compact position when traveling in the open warehouse. During inspection, the CPS extends up to deploy inspection packages at different heights on the four-drum stacks of 55-, 85-, and 110-gallon drums. The vision inspection module performs a visual inspection of the waste drums. This system will locate and identify each drum, locate any unique visual features, characterize relevant surface features of interest and update a database containing the inspection data.

  18. An intelligent inspection and survey robot. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-15

    ARIES #1 (Autonomous Robotic Inspection Experimental System) has been developed for the Department of Energy to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. The drums are typically stacked four high and arranged in rows with three-foot aisle widths. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum. A new version of the Cybermotion series of mobile robots is the base mobile vehicle for ARIES. The new Model K3A consists of an improved and enhanced mobile platform and a new turret that will permit turning around in a three-foot aisle. Advanced sonar and lidar systems were added to improve navigation in the narrow drum aisles. Onboard computer enhancements include a VMEbus computer system running the VxWorks real-time operating system. A graphical offboard supervisory UNIX workstation is used for high-level planning, control, monitoring, and reporting. A camera positioning system (CPS) includes primitive instructions for the robot to use in referencing and positioning the payload. The CPS retracts to a more compact position when traveling in the open warehouse. During inspection, the CPS extends up to deploy inspection packages at different heights on the four-drum stacks of 55-, 85-, and 110-gallon drums. The vision inspection module performs a visual inspection of the waste drums. This system will locate and identify each drum, locate any unique visual features, characterize relevant surface features of interest and update a database containing the inspection data.

  19. Tandem-robot assisted laparoscopic radical prostatectomy to improve the neurovascular bundle visualization: a feasibility study.

    Science.gov (United States)

    Han, Misop; Kim, Chunwoo; Mozer, Pierre; Schäfer, Felix; Badaan, Shadie; Vigaru, Bogdan; Tseng, Kenneth; Petrisor, Doru; Trock, Bruce; Stoianovici, Dan

    2011-02-01

    To examine the feasibility of image-guided navigation using transrectal ultrasound (TRUS) to visualize the neurovascular bundle (NVB) during robot-assisted laparoscopic radical prostatectomy (RALP). The preservation of the NVB during radical prostatectomy improves the postoperative recovery of sexual potency. The accompanying blood vessels in the NVB can serve as a macroscopic landmark to localize the microscopic cavernous nerves in the NVB. A novel, robotic transrectal ultrasound probe manipulator (TRUS Robot) and three-dimensional (3-D) reconstruction software were developed and used concurrently with the daVinci surgical robot (Intuitive Surgical, Inc., Sunnyvale, CA) in a tandem-robot assisted laparoscopic radical prostatectomy (T-RALP). After appropriate approval and informed consent were obtained, 3 subjects underwent T-RALP without associated complications. The TRUS Robot allowed a steady handling and remote manipulation of the TRUS probe during T-RALP. It also tracked the TRUS probe position accurately and allowed 3-D image reconstruction of the prostate and surrounding structures. Image navigation was performed by observing the tips of the daVinci surgical instruments in the live TRUS image. Blood vessels in the NVB were visualized using Doppler ultrasound. Intraoperative 3-D image-guided navigation in T-RALP is feasible. The use of TRUS during radical prostatectomy can potentially improve the visualization and preservation of the NVB. Further studies are needed to assess the clinical benefit of T-RALP. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    Science.gov (United States)

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.

  1. Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates

    Science.gov (United States)

    Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.

    2016-03-01

    Several groups have developed brain-machine-interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
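
    The paper's decoder is described only as linear; as a stand-in, the Python sketch below fits a ridge-regularised least-squares map from binned firing rates to the two wheelchair velocities on synthetic data, which captures the training-during-passive-navigation idea without reproducing the study's actual decoder or data.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic "training" data: binned firing rates X (samples x neurons) and the
      # wheelchair's translational / rotational velocity Y recorded during passive driving.
      true_w = rng.normal(size=(64, 2))
      X = rng.poisson(5.0, size=(2000, 64)).astype(float)
      Y = X @ true_w + rng.normal(scale=0.5, size=(2000, 2))

      def fit_linear_decoder(X, Y, ridge=1.0):
          """Ridge-regularised least-squares map from neural activity to kinematics."""
          Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # add a bias column
          A = Xb.T @ Xb + ridge * np.eye(Xb.shape[1])
          return np.linalg.solve(A, Xb.T @ Y)

      def decode(W, rates):
          return np.append(rates, 1.0) @ W                  # (v, omega) command

      W = fit_linear_decoder(X, Y)
      print(decode(W, X[0]), Y[0])                          # decoded vs. recorded kinematics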

  2. Modelling and testing proxemic behaviour for humanoid robots

    NARCIS (Netherlands)

    Torta, E.; Cuijpers, R.H.; Juola, J.F.; Pol, van der D.

    2012-01-01

    Humanoid robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behavior-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and dynamically

  3. Mobile robots in research and development programs at the Savannah River Site

    International Nuclear Information System (INIS)

    Martin, T.P.; Byrd, J.S.; Fisher, J.J.

    1987-01-01

    Savannah River Laboratory (SRL) is developing mobile robots for deployment in nuclear applications at the Savannah River Plant (SRP). Teleoperated mobile vehicles have been successfully used for several onsite applications. Development work using two research vehicles is underway to demonstrate semi-autonomous intelligent expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes of these vehicles is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being developed at SRL to allow vehicles to autonomously navigate and perform tasks in known environments, without the need for large computer systems. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation functions, and to analyze sensory information

  4. Mobile robots in research and development programs at the Savannah River site

    International Nuclear Information System (INIS)

    Martin, T.P.; Byrd, J.S.; Fisher, J.J.

    1987-01-01

    Mobile robots for deployment in nuclear applications at the Savannah River Plant (SRP) have been developed. Teleoperated mobile vehicles have been successfully used for several onsite applications. Development work using two research vehicles is underway to demonstrate semi-autonomous intelligent expert robot system operation in process areas. A description of the mechanical equipment, control systems, and operating modes of these vehicles is presented, including the integration of onboard sensors. A control hierarchy that uses modest computational methods is being developed at SRL to allow vehicles to autonomously navigate and perform tasks in known environments, without the need for large computer systems. Knowledge-based expert systems are being evaluated to simplify operator control, to assist in navigation functions, and to analyze sensory information

  5. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  6. Towards an Open Software Platform for Field Robots in Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Kjeld Jensen

    2014-06-01

    Full Text Available Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored to field robots in precision agriculture will significantly decrease development time and resources required to perform experiments due to efficient reuse of existing work across projects and robot platforms. In this work we present the FroboMind software platform and evaluate the performance when applied to precision agriculture tasks.

  7. Exploring child-robot engagement in a collaborative task

    NARCIS (Netherlands)

    Zaga, Cristina; Truong, Khiet Phuong; Lohse, M.; Evers, Vanessa

    Imagine a room with toys scattered on the floor and a robot that is motivating a small group of children to tidy up. This scenario poses real-world challenges for the robot, e.g., the robot needs to navigate autonomously in a cluttered environment, it needs to classify and grasp objects, and it

  8. Prototype of a Wheeled Fire-Extinguishing Robot Using the Wall Follower Navigation Technique

    Directory of Open Access Journals (Sweden)

    Ery Safrianti

    2012-10-01

    Full Text Available The fire-fighting robot serves to detect and extinguish fires and is controlled automatically by an ATMEGA8535 microcontroller. The robot carries several sensors: five Parallax PING ultrasonic sensors for navigation, a UVTron flame sensor with its detection driver, and an L298 DC motor driver with two DC servo motors. The robot was developed from a previously studied prototype with the addition, on the hardware side, of a sound-activation unit and two line detectors. The robot activates when it receives input from the sound-activation unit and then starts searching for the fire using a "search the wall" (wall-follower) navigation technique. The line sensors are used to detect doors, the home position, and the circle around the fire area. To extinguish the fire, the robot uses a fan driven by a BD139 transistor circuit. The overall test results show that the robot can detect the presence of fire in each room, find the fire, and extinguish it within one minute.

  9. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound together; intelligent robots embody system integration by using intelligent systems. Intelligent systems are to robots what cells are to a body, and the two technologies have progressed in step. Leveraging robotics and intelligent systems, applications range boundlessly from daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  10. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  11. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, M.; Yasunaga, T.; Konishi, K. [Kyushu University, Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Fukuoka (Japan); Tanoue, K.; Ieiri, S. [Kyushu University Hospital, Department of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Kishi, K. [Hitachi Ltd, Mechanical Engineering Research Laboratory, Hitachinaka-Shi, Ibaraki (Japan); Nakamoto, H. [Hitachi Medical Corporation, Application Development Office, Kashiwa-Shi, Chiba (Japan); Ikeda, D. [Mizuho Ikakogyo Co. Ltd, Tokyo (Japan); Sakuma, I. [The University of Tokyo, Graduate School of Engineering, Bunkyo-Ku, Tokyo (Japan); Fujie, M. [Waseda University, Graduate School of Science and Engineering, Shinjuku-Ku, Tokyo (Japan); Dohi, T. [The University of Tokyo, Graduate School of Information Science and Technology, Bunkyo-Ku, Tokyo (Japan)

    2008-04-15

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. The newly developed MR image-guided surgical robotic system makes it feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)

  12. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    International Nuclear Information System (INIS)

    Hashizume, M.; Yasunaga, T.; Konishi, K.; Tanoue, K.; Ieiri, S.; Kishi, K.; Nakamoto, H.; Ikeda, D.; Sakuma, I.; Fujie, M.; Dohi, T.

    2008-01-01

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. The newly developed MR image-guided surgical robotic system makes it feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)

  13. Autonomous navigation system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
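
    The proportionality constants in the patent abstract are not specified, so the Python loop below is only a behavioural sketch of the guarded-motion idea: the event horizon grows with speed, and an intrusion scales both velocities down with the range to the nearest obstacle. The horizon time, speed factor and closing-obstacle ranges are invented values.

      def guarded_motion_step(trans_vel, rot_vel, ranges, max_speed=1.0,
                              horizon_time=1.5, speed_factor=0.8):
          """One iteration of the event-horizon loop sketched from the abstract."""
          event_horizon = abs(trans_vel) * horizon_time   # distance reachable soon
          nearest = min(ranges)
          if nearest < event_horizon:
              # Intrusion: scale both velocities down by how close the obstacle is.
              proximity = nearest / max(event_horizon, 1e-6)   # 1 = at the horizon
              rot_vel = rot_vel * proximity
              trans_vel = trans_vel * proximity
          else:
              # No intrusion: cruise at a fraction of the maximum speed.
              trans_vel = speed_factor * max_speed
          return trans_vel, rot_vel

      v, w = 0.8, 0.2
      for dist in [3.0, 2.0, 1.0, 0.5, 0.3]:     # an obstacle closing in
          v, w = guarded_motion_step(v, w, [dist])
          print(round(v, 2), round(w, 2))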

  14. Training Revising Based Traversability Analysis of Complex Terrains for Mobile Robot

    Directory of Open Access Journals (Sweden)

    Rui Song

    2014-05-01

    Full Text Available Traversability analysis is one of the core issues in autonomous navigation for mobile robots, identifying the accessible area from the information provided by the robot's sensors. This paper proposes a model to analyze the traversability of complex terrains based on rough sets and training revising. The model describes traversability for mobile robots by a traversability cost. Through experiments, the paper concludes that a traversability analysis model based on rough sets and training revising can be used where terrain features are rich and complex, can effectively handle unstructured environments, and can provide reliable and effective decision rules for the autonomous navigation of mobile robots.

  15. Conference on Space and Military Applications of Automation and Robotics

    Science.gov (United States)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  16. A simultaneous navigation and radiation evasion algorithm (SNARE)

    Energy Technology Data Exchange (ETDEWEB)

    Khasawneh, Mohammed A., E-mail: mkha@ieee.org [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Jaradat, Mohammad A., E-mail: majaradat@just.edu.jo [Department of Mechanical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Al-Shboul, Zeina Aman M., E-mail: xeinaaman@gmail.com [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan)

    2013-12-15

    To test ruggedness against rough radiation terrains, navigational performance was assessed for a U-shaped radiation field (a case typical of testing for robotics applications) and a multi-island radiation environment. Under these two test environments, the algorithm was shown to perform in accordance with the set optimization criteria. Simulations reveal that localization of the mobile device is achieved in compliance with design requirements, leading to navigational paths that compare favorably to Dijkstra navigation in terms of the (radiation × time) product and the time needed to reach an exit. Results of these simulations also show that while there were cases of failure encountered under navigation involving the “Radiation Evasion” criterion, the algorithm performed favorably when operated to optimize the “Nearest Exit” criterion, with no cases of failure reported in any of the simulations.
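
    The record compares against Dijkstra navigation on a (radiation × time) product; as a generic illustration (not the SNARE algorithm itself), the Python sketch below runs Dijkstra on a grid whose step cost combines travel time with a weighted dose picked up in the entered cell. The field values, the trade-off weight beta and the grid layout are invented.

      import heapq

      def radiation_aware_dijkstra(radiation, start, exits, beta=1.0, step_time=1.0):
          """Minimal-cost path on a grid where each step costs time plus weighted dose.

          radiation : 2D list of dose-rate values per cell (hypothetical field)
          beta      : trade-off between travel time and accumulated radiation
          Returns the minimal cost from start to any exit cell.
          """
          rows, cols = len(radiation), len(radiation[0])
          dist = {start: 0.0}
          heap = [(0.0, start)]
          while heap:
              cost, (r, c) = heapq.heappop(heap)
              if (r, c) in exits:
                  return cost
              if cost > dist.get((r, c), float('inf')):
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      step = step_time + beta * radiation[nr][nc] * step_time
                      if cost + step < dist.get((nr, nc), float('inf')):
                          dist[(nr, nc)] = cost + step
                          heapq.heappush(heap, (cost + step, (nr, nc)))
          return float('inf')

      field = [[0.1, 0.1, 5.0, 0.1],
               [0.1, 0.1, 5.0, 0.1],
               [0.1, 0.1, 0.1, 0.1]]
      # The detour through the low-dose bottom row beats the direct, hot route.
      print(radiation_aware_dijkstra(field, start=(0, 0), exits={(0, 3)}))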

  17. A simultaneous navigation and radiation evasion algorithm (SNARE)

    International Nuclear Information System (INIS)

    Khasawneh, Mohammed A.; Jaradat, Mohammad A.; Al-Shboul, Zeina Aman M.

    2013-01-01

    To test ruggedness against rough radiation terrains, navigational performance was assessed for a U-shaped radiation field (a case typical of testing for robotics applications) and a multi-island radiation environment. Under these two test environments, the algorithm was shown to perform in accordance with the set optimization criteria. Simulations reveal that localization of the mobile device is achieved in compliance with design requirements, leading to navigational paths that compare favorably to Dijkstra navigation in terms of the (radiation × time) product and the time needed to reach an exit. Results of these simulations also show that while there were cases of failure encountered under navigation involving the “Radiation Evasion” criterion, the algorithm performed favorably when operated to optimize the “Nearest Exit” criterion, with no cases of failure reported in any of the simulations.

  18. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    Directory of Open Access Journals (Sweden)

    Parth Gargava

    2017-08-01

    Full Text Available A Brain Computer Interface (BCI) is developed to navigate a micro-controller based robot using Emotiv sensors. The BCI system has a pipeline of five stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It is intended as a prototype aid for the physical movement of neurological patients who are unable to control their muscular movements. All stages of the pipeline are designed to process bodily actions such as eye blinks into commands that navigate the robot. This prototype works on feature-learning and classification-centric techniques using a support vector machine. The suggested pipeline ensures successful navigation of a robot in four directions in real time with an accuracy of 93 percent.
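
    The pipeline's classification stage uses a support vector machine; the Python sketch below shows a generic SVM classifier on fabricated 14-channel feature vectors standing in for pre-processed Emotiv data, purely to illustrate the classification step, not the authors' features or reported accuracy.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Hypothetical pre-processed feature vectors (e.g. band power per EEG channel)
      # for two classes: "blink" (1) and "no blink" (0).
      rng = np.random.default_rng(1)
      no_blink = rng.normal(0.0, 1.0, size=(200, 14))
      blink = rng.normal(1.5, 1.0, size=(200, 14))
      X = np.vstack([no_blink, blink])
      y = np.array([0] * 200 + [1] * 200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = SVC(kernel='rbf').fit(X_tr, y_tr)
      print(f"classification accuracy: {clf.score(X_te, y_te):.2f}")

      # A detected class would then be mapped to a navigation command, e.g.
      # {0: 'stop', 1: 'forward'} in the simplest two-class case.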

  19. A 6-DOF parallel bone-grinding robot for cervical disc replacement surgery.

    Science.gov (United States)

    Tian, Heqiang; Wang, Chenchen; Dang, Xiaoqing; Sun, Lining

    2017-12-01

    Artificial cervical disc replacement surgery has become an effective and mainstream treatment for cervical disease, which is an increasingly common and serious problem for people with sedentary work. To improve cervical disc replacement surgery significantly, a 6-DOF parallel bone-grinding robot is developed for cervical bone grinding with image navigation and surgical planning. The bone-grinding robot, including its mechanical design and low-level control, is described. Robot navigation is realized by optical positioning with a defined spatial registration coordinate system. A parametric bone-grinding plan and high-level control have been developed for plane grinding of the cervical top and tail endplates with a cylindrical grinding drill, and for spherical grinding of the two articular bone surfaces with a ball grinding drill. Finally, the surgical workflow for a robot-assisted cervical disc replacement procedure is presented. The experimental results verified the key technologies and performance of the robot-assisted surgery system concept, pointing to a promising clinical application with high operability. The innovations, limitations and future work of this study are discussed, and conclusions are summarized. The bone-grinding robot is still at an initial stage, and many problems remain to be solved from a clinical point of view; nevertheless, the technique is promising and can give good support to surgeons in future clinical work.

  20. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range readings to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensor multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
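
    Two of the stated contributions are pruning the region of interest of the stereo point cloud and fusing it with laser-range readings. A minimal numpy sketch of both steps is shown below; the box limits, scan geometry and the naive concatenation-style fusion are assumptions, not the authors' method.

```python
import numpy as np

def prune_roi(points, x_range, y_range, z_range):
    """Keep only 3-D points inside an axis-aligned region of interest."""
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1]))
    return points[m]

def laser_to_points(ranges, angle_min, angle_inc, z=0.0):
    """Convert a planar laser scan to 3-D points in the sensor frame."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    return np.column_stack([ranges * np.cos(angles),
                            ranges * np.sin(angles),
                            np.full(len(ranges), z)])

# Illustrative data: a stereo point cloud and a 180-degree laser scan,
# both assumed to be expressed in a common robot frame.
stereo_cloud = np.random.uniform(-5, 5, (10000, 3))
scan = laser_to_points(np.random.uniform(0.5, 4.0, 181),
                       angle_min=-np.pi / 2, angle_inc=np.pi / 180)

roi = prune_roi(stereo_cloud, (-3, 3), (-3, 3), (0.0, 2.0))
fused = np.vstack([roi, scan])        # naive fusion: concatenate in one frame
print(fused.shape)
```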

  1. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    Science.gov (United States)

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed by a smartphone, that can navigate around a typical home or care environment, avoiding obstacles, and position itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance and object-based sonar navigation.

  2. Role of Pectoral Fin Flexibility in Robotic Fish Performance

    Science.gov (United States)

    Bazaz Behbahani, Sanaz; Tan, Xiaobo

    2017-08-01

    Pectoral fins play a vital role in the maneuvering and locomotion of fish, and they have become an important actuation mechanism for robotic fish. In this paper, we explore the effect of flexibility of robotic fish pectoral fins on the robot locomotion performance and mechanical efficiency. A dynamic model for the robotic fish is presented, where the flexible fin is modeled as multiple rigid elements connected via torsional springs and dampers. Blade element theory is used to capture the hydrodynamic force on the fin. The model is validated with experimental results obtained on a robotic fish prototype, equipped with 3D-printed fins of different flexibility. The model is then used to analyze the impacts of fin flexibility and power/recovery stroke speed ratio on the robot swimming speed and mechanical efficiency. It is found that, in general, flexible fins demonstrate advantages over rigid fins in speed and efficiency at relatively low fin-beat frequencies, while rigid fins outperform flexible fins at higher frequencies. For a given fin flexibility, the optimal frequency for speed performance differs from the optimal frequency for mechanical efficiency. In addition, for any given fin, there is an optimal power/recovery stroke speed ratio, typically in the range of 2-3, that maximizes the speed performance. Overall, the presented model offers a promising tool for fin flexibility and gait design, to achieve speed and efficiency objectives for robotic fish actuated with pectoral fins.

  3. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot.

    Science.gov (United States)

    Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-02-24

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
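
    The system above uses fuzzy controllers to keep the robot centred on the crop row seen by the camera. The sketch below illustrates the general idea with a tiny Sugeno-style controller mapping a normalized row offset to a steering command; the membership breakpoints, signs and rule outputs are illustrative assumptions rather than the controllers described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b (zero outside [a, c])."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset):
    """Map a normalized crop-row offset in [-1, 1] to a steering command.

    Sugeno-style rules: the firing strength of each fuzzy set weights a
    singleton output, combined by weighted average.  Signs and breakpoints
    are illustrative only.
    """
    rules = [
        (tri(offset, -2.0, -1.0, -0.2), -1.0),  # row far to the left  -> steer left
        (tri(offset, -0.6,  0.0,  0.6),  0.0),  # row centred          -> go straight
        (tri(offset,  0.2,  1.0,  2.0),  1.0),  # row far to the right -> steer right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

for off in (-0.8, -0.3, 0.0, 0.5):
    print(off, round(fuzzy_steer(off), 2))
```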

  4. Design, Modeling and Control of a Biped Line-Walking Robot

    Directory of Open Access Journals (Sweden)

    Ludan Wang

    2010-12-01

    Full Text Available The subject of this paper is the design and analysis of a biped line-walking robot for inspection of power transmission lines. With a novel mechanism, the centroid of the robot can be concentrated on the axis of the hip joint to minimize the drive torque of the hip joint. The mechanical structure of the robot is discussed, as well as the forward kinematics. A dynamic model is established to analyze the inverse kinematics for motion planning. The line-walking cycle of the robot is composed of a single-support phase and a double-support phase. Locomotion of the line-walking robot is discussed in detail, and the obstacle-navigation process is planned according to the structure of the power transmission line. To fulfil the demands of line walking, a control system and a trajectory generation method are designed for the prototype of the line-walking robot. The feasibility of this concept is then confirmed by performing experiments in a simulated line environment.

  5. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture-recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  6. ARK-2: a mobile robot that navigates autonomously in an industrial environment

    International Nuclear Information System (INIS)

    Bains, N.; Nickerson, S.; Wilkes, D.

    1995-01-01

    ARK-2 is a robot that uses a vision system based on a camera and spot laser rangefinder mounted on a pan-and-tilt unit for navigation. This vision system recognizes known landmarks and computes its position relative to them, thus bounding the error in its position. The vision system is also used to find known gauges, given their approximate locations, and to take readings from them. 'Approximate' in this context means the same sort of accuracy that a human would need: 'down aisle 3 on the right' suffices. ARK-2 is also equipped with the FAD (Floor Anomaly Detector), which is based on the NRC (National Research Council of Canada) BIRIS (Bi-IRIS) sensor and keeps ARK-2 from falling into open drains or trying to negotiate large cables or pipes on the floor. ARK-2 has also been equipped with a variety of application sensors for security and safety patrol applications. Radiation sensors are used to produce contour maps of radiation levels. In order to detect fires, environmental changes and intruders, ARK-2 is equipped with smoke, temperature, humidity and gas sensors, scanning ultraviolet and infrared detectors and a microwave motion detector. In order to support autonomous, untethered operation for hours at a time, ARK-2 also has onboard systems for power, sonar-based obstacle detection, computation and communications. The project uses a UNIX environment for software development, with the onboard SPARC processor appearing as just another workstation on the LAN. Software modules include the hardware drivers, path planning, navigation, emergency stop, obstacle mapping and status monitoring. ARK-2 may also be controlled from a ROBCAD simulation. (author)

  7. Cooperative Robot Teams Applied to the Site Preparation Task

    International Nuclear Information System (INIS)

    Parker, LE

    2001-01-01

    Prior to human missions to Mars, infrastructures on Mars that support human survival must be prepared. Robotic teams can assist in these advance preparations in a number of ways. This paper addresses one of these advance robotic team tasks, the site preparation task, by proposing a control structure that allows robot teams to cooperatively solve this aspect of infrastructure preparation. A key question in this context is determining how robots should decide which aspect of the site preparation task to address throughout the mission, especially while operating in rough terrain. This paper describes a control approach to this problem that is based upon the ALLIANCE architecture, combined with performance-based rough-terrain navigation that addresses path planning and control of mobile robots in rough-terrain environments. We present the site preparation task and the proposed cooperative control approach, followed by some of the results of the initial testing of various aspects of the system.

  8. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    Science.gov (United States)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.

  9. Implementation of a map route analysis robot: combining an Android smart device and differential-drive robotic platform

    Directory of Open Access Journals (Sweden)

    Tseng Chi-Hung

    2017-01-01

    Full Text Available This paper proposes an easy-to-implement and relatively low-cost robotic platform with the capability to realize image identification, object tracking, and Google Maps route planning and navigation. Based on the JAVA and Bluetooth communication architectures, the system demonstrates the integration of Android smart devices and a differential-drive robotic platform.

  10. Mobile Robot Navigation in a Corridor Using Visual Odometry

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad

    2009-01-01

    Incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor...
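
    As an illustration of how localization information can be generated from raw images, the sketch below estimates the relative camera pose between two frames using ORB features and the essential matrix in OpenCV. This is a generic monocular visual-odometry step, not the specific method of the paper; the intrinsics and file names are placeholders.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate camera rotation and (unit-scale) translation between two frames.

    img1, img2 : grayscale images (numpy arrays)
    K          : 3x3 camera intrinsic matrix
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # translation is only known up to scale from monocular images

# Usage (file names and intrinsics are placeholders):
# K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], float)
# R, t = relative_pose(cv2.imread("frame0.png", 0), cv2.imread("frame1.png", 0), K)
```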

  11. Automation and robotics human performance

    Science.gov (United States)

    Mah, Robert W.

    1990-01-01

    The scope of this report is limited to the following: (1) assessing the feasibility of the assumptions for crew productivity during the intra-vehicular activities and extra-vehicular activities; (2) estimating the appropriate level of automation and robotics to accomplish balanced man-machine, cost-effective operations in space; (3) identifying areas where conceptually different approaches to the use of people and machines can leverage the benefits of the scenarios; and (4) recommending modifications to scenarios or developing new scenarios that will improve the expected benefits. The FY89 special assessments are grouped into the five categories shown in the report. The high level system analyses for Automation & Robotics (A&R) and Human Performance (HP) were performed under the Case Studies Technology Assessment category, whereas the detailed analyses for the critical systems and high leverage development areas were performed under the appropriate operations categories (In-Space Vehicle Operations or Planetary Surface Operations). The analysis activities planned for the Science Operations technology areas were deferred to FY90 studies. The remaining activities such as analytic tool development, graphics/video demonstrations and intelligent communicating systems software architecture were performed under the Simulation & Validations category.

  12. Design and evaluation of a continuum robot with extendable balloons

    Directory of Open Access Journals (Sweden)

    E. Y. Yarbasi

    2018-02-01

    Full Text Available This article presents the design and preliminary evaluation of a novel continuum robot actuated by two extendable balloons. Extendable balloons are utilized as the actuation mechanism of the robot, and they are attached to the tip by their slack sections. These balloons can extend greatly in length without a significant change in diameter. By employing two balloons in an axially extendable, radially rigid flexible shaft, radial strain is constrained, allowing high elongation. As they inflate, the balloons apply a force on the wall of the tip, pushing it forward; this force enables the robot to move forward. The air is supplied to the balloons by an air compressor, and its flow rate to each balloon can be independently controlled. Inflating the balloons by different volumes, while they are radially constrained, orients the robot and thereby allows navigation. Elongation and force generation capabilities and pressure data are measured for different balloons during inflation and deflation. Afterwards, the robot is subjected to navigation tests in an open field and in a maze-like environment. The contribution of this study is the introduction of a novel actuation mechanism that allows soft robots extreme elongation (2000 %) so that they can be navigated in substantially long and narrow environments.

  13. Plenoptic Imager for Automated Surface Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  14. Robotic Technology Efforts at the NASA/Johnson Space Center

    Science.gov (United States)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating the difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of “dull, dirty or dangerous” tasks, allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center's newest development areas, can provide the crew with low-mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles and nimbly maneuver around others. Humanoid robots, including Robonaut 2, the first humanoid robot in space, demonstrate capabilities that will lead to robotic caretakers for human habitats in space and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force-measuring shoes, and grasp-assist gloves.

  15. Bilateral human-robot control for semi-autonomous UAV navigation

    NARCIS (Netherlands)

    Wopereis, Han Willem; Fumagalli, Matteo; Stramigioli, Stefano; Carloni, Raffaella

    2015-01-01

    This paper proposes a semi-autonomous bilateral control architecture for unmanned aerial vehicles. During autonomous navigation, a human operator is allowed to assist the autonomous controller of the vehicle by actively changing its navigation parameters to assist it in critical situations, such as

  16. Performance analysis of jump-gliding locomotion for miniature robotics.

    Science.gov (United States)

    Vidyasagar, A; Zufferey, Jean-Christophe; Floreano, Dario; Kovač, M

    2015-03-26

    Recent work suggests that jumping locomotion in combination with a gliding phase can be used as an effective mobility principle in robotics. Compared to pure jumping without a gliding phase, the potential benefits of hybrid jump-gliding locomotion include the ability to extend the distance travelled and to reduce the potentially damaging impact forces upon landing. This publication evaluates the performance of jump-gliding locomotion and provides models for the analysis of the relevant dynamics of flight. It also defines a jump-gliding envelope that encompasses the range that can be achieved with jump-gliding robots and that can be used to evaluate the performance and improvement potential of jump-gliding robots. We present first a planar dynamic model and then a simplified closed-form model, which allow for quantification of the distance travelled and the impact energy on landing. To validate the predictions of these models, we perform experiments using a novel jump-gliding robot named the 'EPFL jump-glider'. It has a mass of 16.5 g and is able to perform jumps from elevated positions, perform steered gliding flight, land safely and traverse on the ground by repetitive jumping. The experiments indicate that the developed jump-gliding model fits very well with the flight data measured using the EPFL jump-glider, confirming the benefits of jump-gliding locomotion for mobile robotics. The jump-glide envelope considerations indicate that the EPFL jump-glider, when traversing from a 2 m height, reaches 74.3% of the optimal jump-gliding distance, compared to pure jumping without a gliding phase, which reaches only 33.4% of the optimal jump-gliding distance. Methods of further improving flight performance based on the models and inspiration from biological systems are presented, providing mechanical design pathways to future jump-gliding robot designs.

  17. Towards high-speed autonomous navigation of unknown environments

    Science.gov (United States)

    Richter, Charles; Roy, Nicholas

    2015-05-01

    In this paper, we summarize recent research enabling high-speed navigation in unknown environments for dynamic robots that perceive the world through onboard sensors. Many existing solutions to this problem guarantee safety by making the conservative assumption that any unknown portion of the map may contain an obstacle, and therefore constrain planned motions to lie entirely within known free space. In this work, we observe that safety constraints may significantly limit performance and that faster navigation is possible if the planner reasons about collision with unobserved obstacles probabilistically. Our overall approach is to use machine learning to approximate the expected costs of collision using the current state of the map and the planned trajectory. Our contribution is to demonstrate fast but safe planning using a learned function to predict future collision probabilities.
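
    The key idea above is to let the planner trade speed against a learned probability of collision instead of forbidding all unknown space. The sketch below shows how such a predicted probability could enter a trajectory cost; the logistic "model", its features and its weights are made-up stand-ins for the learned function described in the paper.

```python
import math

def collision_probability(clearance_m, speed_mps, w=(2.0, -3.0, 0.8)):
    """Stand-in learned predictor: logistic model on simple trajectory features.

    In the paper the predictor is trained from data; here the weights are
    invented purely to illustrate how the prediction enters the planner.
    """
    z = w[0] + w[1] * clearance_m + w[2] * speed_mps
    return 1.0 / (1.0 + math.exp(-z))

def expected_cost(traverse_time_s, clearance_m, speed_mps, crash_penalty_s=30.0):
    """Expected cost = nominal traverse time + collision probability * penalty."""
    p = collision_probability(clearance_m, speed_mps)
    return traverse_time_s + p * crash_penalty_s

# A fast, low-clearance trajectory versus a slower, safer one:
print(expected_cost(traverse_time_s=4.0, clearance_m=0.3, speed_mps=3.0))
print(expected_cost(traverse_time_s=6.0, clearance_m=1.5, speed_mps=1.5))
```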

  18. Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments

    Science.gov (United States)

    2017-12-01

    To pull information from the network, a node subscribes to a specific topic and receives the messages that are published to that topic. The total artificial potential field is characterized “as the sum of an attractive potential pulling the robot toward the goal … and a repulsive potential …”. Parameter fragments from the accompanying code: laser_max = 20 (robot laser view horizon), goaldist = 0.5 (distance metric for reaching the goal), goali = 1.
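
    The excerpt above quotes the artificial potential field formulation (an attractive pull toward the goal plus a repulsive term). A minimal gradient-following sketch of that idea is given below; the gains, influence distance, step size and toy scenario are illustrative assumptions, not the parameters from the thesis.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=2.0, step=0.02):
    """One gradient step on an attractive-plus-repulsive potential field.

    The attractive force pulls the robot toward the goal; the repulsive
    force is active only within distance rho0 of each obstacle.  Gains,
    rho0 and the step size are illustrative.
    """
    force = k_att * (goal - pos)                          # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < rho0:                               # repulsive term
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obs) / d
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.0])]
for _ in range(600):
    pos = apf_step(pos, goal, obstacles)
print(pos)   # ends close to the goal after skirting the obstacle
```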

  19. Robot Comedy Lab: experimenting with the social dynamics of live performance

    OpenAIRE

    Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias

    2015-01-01

    The success of live comedy depends on a performer's ability to 'work' an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examine...

  20. Spatial models for context-aware indoor navigation systems: A survey

    Directory of Open Access Journals (Sweden)

    Imad Afyouni

    2012-06-01

    Full Text Available This paper surveys indoor spatial models developed for research fields ranging from mobile robot mapping, to indoor location-based services (LBS), and most recently to context-aware navigation services applied to indoor environments. Over the past few years, several studies have evaluated the potential of spatial models for robot navigation and ubiquitous computing. In this paper we take a slightly different perspective, considering not only the underlying properties of those spatial models, but also the degree to which the notion of context can be taken into account when delivering services in indoor environments. Some preliminary recommendations for the development of indoor spatial models are introduced from a context-aware perspective. A taxonomy of models is then presented and assessed with the aim of providing a flexible spatial data model for navigation purposes that takes the context dimensions into account.

  1. Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation

    National Research Council Canada - National Science Library

    Theocharous, Georgios; Mahadevan, Sridhar; Kaelbling, Leslie P

    2005-01-01

    Partially observable Markov decision processes (POMDPs) are a well studied paradigm for programming autonomous robots, where the robot sequentially chooses actions to achieve long term goals efficiently...

  2. Performance of Very Small Robotic Fish Equipped with CMOS Camera

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    2015-10-01

    Full Text Available Underwater robots are often used to investigate marine animals. Ideally, such robots should be in the shape of fish so that they can easily go unnoticed by aquatic animals. In addition, lacking a screw propeller, a robotic fish would be less likely to become entangled in algae and other plants. However, although such robots have been developed, their swimming speed is significantly lower than that of real fish. Since a robotic fish would be required to follow real fish in order to survey them, it is necessary to improve the performance of the propulsion system. In the present study, a small robotic fish (SAPPA) was manufactured and its propulsive performance was evaluated. SAPPA was developed to swim in bodies of freshwater such as rivers, and was equipped with a small CMOS camera with a wide-angle lens in order to photograph live fish. The maximum swimming speed of the robot was determined to be 111 mm/s, and its turning radius was 125 mm. Its power consumption was as low as 1.82 W. During trials, SAPPA succeeded in recognizing a goldfish and capturing an image of it using its CMOS camera.

  3. Robot Comedy Lab: Experimenting with the Social Dynamics of Live Performance

    Directory of Open Access Journals (Sweden)

    Kleomenis eKatevas

    2015-08-01

    Full Text Available The success of live comedy depends on a performer's ability to 'work' an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them and negatively when it looks away, and different performative gestures elicit systematically different patterns of audience response. This demonstrates that the responses of individual audience members depend on the specific interaction they are having with the performer. This work provides insights into how to design more effective and more socially engaging forms of robot interaction that can be used in a variety of service contexts.

  4. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which is highly independent, highly precise and not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. This paper reviews the development of vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: attitude, position and velocity information can be obtained from the relationship between sensor images and the carrier's attitude, between instantaneous matching images and reference images, and between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching and image frames, are mainly introduced. (3) Target tracking and positioning: using the acquired images, the UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems use a parallel structure, in which image detection and processing are
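
    The survey lists the optical flow method among the techniques for recovering UAV motion from sequential images. The sketch below demonstrates the basic step with pyramidal Lucas-Kanade tracking on two synthetic frames; the altitude, focal length and planar-ground scaling at the end are illustrative assumptions.

```python
import cv2
import numpy as np

# Two synthetic frames: the second is the first shifted right by 3 pixels,
# standing in for consecutive images from a downward-looking UAV camera.
rng = np.random.default_rng(1)
frame0 = (rng.random((240, 320)) * 255).astype(np.uint8)
frame0 = cv2.GaussianBlur(frame0, (5, 5), 0)
frame1 = np.roll(frame0, 3, axis=1)

# Track sparse features with pyramidal Lucas-Kanade optical flow.
p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None)

good0 = p0[status.flatten() == 1].reshape(-1, 2)
good1 = p1[status.flatten() == 1].reshape(-1, 2)
flow = np.median(good1 - good0, axis=0)        # robust average image motion
print("median pixel flow (dx, dy):", flow)

# With an assumed altitude h and focal length f (pixels), image motion maps to
# ground displacement per frame: dX ~= dx * h / f  (planar-ground assumption).
h, f = 20.0, 600.0
print("approx. ground displacement (m):", flow * h / f)
```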

  5. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

    Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with its relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed

  6. Usability testing of a mobile robotic system for in-home telerehabilitation.

    Science.gov (United States)

    Boissy, Patrick; Brière, Simon; Corriveau, Hélène; Grant, Andrew; Lauria, Michel; Michaud, François

    2011-01-01

    Mobile robots designed to enhance telepresence in support of telehealth services are being considered for numerous applications. TELEROBOT is a teleoperated mobile robotic platform equipped with videoconferencing capabilities and designed to be used in a home environment. In this study, the learnability of the system's teleoperation interface and controls was evaluated with ten rehabilitation professionals during four training sessions in a laboratory environment and in an unknown home environment while executing a standardized evaluation protocol typically used in home care. Results show that the novice teleoperators' performance on two of the four metrics used (number of commands and total time) improved significantly across training sessions (ANOVAs, p < 0.05) and correlated with performance in the home environment during navigation tasks (r = 0.77 and 0.60). With only four hours of training, rehabilitation professionals were able to learn to teleoperate TELEROBOT successfully. However, teleoperation performance remained significantly less efficient than that of an expert. Under the home task condition (navigating the home environment from one point to another as fast as possible), this translated to completion times between 350 seconds (best performance) and 850 seconds (worst performance). Improvements in other usability aspects of the system will be needed to meet the requirements of in-home telerehabilitation.

  7. State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Xue-Bo Jin

    2018-03-01

    Full Text Available Mobility is a significant robotic task. It is the most important function when robotics is applied to domains such as autonomous cars, home service robots, and autonomous underwater vehicles. Despite extensive research on this topic, robots still suffer from difficulties when moving in complex environments, especially in practical applications. Therefore, the ability to have enough intelligence while moving is a key issue for the success of robots. Researchers have proposed a variety of methods and algorithms, including navigation and tracking. To help readers swiftly understand the recent advances in methodology and algorithms for robot movement, we present this survey, which provides a detailed review of the existing methods of navigation and tracking. In particular, this survey features a relation-based architecture that enables readers to easily grasp the key points of mobile intelligence. We first outline the key problems in robot systems and point out the relationship among robotics, navigation, and tracking. We then illustrate navigation using different sensors and the fusion methods, and detail the state estimation and tracking models for target maneuvering. Finally, we address several issues of deep learning as well as the mobile intelligence of robots as suggested future research topics. The contributions of this survey are threefold. First, we review the literature on navigation according to the applied sensors and fusion method. Second, we detail the models for target maneuvering and the existing tracking based on estimation, such as the Kalman filter and the series of filters developed from it, according to their model-construction mechanisms: linear, nonlinear, and non-Gaussian white noise. Third, we illustrate the artificial intelligence approach, especially deep learning methods, and discuss its combination with the estimation method.
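
    Since the survey reviews tracking based on the Kalman filter and its descendants, a minimal 1-D constant-velocity Kalman filter is sketched below to make the predict/update cycle concrete; the noise parameters and the synthetic target track are assumptions for illustration.

```python
import numpy as np

def kalman_cv_track(measurements, dt=1.0, q=0.01, r=0.5):
    """1-D constant-velocity Kalman filter (state = [position, velocity]).

    measurements : noisy position observations
    q, r         : process and measurement noise variances (illustrative)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we only observe position
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return estimates

true_pos = np.arange(0, 20, 1.0)                       # target moving at 1 m/s
noisy = true_pos + np.random.default_rng(2).normal(0, 0.7, true_pos.size)
print(np.round(kalman_cv_track(noisy), 2))
```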

  8. The development of robot system for pressurizer maintenance in NPPs

    International Nuclear Information System (INIS)

    Kim, Seung Ho; Kim, Chang Hoi; Jung, Seung Ho; Seo, Yong Chil; Lee, Young Kwang; Go, Byung Yung; Lee, Kwang Won; Lee, Sang Ill; Yun, Jong Yeon; Lee, Hyung Soon; Park, Mig Non; Park, Chang Woo; Cheol, Kwon

    1999-12-01

    The pressurizer, which controls the pressure variation of the primary coolant system and consists of a vessel, electric heaters and a spray, is one of the safety-related pieces of equipment in nuclear power plants. It is therefore required to be inspected and maintained regularly. Because the inside of the pressurizer is contaminated by radioactivity, radiation exposure of workers is inevitable during inspection and repair. In this research, two robot systems have been developed for inspection and maintenance of the pressurizer: one for the water-filled case and one for the water-sunken case. The robot system for the water-filled case consists of two links, a movable gripper using a wire string, and a support frame for attaching the robot. The other robot is equipped with a propeller in order to navigate on the water, and with a high-performance water-resistant camera to make inspection possible. The developed robots are designed under several constraints, such as weight and collision with the pressurizer wall. To verify the collision-free robot link length and the accessibility to any desired heater rod, the design was simulated with 3-dimensional graphic simulation software (RobCAD). To evaluate the stress of the support frame, finite element analysis was performed using the ANSYS code. (author)

  9. Technological advances in robotic-assisted laparoscopic surgery.

    Science.gov (United States)

    Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K

    2009-05-01

    In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.

  10. Control of free-flying space robot manipulator systems

    Science.gov (United States)

    Cannon, Robert H., Jr.

    1989-01-01

    Control techniques for self-contained, autonomous free-flying space robots are being tested and developed. Free-flying space robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require astronaut extra-vehicular activity (EVA). Use of robots will provide economic savings as well as improved astronaut safety by reducing and, in many cases, eliminating the need for human EVA. The focus of the work is to develop and carry out a set of research projects using laboratory models of satellite robots. These devices use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. Current work is divided into six major projects or research areas. Fixed-base cooperative manipulation work represents our initial entry into multiple-arm cooperation and high-level control with a sophisticated user interface. The floating-base cooperative manipulation project strives to transfer some of the technologies developed in the fixed-base work onto a floating base. The global control and navigation experiment seeks to demonstrate simultaneous control of the robot manipulators and the robot base position so that tasks can be accomplished while the base is undergoing a controlled motion. The multiple-vehicle cooperation project's goal is to demonstrate multiple free-floating robots working in teams to carry out tasks too difficult or complex for a single robot to perform. The Location Enhancement Arm Push-off (LEAP) activity's goal is to provide a viable alternative to expendable gas thrusters for vehicle propulsion, wherein the robot uses its manipulators to throw itself from place to place. Because the successful execution of the LEAP technique requires an accurate model of the robot and payload mass properties, it was deemed an attractive testbed for adaptive control technology.

  11. Passive mapping and intermittent exploration for mobile robots

    Science.gov (United States)

    Engleson, Sean P.

    1994-01-01

    An adaptive state space architecture is combined with diktiometric representation to provide the framework for designing a robot mapping system with flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs with indexing waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.

  12. Prototype of a Wheeled Fire-Extinguishing Robot Using the Wall Follower Navigation Technique

    OpenAIRE

    Safrianti, Ery; Amri, Rahyul; Budiman, Septian

    2012-01-01

    The fire robot serves to detect and extinguish fire. The robot is controlled automatically by an ATMEGA8535 microcontroller. It carries several sensors: five sets of Parallax PING ultrasonic sensors as the robot's navigator, a UVTron flame sensor equipped with a fire-detecting driver, and an L298 DC motor driver with two DC servo motors. The robot was developed from a previously studied prototype with the addition, on the hardware side, of sound activation and two sets of line detectors. The robot wi...

  13. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  14. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles
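
    The turning behaviour described above relies on hysteresis in a small recurrent network. The sketch below reduces this to a single neuron with a strong self-connection, which is enough to show the history-dependent switching; the weights, bias and input sweep are illustrative and are not the parameters used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recurrent_neuron_response(inputs, w_self=8.0, bias=-4.0, settle=50):
    """Steady-state output of a single recurrent neuron for a sequence of inputs.

    With a strong self-connection (w_self > 4) the unit is bistable over a
    range of inputs, so its output depends on history: a hysteresis effect
    similar to the one exploited for turning control.  Weights are illustrative.
    """
    o, outputs = 0.0, []
    for u in inputs:
        for _ in range(settle):                 # let the dynamics settle
            o = sigmoid(w_self * o + bias + u)
        outputs.append(o)
    return outputs

sweep_up = np.linspace(-2.0, 2.0, 17)
up = recurrent_neuron_response(sweep_up)
down = recurrent_neuron_response(sweep_up[::-1])
# Around input 0 the two sweeps give different outputs: the unit "remembers"
# which direction the input came from.
print("input 0 on the way up:  ", round(up[8], 3))
print("input 0 on the way down:", round(down[8], 3))
```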

  15. Towards an Open Software Platform for Field Robots in Precision Agriculture

    DEFF Research Database (Denmark)

    Jensen, Kjeld; Larsen, Morten; Nielsen, Søren H

    2014-01-01

    Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored...

  16. Image-based particle filtering for navigation in a semi-structured agricultural environment

    NARCIS (Netherlands)

    Hiremath, S.; van Evert, F.K.; ter Braak, C.J.F.; Stein, A.; van der Heijden, G.

    2014-01-01

    Autonomous navigation of field robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. The drawback of existing systems is the lack of robustness to these uncertainties. In this study we propose a vision-based navigation method to address these
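
    The proposed method is a particle filter whose measurement model is image-based. The sketch below shows one bootstrap predict/update/resample cycle in one dimension, with a Gaussian likelihood standing in for the image-based measurement model; the noise levels and the scenario are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.3):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles   : (N,) robot positions along a crop row (1-D for brevity)
    control     : commanded forward displacement
    measurement : observed position (stand-in for an image-based likelihood)
    """
    # predict: apply the motion model with noise
    particles = particles + control + rng.normal(0.0, motion_noise, particles.size)
    # update: weight particles by the measurement likelihood
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights = weights + 1e-300
    weights = weights / weights.sum()
    # resample (multinomial; systematic resampling would be the usual refinement)
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles = rng.uniform(0.0, 5.0, 500)
weights = np.full(500, 1.0 / 500)
true_pos = 0.0
for _ in range(20):
    true_pos += 0.2
    z = true_pos + rng.normal(0.0, 0.3)
    particles, weights = particle_filter_step(particles, weights, 0.2, z)
print("estimate:", particles.mean(), "true:", true_pos)
```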

  17. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy.

    Science.gov (United States)

    Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe

    2018-04-01

    Minimally invasive sublobar anatomical resection is becoming more and more popular for managing early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the da Vinci™ system. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the TilePro multi-display input of the da Vinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed, and the surgeon now oversees a complex system that improves decision making.

  18. Application of autonomous robotics to surveillance of waste storage containers for radioactive surface contamination

    International Nuclear Information System (INIS)

    Sweeney, F.J.; Beckerman, M.; Butler, P.L.; Jones, J.P.; Reister, D.B.

    1991-01-01

    This paper describes a proof-of-principle demonstration performed with the HERMIES-III mobile robot to automate the inspection of waste storage drums for radioactive surface contamination and thereby reduce the human burden of operating a robot and worker exposure to potentially hazardous environments. Software and hardware for the demonstration were developed by a team consisting of Oak Ridge National Laboratory and the Universities of Florida, Michigan, Tennessee, and Texas. Robot navigation, machine vision, manipulator control, parallel processing and human-machine interface techniques developed by the team were demonstrated utilizing advanced computer architectures. The demonstration consists of over 100,000 lines of computer code executing on nine computers.

  19. GPS/MEMS IMU/Microprocessor Board for Navigation

    Science.gov (United States)

    Gender, Thomas K.; Chow, James; Ott, William E.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.

  20. Intelligent Robot-assisted Humanitarian Search and Rescue System

    Directory of Open Access Journals (Sweden)

    Henry Y. K. Lau

    2009-11-01

    Full Text Available The unprecedented scale and number of natural and man-made disasters in the past decade have urged international emergency search and rescue communities to seek novel technology to enhance operation efficiency. Tele-operated search and rescue robots that can navigate deep into rubble to search for victims and transfer critical field data back to the control console have gained much interest among emergency response institutions. In response to this need, a low-cost autonomous mini robot equipped with a thermal sensor, accelerometer, sonar, pin-hole camera, microphone, ultra-bright LED and wireless communication module is developed to study the control of a group of decentralized mini search and rescue robots. The robot can navigate autonomously between voids to look for living body heat and can send back audio and video information to allow the operator to determine whether the found object is a living human. This paper introduces the design and control of a low-cost robotic search and rescue system based on an immuno control framework developed for controlling decentralized systems. The design and development of the physical prototype and the immunity-based control system are described in this paper.

  1. Intelligent Robot-Assisted Humanitarian Search and Rescue System

    Directory of Open Access Journals (Sweden)

    Albert W. Y. Ko

    2009-06-01

    Full Text Available The unprecedented scale and number of natural and man-made disasters in the past decade have urged international emergency search and rescue communities to seek novel technology to enhance operation efficiency. Tele-operated search and rescue robots that can navigate deep into rubble to search for victims and transfer critical field data back to the control console have gained much interest among emergency response institutions. In response to this need, a low-cost autonomous mini robot equipped with a thermal sensor, accelerometer, sonar, pin-hole camera, microphone, ultra-bright LED and wireless communication module is developed to study the control of a group of decentralized mini search and rescue robots. The robot can navigate autonomously between voids to look for living body heat and can send back audio and video information to allow the operator to determine whether the found object is a living human. This paper introduces the design and control of a low-cost robotic search and rescue system based on an immuno control framework developed for controlling decentralized systems. The design and development of the physical prototype and the immunity-based control system are described in this paper.

  2. Robot performing heavy gymnastics. Kikai taiso wo suru robot

    Energy Technology Data Exchange (ETDEWEB)

    Takashima, S. (Hosei Univ., Tokyo (Japan). Faculty of Engineering)

    1991-11-01

    Methods for simulating the motion of human bodies and for controlling the motion of robots are studied in order to realize robots that perform gymnastics on a horizontal bar. A model of the human body structure is presented by dividing the body into eight parts: the right and left arms, the head, the trunk, the right and left thighs, and the right and left feet; a system is then constructed by combining links of the rigid parts, with a simplifying assumption applied to each link. A method to enhance the swing motion is devised in order to produce a suspension motion as a basic movement of horizontal bar gymnastics. The basic conditions for controlling horizontal bar gymnastics and the control system for the articulation angles are considered. Two algorithms are presented to enhance the swing motion and to maintain the suspension swing: excitation of the swing by vertical motion of the center of gravity, and excitation by use of the natural frequency. Computer simulation of the suspension swing is executed and the results are shown in a figure. A prototype robot to perform horizontal bar gymnastics was manufactured; it performs the suspension swing, starting of the swing, kip motion and giant swing. The concept of optimization is not concretely included in the prototype. 22 refs., 8 figs.

  3. An intelligent inspection and survey robot

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1995-01-01

    Large quantities of mixed and low-level radioactive waste contained in 55-, 85-, and 110-gallon steel drums are stored at Department of Energy (DOE) warehouses located throughout the United States. The steel drums are placed on pallets and stacked on top of one another, forming columns of drums ranging in height from one to four drums and up to 16 feet high. The columns of drums are aligned in rows forming an aisle approximately three feet wide between the rows of drums. Tens of thousands of drums are stored in these warehouses throughout the DOE complex. ARIES (Autonomous Robotic Inspection Experimental System) is under development for the DOE to survey and inspect these drums. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum

  4. An intelligent inspection and survey robot

    Energy Technology Data Exchange (ETDEWEB)

    Byrd, J.S. [Univ. of South Carolina, Columbia, SC (United States)

    1995-10-01

    Large quantities of mixed and low-level radioactive waste contained in 55-, 85-, and 110-gallon steel drums are stored at Department of Energy (DOE) warehouses located throughout the United States. The steel drums are placed on pallets and stacked on top of one another, forming columns of drums ranging in height from one to four drums and up to 16 feet high. The columns of drums are aligned in rows forming an aisle approximately three feet wide between the rows of drums. Tens of thousands of drums are stored in these warehouses throughout the DOE complex. ARIES (Autonomous Robotic Inspection Experimental System) is under development for the DOE to survey and inspect these drums. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum.

  5. An intelligent inspection and survey robot

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1995-01-01

    Large quantities of mixed and low-level radioactive waste contained in 55-, 85-, and 110-gallon steel drums are stored at Department of Energy (DOE) warehouses located throughout the United States. The steel drums are placed on pallets and stacked on top of one another, forming columns of drums ranging in height from one to four drums and up to 16 feet high. The columns of drums are aligned in rows forming an aisle approximately three feet wide between the rows of drums. Tens of thousands of drums are stored in these warehouses throughout the DOE complex. ARIES (Autonomous Robotic Inspection Experimental System) is under development for the DOE to survey and inspect these drums. The robot will navigate through the aisles and perform an inspection operation, typically performed by a human operator, making decisions about the condition of the drums and maintaining a database of pertinent information about each drum

  6. Mobile-robot navigation with complete coverage of unstructured environments

    OpenAIRE

    García Armada, Elena; González de Santos, Pablo

    2004-01-01

    There are some mobile-robot applications that require the complete coverage of an unstructured environment. Examples are humanitarian de-mining and floor-cleaning tasks. A complete-coverage algorithm is then used, a path-planning technique that allows the robot to pass over all points in the environment while avoiding unknown obstacles. Different coverage algorithms exist, but they fail when working in unstructured environments. This paper details a complete-coverage algorithm for unstructured environm...
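    The record above does not reproduce the algorithm itself; as a point of reference, a minimal boustrophedon ("lawnmower") sweep over a known occupancy grid illustrates the basic idea of complete coverage. The grid contents and function name are illustrative, and on-line handling of unknown obstacles, which is the paper's focus, is not shown.

    def boustrophedon_coverage(grid):
        """grid[r][c] == 0 means free cell, 1 means obstacle.
        Returns the order in which free cells are visited, sweeping
        left-to-right on even rows and right-to-left on odd rows."""
        path = []
        for r, row in enumerate(grid):
            cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
            for c in cols:
                if row[c] == 0:          # skip occupied cells
                    path.append((r, c))
        return path

    if __name__ == "__main__":
        grid = [[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]]
        print(boustrophedon_coverage(grid))   # visits the 8 free cells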

  7. Illumination Tolerance for Visual Navigation with the Holistic Min-Warping Method

    Directory of Open Access Journals (Sweden)

    Ralf Möller

    2014-02-01

    Full Text Available Holistic visual navigation methods are an emerging alternative to the ubiquitous feature-based methods. Holistic methods match entire images pixel-wise instead of extracting and comparing local feature descriptors. In this paper we investigate which pixel-wise distance measures are most suitable for the holistic min-warping method with respect to illumination invariance. Two novel approaches are presented: tunable distance measures—weighted combinations of illumination-invariant and illumination-sensitive terms—and two novel forms of “sequential” correlation which are only invariant against intensity shifts but not against multiplicative changes. Navigation experiments on indoor image databases collected at the same locations but under different conditions of illumination demonstrate that tunable distance measures perform optimally by mixing their two portions instead of using the illumination-invariant term alone. Sequential correlation performs best among all tested methods; an approximated form performs equally well but is much faster. Mixing with an additional illumination-sensitive term is not necessary for sequential correlation. We show that min-warping with approximated sequential correlation can successfully be applied to visual navigation of cleaning robots.
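    A tunable distance measure of the kind described above can be sketched as a weighted mix of an illumination-invariant term and an illumination-sensitive term. The particular terms below (one minus the normalized cross-correlation, and mean absolute difference on 8-bit intensities) and the weight alpha are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def tunable_distance(a, b, alpha=0.7):
        """Pixel-wise distance between two image patches/columns:
        alpha weights an illumination-invariant term against an
        illumination-sensitive term."""
        a = np.asarray(a, float).ravel()
        b = np.asarray(b, float).ravel()
        # illumination-invariant part: 1 - normalized cross-correlation,
        # insensitive to intensity shifts and multiplicative changes
        za = (a - a.mean()) / (a.std() + 1e-9)
        zb = (b - b.mean()) / (b.std() + 1e-9)
        d_invariant = 1.0 - float(np.mean(za * zb))
        # illumination-sensitive part: raw mean absolute difference (8-bit assumed)
        d_sensitive = float(np.mean(np.abs(a - b))) / 255.0
        return alpha * d_invariant + (1.0 - alpha) * d_sensitive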

  8. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.

    Science.gov (United States)

    Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques

    2015-04-01

    Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.

  9. Navigation Method for Autonomous Robots in a Dynamic Indoor Environment

    Czech Academy of Sciences Publication Activity Database

    Věchet, Stanislav; Chen, K.-S.; Krejsa, Jiří

    2013-01-01

    Roč. 3, č. 4 (2013), s. 273-277 ISSN 2223-9766 Institutional support: RVO:61388998 Keywords : particle filters * autonomous mobile robots * mixed potential fields Subject RIV: JD - Computer Applications, Robotics http://www.ausmt.org/index.php/AUSMT/article/view/214/239

  10. Online Aerial Terrain Mapping for Ground Robot Navigation

    Directory of Open Access Journals (Sweden)

    John Peterson

    2018-02-01

    Full Text Available This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle’s overhead view to inform the ground vehicle’s path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS), and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles.
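    The step from terrain classes to energy-efficient paths can be sketched as a per-class cost table feeding a shortest-path search. The class names, costs and grid below are illustrative placeholders; the paper estimates the costs from classified aerial imagery.

    import heapq

    # Hypothetical per-class traversal costs (illustrative only).
    CLASS_COST = {"pavement": 1.0, "grass": 2.5, "gravel": 4.0, "water": float("inf")}

    def min_cost_path(grid, start, goal):
        """Dijkstra search over a grid of terrain-class labels; the cost of
        entering a cell is the traversal cost of its terrain class."""
        rows, cols = len(grid), len(grid[0])
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, cell = heapq.heappop(pq)
            if cell == goal:
                break
            if d > dist.get(cell, float("inf")):
                continue                      # stale queue entry
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + CLASS_COST[grid[nr][nc]]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)], prev[(nr, nc)] = nd, cell
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, cell = [goal], goal
        while cell in prev:                   # walk back to the start
            cell = prev[cell]
            path.append(cell)
        return path[::-1]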

  11. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    , while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...

  12. Robotic microlaryngeal phonosurgery: Testing of a "steady-hand" microsurgery platform.

    Science.gov (United States)

    Akst, Lee M; Olds, Kevin C; Balicki, Marcin; Chalasani, Preetham; Taylor, Russell H

    2018-01-01

    To evaluate gains in microlaryngeal precision achieved by using a novel robotic "steady hand" microsurgery platform in performing simulated phonosurgical tasks. Crossover comparative study of surgical performance and descriptive analysis of surgeon feedback. A novel robotic ear, nose, and throat microsurgery system (REMS) was tested in simulated phonosurgery. Participants navigated a 0.4-mm-wide microlaryngeal needle through spirals of varying widths, both with and without robotic assistance. Fail time (time the needle contacted spiral edges) was measured, and statistical comparison was performed. Participants were surveyed to provide subjective feedback on the REMS. Nine participants performed the task at three spiral widths, yielding 27 paired testing conditions. In 24 of 27 conditions, robot-assisted performance was better than unassisted; five trials were errorless, all achieved with the robot. Paired analysis of all conditions revealed fail time of 0.769 ± 0.568 seconds manually, improving to 0.284 ± 0.584 seconds with the robot (P = .003). Analysis of individual spiral sizes showed statistically better performance with the REMS at spiral widths of 2 mm (0.156 ± 0.226 seconds vs. 0.549 ± 0.545 seconds, P = .019) and 1.5 mm (0.075 ± 0.099 seconds vs. 0.890 ± 0.518 seconds, P = .002). At 1.2 mm, all nine participants together showed similar performance with and without robotic assistance (0.621 ± 0.923 seconds vs. 0.868 ± 0.634 seconds, P = .52), though subgroup analysis of five surgeons most familiar with microlaryngoscopy showed statistically better performance with the robot (0.204 ± 0.164 seconds vs. 0.664 ± 0.354 seconds, P = .036). The REMS is a novel platform with potential applications in microlaryngeal phonosurgery. Further feasibility studies and preclinical testing should be pursued as a bridge to eventual clinical use. NA. Laryngoscope, 128:126-132, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Autonomous Rule Based Robot Navigation In Orchards

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel

    2010-01-01

    Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows where the GPS coverage is poor. This paper suggests localization ...

  14. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features.

    Science.gov (United States)

    Ye, Cang; Hong, Soonhac; Tamjidi, Amirhossein

    2015-10-01

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers from the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method results in accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the RNA user in the home environment and announces the points of interest and navigational commands to the user through a speech interface. This work was motivated by the limitations of the existing navigation technology for the visually impaired. Most of the existing methods use a point/line measurement sensor for indoor object detection. Therefore, they lack capability in detecting 3D objects and positioning a blind traveler. Stereovision has been used in recent research. However, it cannot provide reliable depth data for object detection. Also, it tends to produce a lower localization accuracy because its depth measurement error increases quadratically with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time-of-flight camera for both 6-DOF PE and 3D object
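    The floor-plane observation described above amounts to a scalar EKF measurement update on the z component of the pose state. A minimal sketch follows; the state layout (z stored at index 2) and the noise value are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def update_z_from_floor(x, P, z_meas, r_var=1e-4):
        """EKF measurement update that uses the camera height above the
        floor plane as a direct observation of the state's z component.
        x: state vector with x[2] = z, P: state covariance, z_meas: height."""
        H = np.zeros((1, x.size))
        H[0, 2] = 1.0                                 # h(x) = z
        y = np.array([z_meas - x[2]])                 # innovation
        S = H @ P @ H.T + r_var                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain (n x 1)
        x_new = x + (K @ y)                           # corrected state
        P_new = (np.eye(x.size) - K @ H) @ P          # corrected covariance
        return x_new, P_new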

  15. Positioning performance improvements with European multiple-frequency satellite navigation - Galileo

    Science.gov (United States)

    Ji, Shengyue

    2008-10-01

    The rapid development of the Global Positioning System has demonstrated the advantages of satellite-based navigation systems. In the near future, there will be a number of Global Navigation Satellite Systems (GNSS) available, i.e. modernized GPS, Galileo, restored GLONASS, BeiDou and many other regional GNSS augmentation systems. Undoubtedly, the new GNSS systems will significantly improve navigation performance over current GPS, with better satellite coverage and multiple satellite signal bands. In this dissertation, the positioning performance improvement of new GNSS has been investigated based on both theoretical analysis and numerical study. First of all, the navigation performance of new GNSS systems has been analyzed, particularly for urban applications. The study has demonstrated that Receiver Autonomous Integrity Monitoring (RAIM) performance can be significantly improved with multiple satellite constellations, although the position accuracy improvement is limited. Based on a three-dimensional urban building model of Hong Kong streets, it is found that positioning availability is still very low in high-rise urban areas, even with three GNSS systems. On the other hand, the discontinuity of navigation solutions is significantly reduced with the combined constellations. Therefore, it is possible to use cheap DR systems to bridge the gaps in GNSS positioning with high accuracy. Secondly, the ambiguity resolution performance has been investigated with Galileo multiple frequency band signals. The ambiguity resolution performance of three different algorithms is compared, including CAR, ILS and improved CAR methods (a new method proposed in this study). For short baselines, with four-frequency Galileo data, it is highly possible to achieve reliable single-epoch ambiguity resolution when the carrier phase noise level is reasonably low (i.e. less than 6 mm). For long baselines (up to 800 km), the integer ambiguity can be determined within 1 min on average. Ambiguity

  16. Robot Actors, Robot Dramaturgies

    DEFF Research Database (Denmark)

    Jochum, Elizabeth

    This paper considers the use of tele-operated robots in live performance. Robots and performance have long been linked, from the working androids and automata staged in popular exhibitions during the nineteenth century and the robots featured at Cybernetic Serendipity (1968) and the World Expo...

  17. Redundant Sensors for Mobile Robot Navigation

    Science.gov (United States)

    1985-09-01

    represent a probability that the area is empty, while positive numbers mean it is probably occupied. Zero represents the unknown. The basic idea is that ... room to give it absolute positioning information. This works by using two infrared emitters and detectors on the robot. Measurements of angles are made ... meters (T in Kelvin) 273 sec. Distances returned when assuming 80 degrees Fahrenheit, but where the actual temperature is 60 degrees, will be seven inches

  18. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses.

    Science.gov (United States)

    Roldán, Juan Jesús; Garcia-Aunon, Pablo; Garzón, Mario; de León, Jorge; Del Cerro, Jaime; Barrientos, Antonio

    2016-07-01

    The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments.

  19. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses

    Directory of Open Access Journals (Sweden)

    Juan Jesús Roldán

    2016-07-01

    Full Text Available The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments.

  20. HexaMob—A Hybrid Modular Robotic Design for Implementing Biomimetic Structures

    Directory of Open Access Journals (Sweden)

    Sasanka Sankhar Reddy CH.

    2017-10-01

    Full Text Available Modular robots are capable of forming primitive shapes such as lattice and chain structures, with the additional flexibility of distributed sensing. The biomimetic structures developed using such modular units provide ease of replacement and reconfiguration in coordinated structures, transportation, etc. in real-life scenarios. Although research on employing modular robotic units to form biological organisms is still in a nascent stage, modular robotic units are already capable of forming such sophisticated structures. The modular robotic designs proposed so far vary significantly in external structure, sensor-actuator mechanisms, interfaces for docking and undocking, techniques for providing mobility, coordinated structures, locomotion, etc., and each design attempts to address various challenges in the domain of modular robotics by employing different strategies. This paper presents a novel modular wheeled robotic design, HexaMob, providing four degrees of freedom (2 degrees for mobility and 2 degrees for structural reconfiguration) on a single module with minimal usage of sensor-actuator assemblies. Crucial features of modular robotics such as back-driving restriction, docking, and navigation are addressed in the HexaMob design. The proposed docking mechanism is enabled using a vision sensor, enhancing the capabilities in docking as well as navigation in coordinated structures such as humanoid robots.

  1. Robotic Software Integration Using MARIE

    Directory of Open Access Journals (Sweden)

    Carle Côté

    2006-03-01

    Full Text Available This paper presents MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems. By using a generic communication framework, MARIE aims to create a flexible distributed component system that allows robotics developers to share software programs and algorithms, and design prototypes rapidly based on their own integration needs. The use of MARIE is illustrated with the design of a socially interactive autonomous mobile robot platform capable of map building, localization, navigation, tasks scheduling, sound source localization, tracking and separation, speech recognition and generation, visual tracking, message reading and graphical interaction using a touch screen interface.

  2. Fuzzy Logic Supervised Teleoperation Control for Mobile Robot

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A supervised teleoperation control scheme based on fuzzy logic is presented for a mobile robot. The teleoperation control system includes a joystick-based user interaction mechanism, a high-level instruction set and fuzzy logic behaviors, integrated into a supervised-autonomy teleoperation control system for indoor navigation. These behaviors include left wall following, right wall following, turn left, turn right, left obstacle avoidance, right obstacle avoidance and corridor following, based on ultrasonic range-finder data. The robot compares the high-level command from the operator with its sensed environment and relays a suggestive signal back to the operator in case of a mismatch. This strategy relieves the operator's cognitive burden and handles unforeseen situations and environmental uncertainties autonomously. The effectiveness of the proposed method for navigation in an unstructured environment is verified by experiments conducted on a mobile robot equipped with only ultrasonic range finders for environment sensing.
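    A minimal sketch of how the listed behaviors could be selected from fuzzified range readings is given below; the membership functions, thresholds and behavior names are invented for illustration and are not the paper's rule base.

    def trapezoid(x, a, b, c, d):
        """Simple trapezoidal membership function on [a, d] with plateau [b, c]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def select_behavior(left_range, front_range, right_range):
        """Return the behavior with the highest fuzzy activation (ranges in meters)."""
        near = lambda r: trapezoid(r, 0.0, 0.05, 0.4, 0.8)
        far = lambda r: 1.0 - near(r)
        activations = {
            "left_obstacle_avoidance":  min(near(left_range), far(right_range)),
            "right_obstacle_avoidance": min(near(right_range), far(left_range)),
            "corridor_following":       min(near(left_range), near(right_range), far(front_range)),
            "turn_left":                min(near(front_range), far(left_range)),
            "turn_right":               min(near(front_range), far(right_range)),
        }
        return max(activations, key=activations.get)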

  3. Robotic inspection technology - process and toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, Markus [ROSEN Group (United States). R and D Dept.

    2005-07-01

    Pipeline deterioration grows progressively with the aging of pipeline systems (on-plot and cross-country). This includes both very localized corrosion and increasing failure probability due to fatigue cracking. Limiting regular inspection activities to the 'scrapable' part of the pipelines only will ultimately result in a pipeline system with questionable integrity, and the confidence level in the integrity of these systems will drop below acceptance levels. Inspection of presently un-inspectable sections of the pipeline system therefore becomes a must. This paper provides information on ROSEN's progress on the 'robotic inspection technology' project. The robotic inspection concept developed by ROSEN is based on a modular toolbox principle. This is mandatory: a universal 'all purpose' robot would not be reliable and efficient in resolving the postulated inspection task. A preparatory Quality Function Deployment (QFD) analysis is performed prior to the decision about the adequate robotic solution, which enhances the serviceability and efficiency of the provided technology. The word 'robotic' can be understood in its full meaning of Recognition - Strategy - Motion - Control. Cooperation of different individual systems with established communication, e.g. utilizing Bluetooth technology, supports the robustness of the ROSEN robotic inspection approach. Besides the navigation strategy, the inspection strategy is also part of the QFD process. Multiple inspection technologies combined on a single carrier or distributed across interacting containers must be selected with a clear vision of the particular goal. (author)

  4. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.
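    As a rough illustration of the "controlling neural network" idea, the sketch below maps a few range readings to the two wheel speeds with a single hidden layer; the network size, the untrained random weights and the speed scaling are placeholders, and the ART1-based training-set filtering described in the article is not shown.

    import numpy as np

    class TinyController:
        """Single-hidden-layer network: range readings in, wheel speeds out."""
        def __init__(self, n_inputs=3, n_hidden=5, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0.0, 0.5, (n_hidden, n_inputs))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(0.0, 0.5, (2, n_hidden))   # two motor outputs
            self.b2 = np.zeros(2)

        def forward(self, ranges):
            h = np.tanh(self.w1 @ np.asarray(ranges, float) + self.b1)
            return np.tanh(self.w2 @ h + self.b2) * 100.0   # percent of full speed

    controller = TinyController()
    left_speed, right_speed = controller.forward([0.8, 0.3, 1.2])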

  5. Middleware Interoperability for Robotics: A ROS-YARP Framework

    Directory of Open Access Journals (Sweden)

    Plinio Moreno

    2016-10-01

    Full Text Available Middlewares are fundamental tools for progress in research and applications in robotics. They enable the integration of multiple heterogeneous sensing and actuation devices, as well as providing general purpose modules for key robotics functions (kinematics, navigation, planning). However, no existing middleware yet provides a complete set of functionalities for all robotics applications, and many robots may need to rely on more than one framework. This paper focuses on the interoperability between two of the most prevalent middleware in robotics: YARP and ROS. Interoperability between middlewares should ideally allow users to execute existing software without the necessity of: (i) changing the existing code, and (ii) writing hand-coded "bridges" for each use-case. We propose a framework enabling the communication between existing YARP modules and ROS nodes for robotics applications in an automated way. Our approach generates the "bridging gap" code from a configuration file, connecting YARP ports and ROS topics through code-generated YARP Bottles. The configuration file must describe: (i) the sender entities, (ii) the way to group and convert the information read from the sender, (iii) the structure of the output message and (iv) the receiving entity. Our choice of many inputs to one output is the most common use-case in robotics applications, where examples include filtering, decision making and visualization. We support YARP/ROS and ROS/YARP sender/receiver configurations, which are demonstrated in a humanoid on wheels robot that uses YARP for upper body motor control and visual perception, and ROS for mobile base control and navigation algorithms.

  6. Orchard navigation using derivative free Kalman filtering

    DEFF Research Database (Denmark)

    Hansen, Søren; Bayramoglu, Enis; Andersen, Jens Christian

    2011-01-01

    This paper describes the use of derivative free filters for mobile robot localization and navigation in an orchard. The localization algorithm fuses odometry and gyro measurements with line features representing the surrounding fruit trees of the orchard. The line features are created on basis of 2...

  7. CSIR Centre for Mining Innovation and the mine safety platform robot

    CSIR Research Space (South Africa)

    Green, JJ

    2012-11-01

    Full Text Available The Council for Scientific and Industrial Research (CSIR) in South Africa is currently developing a robot for the inspection of the ceiling (hanging wall) in an underground gold mine. The robot autonomously navigates the 30 meter long by 3 meter...

  8. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  9. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    Science.gov (United States)

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration and kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot is evaluated inside a 3-Tesla MRI scanner using standard imaging sequences, and the signal-to-noise ratio (SNR) loss is limited to 15%. Image deterioration due to the presence and motion of the robot is not observable. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962

  10. Adaptive Iterated Extended Kalman Filter and Its Application to Autonomous Integrated Navigation for Indoor Robot

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-01-01

    Full Text Available As the core of an integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which adds a noise statistics estimator to the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in an inertial navigation system (INS)/wireless sensor network (WSN)-integrated navigation system. A practical test has been done to evaluate the performance of the proposed method. The results show that the proposed method reduces the root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with the INS only, WSN, EKF, and IEKF, respectively.
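    The two ingredients named in the abstract, an iterated measurement update and an estimator of the measurement-noise statistics, can be sketched as below. The linearization interface, the exponential-forgetting scheme and the factor alpha are assumptions for illustration; the paper's estimator may differ.

    import numpy as np

    def iterated_update(x, P, z, h, H_jac, R, n_iter=3):
        """Iterated EKF measurement update: relinearize the measurement
        function h around the latest estimate several times instead of once."""
        x_i = x.copy()
        for _ in range(n_iter):
            H = H_jac(x_i)                              # Jacobian at current iterate
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_i = x + K @ (z - h(x_i) - H @ (x - x_i))  # Gauss-Newton style step
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_i, P_new

    def adapt_R(R, innovation, H, P, alpha=0.05):
        """Exponentially-weighted update of the measurement noise covariance
        from the latest innovation (one common adaptive scheme)."""
        v = np.atleast_1d(innovation).reshape(-1, 1)
        return (1.0 - alpha) * R + alpha * (v @ v.T - H @ P @ H.T)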

  11. A Novel Telemanipulated Robotic Assistant for Surgical Endoscopy: Preclinical Application to ESD.

    Science.gov (United States)

    Zorn, Lucile; Nageotte, Florent; Zanne, Philippe; Legner, Andras; Dallemagne, Bernard; Marescaux, Jacques; de Mathelin, Michel

    2018-04-01

    Minimally invasive surgical interventions in the gastrointestinal tract, such as endoscopic submucosal dissection (ESD), are very difficult for surgeons when performed with standard flexible endoscopes. Robotic flexible systems have been identified as a solution to improve manipulation. However, only a few such systems have been brought to preclinical trials as of now. As a result, novel robotic tools are required. We developed a telemanipulated robotic device, called STRAS, which aims to assist surgeons during intraluminal surgical endoscopy. This is a modular system, based on a flexible endoscope and flexible instruments, which provides 10 degrees of freedom (DoFs). The modularity allows the user to easily set up the robot and to navigate toward the operating area. The robot can then be teleoperated using master interfaces specifically designed to intuitively control all available DoFs. STRAS capabilities have been tested in laboratory conditions and during preclinical experiments. We report 12 colorectal ESDs performed in pigs, in which large lesions were successfully removed. Dissection speeds are compared with those obtained in similar conditions with the manual Anubiscope platform from Karl Storz, and we show significant improvements. These experiments show that STRAS (v2) provides sufficient DoFs, workspace, and force to perform ESD, that it allows a single surgeon to perform all the surgical tasks, and that performance is improved with respect to manual systems. The concepts developed for STRAS are validated and could bring new tools for surgeons to improve comfort, ease, and performance for intraluminal surgical endoscopy.

  12. JACoW A dual arms robotic platform control for navigation, inspection and telemanipulation

    CERN Document Server

    Di Castro, Mario; Ferre, Manuel; Gilardoni, Simone; Losito, Roberto; Lunghi, Giacomo; Masi, Alessandro

    2018-01-01

    High intensity hadron colliders and fixed target experiments require an increasing amount of robotic tele-manipulation to prevent excessive exposure of maintenance personnel to the radioactive environment. Telemanipulation tasks are often required on old radioactive devices not conceived to be maintained and handled using standard industrial robotic solutions. Robotic platforms with a level of dexterity that often require the use of two robotic arms with a minimum of six degrees of freedom are instead needed for these purposes. In this paper, the control of a novel robust robotic platform able to host and to carry safely a dual robotic arm system is presented. The control of the arms is fully integrated with the vehicle control in order to guarantee simplicity to the operators during the realization of the robotic tasks. A novel high-level control architecture for the new robot is shown, as well as a novel low level safety layer for anti-collision and recovery scenarios. Preliminary results of the system comm...

  13. Design of the Dual Offset Active Caster Wheel for Holonomic Omni-Directional Mobile Robots

    Directory of Open Access Journals (Sweden)

    Woojin Chung

    2010-12-01

    Full Text Available It is shown how a holonomic, omni-directional mobile robot is designed for indoor public services. Dual offset steerable wheels with orthogonal velocity components are proposed. The proposed wheel provides precise positioning and reliable navigation performance as well as durability. A fabricated prototype is introduced, and an experiment is carried out.

  14. Human Performance Assessments when Using Augmented Reality for Navigation

    National Research Council Canada - National Science Library

    Goldiez, Brian F; Saptoka, Nabin; Aedunuthula, Prashanth

    2006-01-01

    Human performance executing search and rescue type of navigation is one area that can benefit from augmented reality technology when the proper computer generated information is added to a real scene...

  15. Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-Based Navigation in GPS-Denied Environments

    Directory of Open Access Journals (Sweden)

    Augustin Manecy

    2014-04-01

    Full Text Available Hovering flies are able to stay still in place when hovering above flowers and burst into movement towards a new object of interest (a target). This suggests that the sensorimotor control loops implemented onboard could be usefully mimicked for controlling Unmanned Aerial Vehicles (UAVs). In this study, the fundamental head-body movements occurring in free-flying insects were simulated in a sighted twin-engine robot with a mechanical decoupling inserted between its eye (or gaze) and its body. The robot based on this gaze control system achieved robust and accurate hovering performance, without an accelerometer, over a ground target despite a narrow eye field of view (±5°). The gaze stabilization strategy, validated under Processor-In-the-Loop (PIL) conditions and inspired by three biological Oculomotor Reflexes (ORs), enables the aerial robot to lock its gaze onto a fixed target regardless of its roll angle. In addition, the gaze control mechanism allows the robot to perform short-range target-to-target navigation by triggering an automatic fast “target jump” behaviour based on a saccadic eye movement.
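    The decoupling idea can be sketched as a simple proportional loop: the eye-in-body angle is driven so that the gaze (body angle plus eye angle) stays locked on the target whatever the body does. The angles and gain below are illustrative; the actual system implements oculomotor-reflex-inspired control with visual feedback.

    def gaze_rate_command(target_bearing, body_roll, eye_angle, kp=4.0):
        """Return the eye actuator rate (rad/s) that keeps the gaze locked
        on the target. All angles are in radians, in a common world frame."""
        gaze = body_roll + eye_angle           # direction the eye actually points
        retinal_error = target_bearing - gaze  # residual error seen by the eye
        return kp * retinal_error              # rotate the eye to cancel it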

  16. Forgetting Bad Behavior: Memory Management for Case-Based Navigation

    National Research Council Canada - National Science Library

    Kira, Zsolt; Arkin, Ronald C

    2006-01-01

    ...) system applied to autonomous robot navigation. This extends previous work that involved a CBR architecture that indexes cases by the spatio-temporal characteristics of the sensor data, and outputs or selects parameters of behaviors in a behavior...

  17. The effect of music on robot-assisted laparoscopic surgical performance.

    Science.gov (United States)

    Siu, Ka-Chun; Suh, Irene H; Mukherjee, Mukul; Oleynikov, Dmitry; Stergiou, Nick

    2010-12-01

    Music is often played in the operating room to increase the surgeon's concentration and to mask noise. It could have a beneficial effect on surgical performance. Ten participants with limited experience with the da Vinci robotic surgical system were recruited to perform two surgical tasks: suture tying and mesh alignment when classical, jazz, hip-hop, and Jamaican music were presented. Kinematics of the instrument tips of the surgical robot and surface electromyography of the subjects were recorded. Results revealed that a significant music effect was found for both tasks with decreased time to task completion (P = .005) and total travel distance (P = .021) as well as reduced muscle activations ( P = .016) and increased median muscle frequency (P = .034). Subjects improved their performance significantly when they listened to either hip-hop or Jamaican music. In conclusion, music with high rhythmicity has a beneficial effect on robotic surgical performance. Musical environment may benefit surgical training and make acquisition of surgical skills more efficient.

  18. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able to coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment where humans potentially want to interact with it. The designed system consists ... system that uses a potential field to derive motion that respects the person's social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person ... wishes to interact, and that the system is capable of adapting to changing behaviours of the humans in the environment ...
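    The potential-field idea referred to above can be sketched as an attractive goal term plus a repulsive "social zone" term centred on the person. The quadratic and Gaussian shapes and the gains are placeholders; the actual system also adapts the field to the person's estimated interest in interaction.

    import numpy as np

    def velocity_command(robot, goal, person, k_att=1.0, k_soc=2.0, sigma=1.2):
        """Negative gradient of U = k_att*|robot-goal|^2/2 + k_soc*Gaussian bump
        around the person; returns a 2D velocity command for the robot."""
        robot, goal, person = (np.asarray(v, float) for v in (robot, goal, person))
        grad_att = k_att * (robot - goal)                 # pull toward the goal
        d = robot - person
        bump = np.exp(-np.dot(d, d) / (2.0 * sigma**2))   # social-zone potential
        grad_soc = -k_soc * d / sigma**2 * bump           # its gradient
        return -(grad_att + grad_soc)                     # descend the total field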

  19. High Precision GNSS Guidance for Field Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ladislav Jurišica

    2012-11-01

    Full Text Available In this paper, we discuss GNSS (Global Navigation Satellite System) guidance for field mobile robots. Several GNSS systems and receivers, as well as multiple measurement methods and principles of GNSS systems are examined. We focus mainly on sources of errors and investigate diverse approaches for precise measuring and effective use of GNSS systems for real-time robot localization. The main body of the article compares two GNSS receivers and their measurement methods. We design, implement and evaluate several mathematical methods for precise robot localization.

  20. Development and application of underwater robot vehicle for close inspection of spent fuels

    Energy Technology Data Exchange (ETDEWEB)

    Yun, J. S.; Park, B. S.; Song, T. G.; Kim, S. H.; Cho, M. W.; Ahn, S. H.; Lee, J. Y.; Oh, S. C.; Oh, W. J.; Shin, K. W.; Woo, D. H.; Kim, H. G.; Park, J. S

    1999-12-01

    The research and development efforts on the underwater robotic vehicle for inspection of spent fuels are focused on the development of a robotic vehicle that inspects spent fuels in the storage pool through remotely controlled actuation. For this purpose, a self-balanced vehicle actuated by propellers is designed and fabricated, consisting of a radiation-resistant camera, two illuminators, a pressure transducer and a manipulator. The algorithm for autonomous navigation is developed and its performance is tested in a swimming pool. The results show that the vehicle can easily navigate in arbitrary directions while maintaining its balanced position. The camera provides a clear view of the working environment by using the macro and zoom functions. The camera tilt device provides a field of view wide enough for monitoring the operation of the manipulator. Also, the manipulator can pick up dropped objects weighing up to 4 kgf. (author)

  1. Augmented-reality integrated robotics in neurosurgery: are we there yet?

    Science.gov (United States)

    Madhavan, Karthik; Kolcun, John Paul G; Chieng, Lee Onn; Wang, Michael Y

    2017-05-01

    Surgical robots have captured the interest-if not the widespread acceptance-of spinal neurosurgeons. But successful innovation, scientific or commercial, requires the majority to adopt a new practice. "Faster, better, cheaper" products should in theory conquer the market, but often fail. The psychology of change is complex, and the "follow the leader" mentality, common in the field today, lends little trust to the process of disseminating new technology. Beyond product quality, timing has proven to be a key factor in the inception, design, and execution of new technologies. Although the first robotic surgery was performed in 1985, scant progress was seen until the era of minimally invasive surgery. This movement increased neurosurgeons' dependence on navigation and fluoroscopy, intensifying the drive for enhanced precision. Outside the field of medicine, various technology companies have made great progress in popularizing co-robots ("cobots"), augmented reality, and processor chips. This has helped to ease practicing surgeons into familiarity with and acceptance of these technologies. The adoption among neurosurgeons in training is a "follow the leader" phenomenon, wherein new surgeons tend to adopt the technology used during residency. In neurosurgery today, robots are limited to computers functioning between the surgeon and patient. Their functions are confined to establishing a trajectory for navigation, with task execution solely in the surgeon's hands. In this review, the authors discuss significant untapped technologies waiting to be used for more meaningful applications. They explore the history and current manifestations of various modern technologies, and project what innovations may lie ahead.

  2. Air Force construction automation/robotics

    Science.gov (United States)

    Nease, AL; Dusseault, Christopher

    1994-01-01

    The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible and that more stringent construction repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and associated computing environment. The ultimate goal is the fielding of robotic repair capability operating at the level of supervised autonomy. The authors of this paper will discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.

  3. Augmented reality user interface for mobile ground robots with manipulator arms

    Science.gov (United States)

    Vozar, Steven; Tilbury, Dawn M.

    2011-01-01

    Augmented Reality (AR) is a technology in which real-world visual data is combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated UGV UIs as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm is anticipated to improve operator performance. A full user-study can determine if this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.
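    The geometric core of such an overlay can be sketched as projecting a known 3D point from the robot task space into the camera image, so that a graphic can be drawn at the corresponding pixel. The pinhole intrinsics and the robot-to-camera transform below are assumed known and are illustrative only.

    import numpy as np

    def project_to_image(p_robot, T_cam_robot, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
        """Project a 3D point expressed in the robot frame into pixel coordinates.
        T_cam_robot: 4x4 homogeneous transform from robot frame to camera frame."""
        p = np.asarray(T_cam_robot, float) @ np.append(np.asarray(p_robot, float), 1.0)
        if p[2] <= 0:
            return None                      # behind the camera, nothing to draw
        u = fx * p[0] / p[2] + cx            # pinhole projection
        v = fy * p[1] / p[2] + cy
        return u, v                          # overlay marker drawn at (u, v)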

  4. A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation

    National Research Council Canada - National Science Library

    Rosenblum, Mark; Gothard, Benny

    2006-01-01

    .... In the military sense, appropriate navigation implies the robot will avoid collision or contact with hazards, will not be falsely re-routed around traversible terrain due to false hazard detections...

  5. Algorithms for Design of Continuum Robots Using the Concentric Tubes Approach: A Neurosurgical Example.

    Science.gov (United States)

    Anor, Tomer; Madsen, Joseph R; Dupont, Pierre

    2011-05-09

    We propose a novel systematic approach to optimizing the design of concentric tube robots for neurosurgical procedures. These procedures require that the robot approach specified target sites while navigating and operating within an anatomically constrained work space. The availability of preoperative imaging makes our approach particularly suited for neurosurgery, and we illustrate the method with the example of endoscopic choroid plexus ablation. A novel parameterization of the robot characteristics is used in conjunction with a global pattern search optimization method. The formulation returns the design of the least-complex robot capable of reaching single or multiple target points in a confined space with constrained optimization metrics. A particular advantage of this approach is that it identifies the need for either fixed-curvature versus variable-curvature sections. We demonstrate the performance of the method in four clinically relevant examples.

  6. Robot assisted navigated drilling for percutaneous pedicle screw placement: A preliminary animal study

    Directory of Open Access Journals (Sweden)

    Hongwei Wang

    2015-01-01

    Conclusions: This preliminary study supports the view that computer-assisted pedicle screw fixation using a spinal robot is feasible and that the robot can decrease intraoperative fluoroscopy time during minimally invasive pedicle screw fixation surgery. As spinal robotic surgery is still in its infancy, further research in this field is worthwhile; in particular, the accuracy of the spinal robot system should be improved.

  7. Robosapien Robot used to Model Humanoid Interaction to Perform tasks in Dangerous Manufacturing Environments

    International Nuclear Information System (INIS)

    Stopforth, R; Bright, G

    2014-01-01

    Humans are involved in accidents in manufacturing environments. One possibility for keeping humans out of these scenarios is to introduce humanoid robots within these industrial areas. This paper investigates the control scenario and environments required at a small-scale level, using the Robosapien robot. The Robosapien robot is modified and controlled to perform the task of removing a cylinder and inserting it into a hole. The performance of the Robosapien robot is analysed and related to that of a full humanoid robot. The paper concludes with a discussion and suggestions on the efficiency and profitability that would need to be considered for having a humanoid robot within the manufacturing environment

  8. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

    The three year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to evidence considerable decision making ability, navigation, and path planning skills.

  9. CLARAty: Challenges and Steps Toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Richard Madison

    2008-11-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  10. CLARAty: Challenges and Steps toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Issa A.D. Nesnas

    2006-03-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  11. A Case Study on a Capsule Robot in the Gastrointestinal Tract to Teach Robot Programming and Navigation

    Science.gov (United States)

    Guo, Yi; Zhang, Shubo; Ritter, Arthur; Man, Hong

    2014-01-01

    Despite the increasing importance of robotics, there is a significant challenge involved in teaching this to undergraduate students in biomedical engineering (BME) and other related disciplines in which robotics techniques could be readily applied. This paper addresses this challenge through the development and pilot testing of a bio-microrobotics…

  12. Experiences with a Barista Robot, FusionBot

    Science.gov (United States)

    Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou

    In this paper, we describe an implemented service robot called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In experiments conducted during the TechFest 2008 event, FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.

  13. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
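    The "learning by doing" loop described above can be sketched as random motor babbling followed by fitting an inverse model that maps desired sensory outcomes back to motor commands. The simulated plant and the linear least-squares fit below are stand-ins for the Calliope robot and the adaptive neural network used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulated_plant(command):
        """Stand-in for the robot plus its vision system: maps a 2D motor command
        to a 2D sensed displacement (the mapping is unknown to the learner)."""
        A_true = np.array([[0.8, 0.1], [-0.2, 1.1]])
        return A_true @ command + rng.normal(0.0, 0.01, 2)

    # 1) Motor babbling: issue random commands and record the sensed outcomes.
    commands = rng.uniform(-1.0, 1.0, (200, 2))
    outcomes = np.array([simulated_plant(c) for c in commands])

    # 2) Fit a linear inverse model X such that command ~= outcome @ X.
    X, *_ = np.linalg.lstsq(outcomes, commands, rcond=None)

    # 3) Motor planning: the command predicted to produce a desired outcome.
    desired_outcome = np.array([0.3, -0.2])
    planned_command = desired_outcome @ X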

  14. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  15. 4th IFToMM International Symposium on Robotics and Mechatronics

    CERN Document Server

    Laribi, Med; Gazeau, Jean-Pierre

    2016-01-01

    This volume contains papers that have been selected after review for oral presentation at ISRM 2015, the Fourth IFToMM International Symposium on Robotics and Mechatronics held in Poitiers, France 23-24 June 2015. These papers  provide a vision of the evolution of the disciplines of robotics and mechatronics, including but not limited to: mechanism design; modeling and simulation; kinematics and dynamics of multibody systems; control methods; navigation and motion planning; sensors and actuators; bio-robotics; micro/nano-robotics; complex robotic systems; walking machines, humanoids-parallel kinematic structures: analysis and synthesis; smart devices; new design; application and prototypes. The book can be used by researchers and engineers in the relevant areas of robotics and mechatronics.

  16. Performances on simulator and da Vinci robot on subjects with and without surgical background.

    Science.gov (United States)

    Moglia, Andrea; Ferrari, Vincenzo; Melfi, Franca; Ferrari, Mauro; Mosca, Franco; Cuschieri, Alfred; Morelli, Luca

    2017-08-17

    To assess whether previous training in surgery influences performance on the da Vinci Skills Simulator and the da Vinci robot. In this prospective study, thirty-seven participants (11 medical students, 17 residents, and 9 attending surgeons) without previous experience in laparoscopy and robotic surgery performed 26 exercises at the da Vinci Skills Simulator. Thirty-five then executed a suture using a da Vinci robot. The overall scores on the exercises at the da Vinci Skills Simulator show a similar performance among the groups, with no statistically significant pair-wise differences; scores were poor for the untrained groups (5 (3.5, 9)), without statistically significant difference. This study showed, for subjects new to laparoscopy and robotic surgery, insignificant differences in the scores at the da Vinci Skills Simulator and at the da Vinci robot on inanimate models.

  17. Development of the first force-controlled robot for otoneurosurgery.

    Science.gov (United States)

    Federspil, Philipp A; Geisthoff, Urban W; Henrich, Dominik; Plinkert, Peter K

    2003-03-01

    In some surgical specialties (eg, orthopedics), robots are already used in the operating room for bony milling work. Otological surgery and otoneurosurgery may also greatly benefit from the enhanced precision of robotics. Experimental study on robotic milling of oak wood and human temporal bone specimens. A standard industrial robot with six-degrees-of-freedom serial kinematics was used, with force feedback to proportionally control the robot speed. Different milling modes and characteristic path parameters were evaluated to generate milling paths based on computer-aided design (CAD) geometry data of a cochlear implant and an implantable hearing system. The best-suited strategy proved to be the spiral horizontal milling mode with the burr held perpendicular to the temporal bone surface. To reduce groove height, the distance between paths should equal half the radius of the cutting burr head. Because of the vibration of the robot's own motors, a high oscillation of the SD of forces was encountered. This oscillation dropped drastically to nearly 0 Newton (N) when the burr head made contact with the dura mater, because of its damping characteristics. The cutting burr could be kept in contact with the dura mater for an extended period without damaging it, because of the burr's blunt head form. The robot moved the burr smoothly according to the encountered resistances. The study reports the first development of a functional robotic milling procedure for otoneurosurgery with force-based speed control. Future plans include implementation of ultrasound-based local navigation and performance of robotic mastoidectomy.
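
    The force-based speed control reported above can be pictured with a small sketch: the feed rate shrinks as the measured cutting force approaches a limit. The limit and speed values are assumptions, not figures from the study.

        # Force-proportional feed-rate sketch; the force limit and speed range
        # are assumed values, not figures from the study.
        def feed_rate(f_measured, f_limit=3.0, v_max=2.0, v_min=0.1):
            """Return a feed rate (mm/s) that shrinks linearly as the measured
            milling force (N) approaches the limit."""
            scale = max(0.0, 1.0 - f_measured / f_limit)
            return max(v_min, v_max * scale)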

  18. Ultra-Wideband Tracking System Design for Relative Navigation

    Science.gov (United States)

    Ni, Jianjun David; Arndt, Dickey; Bgo, Phong; Dekome, Kent; Dusl, John

    2011-01-01

    This presentation briefly discusses a design effort for a prototype ultra-wideband (UWB) time-difference-of-arrival (TDOA) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being designed for use in localization and navigation of a rover in a GPS-deprived environment for surface missions. In one application enabled by the UWB tracking, a robotic vehicle carrying equipment can autonomously follow a crewed rover from work site to work site, so that resources can be carried from one landing mission to the next, thereby saving up-mass. The UWB Systems Group at JSC has developed a UWB TDOA High Resolution Proximity Tracking System which can achieve sub-inch tracking accuracy of a target within the radius of the tracking baseline [1]. By extending the tracking capability beyond the radius of the tracking baseline, a tracking system is being designed to enable relative navigation between two vehicles for surface missions. A prototype UWB TDOA tracking system has been designed, implemented, tested, and proven feasible for relative navigation of robotic vehicles. Future work includes testing the system with the application code to increase the tracking update rate and evaluating the linear tracking baseline to improve the flexibility of antenna mounting on the following vehicle.
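
    As a rough illustration of TDOA localization in general (not the JSC implementation), the target position can be estimated from receiver positions and measured time differences by nonlinear least squares:

        # Illustrative TDOA solver: estimate a position from receiver locations
        # and time-differences-of-arrival measured relative to receiver 0.
        import numpy as np
        from scipy.optimize import least_squares

        C = 0.299792458  # propagation speed, metres per nanosecond

        def locate(receivers, tdoa_ns, x0=np.zeros(3)):
            """receivers: (N, 3) positions; tdoa_ns: (N-1,) TDOAs w.r.t. receiver 0."""
            receivers = np.asarray(receivers, dtype=float)

            def residual(x):
                d = np.linalg.norm(receivers - x, axis=1)
                return (d[1:] - d[0]) - C * np.asarray(tdoa_ns)

            return least_squares(residual, x0).x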

  19. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Mo, Se Hyun [Amotech, Seoul (Korea, Republic of); Jeon, Young Pil [Samsung Electronics Co., Ltd. Suwon (Korea, Republic of); Park, Jong Ho [Seonam Univ., Namwon (Korea, Republic of); Chong, Kil To [Chonbuk Nat'l Univ., Jeonju (Korea, Republic of)

    2017-07-15

    With the development of ICT technology, the indoor utilization of robots is increasing. Research on transportation, cleaning, guidance robots, etc., that can be used now or increase the scope of future use will be advanced. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research tries to address the issues related to self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of encoder information from the mobile robot and the absolute space coordinate transformation information obtained from external video sources, such as the large number of CCTVs installed in the room. Furthermore, the vector field histogram method was applied as the path-traveling algorithm of the mobile robot system, and the results of the research were confirmed through experiments.
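
    A minimal sketch of the kind of fusion described, combining encoder dead reckoning with an absolute fix from external cameras, is given below; the gain and the differential-drive model are assumptions, not the paper's estimator.

        # Dead-reckon with wheel encoders, then pull the pose toward an absolute
        # (x, y, heading) fix from the external cameras; gain is an assumed value.
        import numpy as np

        def predict(pose, d_left, d_right, wheel_base):
            """Differential-drive odometry update; pose = [x, y, theta]."""
            d = 0.5 * (d_left + d_right)
            dtheta = (d_right - d_left) / wheel_base
            x, y, th = pose
            return np.array([x + d * np.cos(th + 0.5 * dtheta),
                             y + d * np.sin(th + 0.5 * dtheta),
                             th + dtheta])

        def correct(pose, absolute_fix, gain=0.3):
            """Blend in a camera-derived absolute pose whenever one is available."""
            error = np.asarray(absolute_fix) - pose
            error[2] = np.arctan2(np.sin(error[2]), np.cos(error[2]))  # wrap heading
            return pose + gain * error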

  20. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    International Nuclear Information System (INIS)

    Mo, Se Hyun; Jeon, Young Pil; Park, Jong Ho; Chong, Kil To

    2017-01-01

    With the development of ICT technology, the indoor utilization of robots is increasing. Research on transportation, cleaning, guidance robots, etc., that can be used now or increase the scope of future use will be advanced. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research tries to address the issues related to self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of encoder information from the mobile robot and the absolute space coordinate transformation information obtained from external video sources, such as the large number of CCTVs installed in the room. Furthermore, the vector field histogram method was applied as the path-traveling algorithm of the mobile robot system, and the results of the research were confirmed through experiments.

  1. A 3-D Miniature LIDAR System for Mobile Robot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Future lunar initiatives will demand sophisticated operation of mobile robotics platforms. In particular, lunar site operations will benefit from robots, both...

  2. Developing Autonomous Vehicles That Learn to Navigate by Mimicking Human Behavior

    Science.gov (United States)

    2006-09-28

    Final report, 28 September 2006, by Dean B. Edwards. The project developed autonomous vehicles that learn to navigate in an unstructured environment to a specific target or location by mimicking human behavior, with long-term goals built around the LAGR (Learning Applied to Ground Robots) program. Subject terms: autonomous vehicles, fuzzy logic, learning behavior.

  3. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    Science.gov (United States)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.

  4. Mobile Robots for Hospital Logistics

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan

    services to maintain the quality of healthcare provided. Logistics is the most resource demanding service in a hospital. The scale of the transportation tasks is huge and the material flow in a hospital is comparable to that of a factory. We believe that these transportation tasks, to a great extent, can be and will be automated using mobile robots. This talk consequently addresses the key technical issues of implementing service robots in hospitals. In simple terms, a robotic system for automating hospital logistics has to be reliable, adaptable and scalable. Robots have to be semi-autonomous, and should reliably navigate in large and dynamic environments in the hospital. The complexity of the problem has to be manageable, and the solutions have to be flexible, so that the system can be applicable in real world settings. This talk summarizes the efforts to address these issues. Upon the analysis...

  5. Drum inspection robots: Application development

    International Nuclear Information System (INIS)

    Hazen, F.B.; Warner, R.D.

    1996-01-01

    Throughout the Department of Energy (DOE), drums containing mixed and low level stored waste are inspected, as mandated by the Resource Conservation and Recovery Act (RCRA) and other regulations. The inspections are intended to prevent leaks by finding corrosion long before the drums are breached. The DOE Office of Science and Technology (OST) has sponsored efforts towards the development of robotic drum inspectors. This emerging application for mobile and remote sensing has broad applicability for DOE and commercial waste storage areas. Three full scale robot prototypes have been under development, and another project has prototyped a novel technique to analyze robotically collected drum images. In general, the robots consist of a mobile, self-navigating base vehicle, outfitted with sensor packages so that rust and other corrosion cues can be automatically identified. They promise the potential to lower radiation dose and operator effort required, while improving diligence, consistency, and documentation

  6. Real-time simulation for intra-operative navigation in robotic surgery. Using a mass spring system for a basic study of organ deformation.

    Science.gov (United States)

    Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G

    2007-01-01

    Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon, because it requires difficult techniques that are different from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by a surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node near the point of interaction, because this point ensures better accuracy for our simulation model. The next research step will be to focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
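
    The organ deformation calculator is based on a mass spring system; a toy explicit-Euler update of such a system (a generic illustration, not the authors' model) looks like this:

        # Toy mass-spring step: explicit Euler integration of point masses
        # connected by damped linear springs (parameters are placeholders).
        import numpy as np

        def step(pos, vel, springs, rest_len, k=50.0, c=0.5, mass=0.01, dt=1e-3):
            """pos, vel: (N, 3) arrays; springs: list of (i, j) node index pairs."""
            force = np.zeros_like(pos)
            for (i, j), l0 in zip(springs, rest_len):
                d = pos[j] - pos[i]
                length = np.linalg.norm(d) + 1e-9
                f = k * (length - l0) * d / length      # Hooke's law along the spring
                force[i] += f
                force[j] -= f
            force -= c * vel                            # simple velocity damping
            vel = vel + dt * force / mass
            pos = pos + dt * vel
            return pos, vel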

  7. Application requirements for Robotic Nursing Assistants in hospital environments

    Science.gov (United States)

    Cremer, Sven; Doelling, Kris; Lundberg, Cody L.; McNair, Mike; Shin, Jeongsik; Popa, Dan

    2016-05-01

    In this paper we report on analysis toward identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily related to patient observation from a distance, and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected to not only understand nurse and patient intent but also close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware configured in the form of several multi-degree-of-freedom manipulators, and a mobile base are expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses required tasks for the ARNA robot, as well as sensors and software infrastructure to carry out those tasks in the aspects of technical resource availability, gaps, and needed experimental studies.

  8. Learning for Autonomous Navigation

    Science.gov (United States)

    Angelova, Anelia; Howard, Andrew; Matthies, Larry; Tang, Benyang; Turmon, Michael; Mjolsness, Eric

    2005-01-01

    Robotic ground vehicles for outdoor applications have achieved some remarkable successes, notably in autonomous highway following (Dickmanns, 1987), planetary exploration (1), and off-road navigation on Earth (1). Nevertheless, major challenges remain to enable reliable, high-speed, autonomous navigation in a wide variety of complex, off-road terrain. 3-D perception of terrain geometry with imaging range sensors is the mainstay of off-road driving systems. However, the stopping distance at high speed exceeds the effective lookahead distance of existing range sensors. Prospects for extending the range of 3-D sensors is strongly limited by sensor physics, eye safety of lasers, and related issues. Range sensor limitations also allow vehicles to enter large cul-de-sacs even at low speed, leading to long detours. Moreover, sensing only terrain geometry fails to reveal mechanical properties of terrain that are critical to assessing its traversability, such as potential for slippage, sinkage, and the degree of compliance of potential obstacles. Rovers in the Mars Exploration Rover (MER) mission have got stuck in sand dunes and experienced significant downhill slippage in the vicinity of large rock hazards. Earth-based off-road robots today have very limited ability to discriminate traversable vegetation from non-traversable vegetation or rough ground. It is impossible today to preprogram a system with knowledge of these properties for all types of terrain and weather conditions that might be encountered.

  9. Underground mine navigation using an integrated IMU/TOF system with unscented Kalman filter

    CSIR Research Space (South Africa)

    Hlophe, K

    2011-07-01

    Full Text Available & Factories of the Future Conference, 26-28 July 2011, Kuala Lumpur, Malaysia improve mine safety?, in 25th International Conference of CAD/CAM, Robotics & Factories of the Future, Pretoria, 2010. [2] J. J. Green and D. Vogt, Robot miner for low... Page 1 of 11 26th International Conference of CAD/CAM, Robotics & Factories of the Future Conference, 26-28 July 2011, Kuala Lumpur, Malaysia UNDERGROUND MINE NAVIGATION USING AN INTERGRATED IMU/TOF SYSTEM WITH UNSCENTED KALMAN FILTER...

  10. Performance evaluation of an improved fish robot actuated by piezoceramic actuators

    International Nuclear Information System (INIS)

    Nguyen, Q S; Heo, S; Park, H C; Byun, D

    2010-01-01

    This paper presents an improved fish robot actuated by four lightweight piezocomposite actuators. Our newly developed actuation mechanism is simple to fabricate because it works without gears. With the new actuation mechanism, the fish robot has a 30% smaller cross section than our previous model. Performance tests of the fish robot in water were carried out to measure the tail-beat angle, the thrust force, the swimming speed for various tail-beat frequencies from 1 to 5 Hz and the turning radius at the optimal frequency. The maximum swimming speed of the fish robot is 7.7 cm s −1 at a tail-beat frequency of 3.9 Hz. A turning experiment shows that the swimming direction of the fish robot can be controlled by changing the duty ratio of the driving voltage; the fish robot has a turning radius of 0.41 m for a left turn and 0.68 m for a right turn

  11. Performance evaluation of an improved fish robot actuated by piezoceramic actuators

    Science.gov (United States)

    Nguyen, Q. S.; Heo, S.; Park, H. C.; Byun, D.

    2010-03-01

    This paper presents an improved fish robot actuated by four lightweight piezocomposite actuators. Our newly developed actuation mechanism is simple to fabricate because it works without gears. With the new actuation mechanism, the fish robot has a 30% smaller cross section than our previous model. Performance tests of the fish robot in water were carried out to measure the tail-beat angle, the thrust force, the swimming speed for various tail-beat frequencies from 1 to 5 Hz and the turning radius at the optimal frequency. The maximum swimming speed of the fish robot is 7.7 cm s−1 at a tail-beat frequency of 3.9 Hz. A turning experiment shows that the swimming direction of the fish robot can be controlled by changing the duty ratio of the driving voltage; the fish robot has a turning radius of 0.41 m for a left turn and 0.68 m for a right turn.

  12. Performance of high-level and low-level control for coordination of mobile robots

    NARCIS (Netherlands)

    Adinandra, S.; Caarls, J.; Kostic, D.; Nijmeijer, H.

    2010-01-01

    We analyze performance of different strategies for coordinated control of mobile robots. By considering an environment of a distribution center, the robots should transport goods from place A to place B while maintaining the desired formation and avoiding collisions. We evaluate performance of two

  13. Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization

    OpenAIRE

    Murphy, Jim; Kapur, Ajay; Carnegie, Dale

    2012-01-01

    A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based p...

  14. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  15. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  16. Patterns of task and network actions performed by navigators to facilitate cancer care.

    Science.gov (United States)

    Clark, Jack A; Parker, Victoria A; Battaglia, Tracy A; Freund, Karen M

    2014-01-01

    Patient navigation is a widely implemented intervention to facilitate access to care and reduce disparities in cancer care, but the activities of navigators are not well characterized. The aim of this study is to describe what patient navigators actually do and explore patterns of activity that clarify the roles they perform in facilitating cancer care. We conducted field observations of nine patient navigation programs operating in diverse health settings of the national patient navigation research program, including 34 patient navigators, each observed an average of four times. Trained observers used a structured observation protocol to code as they recorded navigator actions and write qualitative field notes capturing all activities in 15-minute intervals during observations ranging from 2 to 7 hours; yielding a total of 133 observations. Rates of coded activity were analyzed using numerical cluster analysis of identified patterns, informed by qualitative analysis of field notes. Six distinct patterns of navigator activity were identified, which differed most relative to how much time navigators spent directly interacting with patients and how much time they spent dealing with medical records and documentation tasks. Navigator actions reveal a complex set of roles in which navigators both provide the direct help to patients denoted by their title and also carry out a variety of actions that function to keep the health system operating smoothly. Working to navigate patients through complex health services entails working to repair the persistent challenges of health services that can render them inhospitable to patients. The organizations that deploy navigators might learn from navigators' efforts and explore alternative approaches, structures, or systems of care in addressing both the barriers patients face and the complex solutions navigators create in helping patients.

  17. A Reactive Robot Architecture With Planning on Demand

    National Research Council Canada - National Science Library

    Ranganathan, Ananth; Koenig, Sven

    2003-01-01

    In this paper, we describe a reactive robot architecture that uses fast re-planning methods to avoid the shortcomings of reactive navigation, such as getting stuck in box canyons or in front of small openings...

  18. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    International Nuclear Information System (INIS)

    Ali, Mohammed A H; Yussof, Wan Azhar B.; Hamedon, Zamzuri B; Yussof, Zulkifli B.; Majeed, Anwar P P; Mailah, Musa

    2016-01-01

    A road-feature-extraction-based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. Online mapping is performed continuously in the road environment to find the road properties that enable the robot to move from a given start position to a pre-determined goal while discovering and detecting roundabouts. The sensor fusion, involving a laser range finder, camera, and odometry installed on a new platform, is used to find the path of the robot and localize it within its environment. Local maps are developed using the camera and laser range finder to recognize road border parameters such as road width, curbs, and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify the road environment and build local maps for road following and roundabout negotiation. (paper)

  19. Navigating beyond ‘here & now’ affordances - on sensorimotor maturation and ‘false belief’ performance

    Directory of Open Access Journals (Sweden)

    Maria eBrincker

    2014-12-01

    Full Text Available How and when do we learn to understand other people’s perspectives and possibly divergent beliefs? This question has elicited much theoretical and empirical research. A puzzling finding has been that toddlers perform well on so-called implicit false belief (FB) tasks but do not show such capacities on traditional explicit FB tasks. I propose a navigational approach, which offers a hitherto ignored way of making sense of the seemingly contradictory results. The proposal involves a distinction between how we navigate FBs as they relate to (1) our current affordances (‘here & now’ navigation), as opposed to (2) presently non-actual relations, where we need to leave our concrete embodied/situated viewpoint (counterfactual navigation). It is proposed that whereas toddlers seem able to understand FBs in their current affordance space, they do not yet possess the resources to navigate in abstraction from such concrete affordances, which explicit FB tests seem to require. It is hypothesized that counterfactual navigation depends on the development of ‘sensorimotor priors’, i.e. statistical expectations of one's own kinesthetic re-afference, which evidence now suggests matures around age four, consistent with core findings of explicit FB performance.

  20. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
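
    The SSL setup can be summarized as follows: the stereo average depth serves as a trusted label for a regressor that uses only monocular image features. The sketch below is an assumption-laden illustration; monocular_features is a hypothetical stand-in for whatever features the on-board learner uses.

        # Stereo average depth as a self-supervised label for a monocular
        # regressor; monocular_features is a hypothetical feature extractor.
        import numpy as np
        from sklearn.linear_model import Ridge

        def monocular_features(image):
            # Stand-in feature: normalized gray-level histogram of the image.
            return np.histogram(image, bins=32, range=(0, 255))[0] / image.size

        def train(images, stereo_avg_depths):
            X = np.stack([monocular_features(im) for im in images])
            return Ridge(alpha=1.0).fit(X, stereo_avg_depths)   # labels come from stereo

        def estimate_depth(model, image):
            """Monocular average-depth estimate, usable if one camera fails."""
            return float(model.predict(monocular_features(image)[None, :]))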

  1. YARP-ROS Inter-Operation in a 2D Navigation Task

    Directory of Open Access Journals (Sweden)

    Marco Randazzo

    2018-02-01

    Full Text Available This paper presents some recent developments in YARP middleware, aimed to improve its integration with ROS. They include a new mechanism to read/write ROS transform frames and a new set of standard interfaces to intercommunicate with the ROS navigation stack. A novel set of YARP companion modules, which provide basic navigation functionalities for robots unable to run ROS, is also presented. These modules are optional, independent from each other, and they provide compatible functionalities to well-known packages available inside the ROS framework. This paper also discusses how developers can customize their own hybrid YARP-ROS environment in the way it best suits their needs (e.g., the system can be configured to have a YARP application sending navigation commands to a ROS path planner, or vice versa). A number of available possibilities are presented through a set of chosen test cases applied to both real and simulated robots. Finally, example applications discussed in this paper are also made available to the community by providing snippets of code and links to source files hosted on the GitHub repository https://github.com/robotology.

  2. Medical technology integration: CT, angiography, imaging-capable OR-table, navigation and robotics in a multifunctional sterile suite.

    Science.gov (United States)

    Jacob, A L; Regazzoni, P; Bilecen, D; Rasmus, M; Huegli, R W; Messmer, P

    2007-01-01

    Technology integration is an enabling technological prerequisite to achieve a major breakthrough in sophisticated intra-operative imaging, navigation and robotics in minimally invasive and/or emergency diagnosis and therapy. Without a high degree of integration and reliability comparable to that achieved in the aircraft industry image guidance in its different facets will not ultimately succeed. As of today technology integration in the field of image-guidance is close to nonexistent. Technology integration requires inter-departmental integration of human and financial resources and of medical processes in a dialectic way. This expanded techno-socio-economic integration has profound consequences for the administration and working conditions in hospitals. At the university hospital of Basel, Switzerland, a multimodality multifunction sterile suite was put into operation after a substantial pre-run. We report the lessons learned during our venture into the world of medical technology integration and describe new possibilities for similar integration projects in the future.

  3. Development of MR compatible laparoscope robot using master-slave control method

    International Nuclear Information System (INIS)

    Toyoda, Kazutaka; Jaeheon, Chung; Murata, Masaharu; Odaira, Takeshi; Hashizume, Makoto; Ieiri, Satoshi

    2011-01-01

    Recently, MRI-guided robotic surgery has been studied. This surgery uses MRI, a surgical navigation system, and a surgical robot system intraoperatively to realize safer and more assured surgeries. We have developed an MR-compatible laparoscope robot and a 4-DOF master manipulator (master) independently. In this research, we report the system integration of the master and the laparoscope robot. The degrees of freedom of the master and the laparoscope robot are the same (4 DOF), so the orientation relation between master and laparoscope robot is one to one. The network communication between the master and the laparoscope robot uses UDP (from the TCP/IP protocol suite) to reduce communication delay. In future work, we will conduct operability experiments on the master-slave laparoscope robot system. (author)
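
    A minimal example of such a UDP command link is sketched below; the address and message format are illustrative assumptions, not details taken from the record.

        # Minimal UDP command link; address and message format are assumptions.
        import socket
        import struct

        ROBOT_ADDR = ("192.168.0.10", 5005)   # assumed address of the slave robot

        def send_orientation(sock, roll, pitch, yaw, insertion):
            """Pack the 4-DOF master pose and send it without waiting for an ACK."""
            payload = struct.pack("!4f", roll, pitch, yaw, insertion)
            sock.sendto(payload, ROBOT_ADDR)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_orientation(sock, 0.1, -0.05, 0.0, 30.0)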

  4. ROS (Robot Operating System) for Automotive

    OpenAIRE

    Bubeck, Alexander

    2014-01-01

    - Introduction to the Robot Operating System - Open source in the automotive industry - Application of ROS in the automotive industry - ROS navigation - ROS with real-time control - ROS in the embedded world - Outlook: ROS 2.0 - Summary

  5. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work.

  6. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Thesis by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  7. Fuzzy Logic Based Behavior Fusion for Navigation of an Intelligent Mobile Robot

    Institute of Scientific and Technical Information of China (English)

    Li, Wei; Chen, Zushun; et al.

    1996-01-01

    This paper presents a new method for behavior fusion control of a mobile robot in uncertain environments. Using behavior fusion by fuzzy logic, a mobile robot is able to directly execute its motion according to range information about the environment, acquired by ultrasonic sensors, without the need for trajectory planning. Based on low-level behavior control, an efficient strategy for integrating high-level global planning for robot motion can be formulated, since, in most applications, some information on the environment is known a priori. A global planner therefore only needs to generate some subgoal positions rather than exact geometric paths. Because such subgoals can be easily removed from or added into the planner, this strategy reduces computational time for global planning and is flexible for replanning in dynamic environments. Simulation results demonstrate that the proposed strategy can be applied to robot motion in complex and dynamic environments.
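
    The behavior-fusion idea can be sketched as a fuzzy weighting of two velocity commands by the nearest ultrasonic range reading; the membership function and thresholds below are assumptions, not the paper's rule base.

        # Fuzzy weighting of a goal-seeking command and an obstacle-avoidance
        # command by the nearest ultrasonic range reading (assumed thresholds).
        def near(d, d_min=0.2, d_max=1.0):
            """Membership in 'obstacle near': 1 when very close, 0 when far."""
            return max(0.0, min(1.0, (d_max - d) / (d_max - d_min)))

        def fuse(cmd_goal, cmd_avoid, nearest_range):
            """cmd_* are (linear velocity, angular velocity) pairs."""
            w_avoid = near(nearest_range)
            w_goal = 1.0 - w_avoid
            v = w_goal * cmd_goal[0] + w_avoid * cmd_avoid[0]
            w = w_goal * cmd_goal[1] + w_avoid * cmd_avoid[1]
            return v, w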

  8. Hydrodynamic performance of a biomimetic robotic swimmer actuated by ionic polymer–metal composite

    International Nuclear Information System (INIS)

    Shen, Qi; Wang, Tianmiao; Liang, Jianhong; Wen, Li

    2013-01-01

    In this paper, we study the thrust performance of a biomimetic robotic swimmer that uses ionic polymer–metal composite (IPMC) as a flexible actuator in viscous and inertial flow, for a comprehensive understanding of IPMC swimmers at different scales. A hydrodynamic model based on the elongated body theory was developed. Based on image analysis, the parameters of the model were identified and simulation results were obtained. To obtain the hydrodynamic thrust performance of the robotic swimmer, we implemented a novel experimental apparatus. Systematic tests were conducted in the servo towing system to measure the self-propelled speed and thrust efficiency under different actuation of IPMC. The undulatory motions of the IPMC swimmer were identified. Experimental results demonstrated that the theoretical model can accurately predict the speed and thrust efficiency of the robotic swimmer. When the Reynolds number of the robotic swimmer was reduced to approximately 0.1%, its speed and thrust efficiency were reduced by 95.22% and 87.33% respectively. It was concluded that the robotic swimmer has a low speed and thrust efficiency when it swims in a viscous flow. Generally, the thrust performance of the robotic swimmer is determined by the kinematics and Reynolds number. In addition, the optimal actuation frequency for the thrust efficiency is greater in a viscous fluid. These results may contribute to a better understanding of the swimming performance of IPMC actuated swimmers in a distinct flow regime (viscous and inertial regime). (paper)

  9. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    Science.gov (United States)

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments on a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
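
    For a discrete state space, the linear Bellman equation mentioned above can be written in terms of the desirability z = exp(-V) and solved by power iteration. The sketch below is a generic LMDP illustration, not the robot controller used in the study.

        # Generic discrete LMDP: the desirability z = exp(-V) satisfies
        # z = diag(exp(-q)) @ P @ z and is found here by power iteration; the
        # optimal transitions are the passive dynamics reweighted by z.
        import numpy as np

        def solve_lmdp(P, q, iters=1000):
            """P: (N, N) passive transition matrix; q: (N,) state costs."""
            G = np.exp(-np.asarray(q))
            z = np.ones(len(q))
            for _ in range(iters):
                z = G * (P @ z)
                z /= z.max()                        # normalize to avoid underflow
            V = -np.log(z)                          # value function up to a constant
            policy = P * z[None, :]                 # reweight passive dynamics by z(s')
            policy /= policy.sum(axis=1, keepdims=True)
            return V, policy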

  10. Absolute Navigation Performance of the Orion Exploration Flight Test 1

    Science.gov (United States)

    Zanetti, Renato; Holt, Greg; Gay, Robert; D'Souza, Christopher; Sud, Jastesh

    2016-01-01

    Launched in December 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion vehicle's Exploration Flight Test-1 (EFT-1) successfully completed the objective to stress the system by placing the un-crewed vehicle on a high-energy parabolic trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for EFT-1 are presented with an emphasis on how redundancy and robustness influenced the architecture. Two Inertial Measurement Units (IMUs), one GPS receiver and three barometric altimeters (BALTs) comprise the navigation sensor suite. The sensor data is multiplexed using conventional integration techniques and the state estimate is refined by the GPS pseudorange and deltarange measurements in an Extended Kalman Filter (EKF) that employs UDU factorization. The performance of the navigation system during flight is presented to substantiate the design.

  11. Stereo-Based Visual Odometry for Autonomous Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ioannis Kostavelis

    2016-02-01

    Full Text Available Mobile robots should possess accurate self-localization capabilities in order to be successfully deployed in their environment. A solution to this challenge may be derived from visual odometry (VO, which is responsible for estimating the robot's pose by analysing a sequence of images. The present paper proposes an accurate, computationally-efficient VO algorithm relying solely on stereo vision images as inputs. The contribution of this work is twofold. Firstly, it suggests a non-iterative outlier detection technique capable of efficiently discarding the outliers of matched features. Secondly, it introduces a hierarchical motion estimation approach that produces refinements to the global position and orientation for each successive step. Moreover, for each subordinate module of the proposed VO algorithm, custom non-iterative solutions have been adopted. The accuracy of the proposed system has been evaluated and compared with competent VO methods along DGPS-assessed benchmark routes. Experimental results of relevance to rough terrain routes, including both simulated and real outdoors data, exhibit remarkable accuracy, with positioning errors lower than 2%.
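
    A typical non-iterative building block of such a stereo VO pipeline is the recovery of rotation and translation from matched 3D points by the SVD (Kabsch) method; the sketch below illustrates this generic step, not the authors' exact algorithm.

        # Non-iterative rigid-motion step: recover R, t between two frames from
        # matched 3D points triangulated by stereo, using the SVD (Kabsch) method.
        import numpy as np

        def rigid_motion(P_prev, P_curr):
            """P_prev, P_curr: (N, 3) arrays of matched 3D points."""
            cp, cc = P_prev.mean(axis=0), P_curr.mean(axis=0)
            H = (P_prev - cp).T @ (P_curr - cc)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cc - R @ cp                          # P_curr ~ R @ P_prev + t
            return R, t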

  12. Swimming Performance of Toy Robotic Fish

    Science.gov (United States)

    Petelina, Nina; Mendelson, Leah; Techet, Alexandra

    2015-11-01

    HEXBUG AquaBotsTM are a commercially available small robot fish that come in a variety of ``species''. These models have varying caudal fin shapes and randomly-varied modes of swimming including forward locomotion, diving, and turning. In this study, we assess the repeatability and performance of the HEXBUG swimming behaviors and discuss the use of these toys to develop experimental techniques and analysis methods to study live fish swimming. In order to determine whether these simple, affordable model fish can be a valid representation for live fish movement, two models, an angelfish and a shark, were studied using 2D Particle Image Velocimetry (PIV) and 3D Synthetic Aperture PIV. In a series of experiments, the robotic fish were either allowed to swim freely or towed in one direction at a constant speed. The resultant measurements of the caudal fin wake are compared to data from previous studies of a real fish and simplified flapping propulsors.

  13. Assessment of Spatial Navigation and Docking Performance During Simulated Rover Tasks

    Science.gov (United States)

    Wood, S. J.; Dean, S. L.; De Dios, Y. E.; Moore, S. T.

    2010-01-01

    INTRODUCTION: Following long-duration exploration transits, pressurized rovers will enhance surface mobility to explore multiple sites across Mars and other planetary bodies. Multiple rovers with docking capabilities are envisioned to expand the range of exploration. However, adaptive changes in sensorimotor and cognitive function may impair the crew s ability to safely navigate and perform docking tasks shortly after transition to the new gravitoinertial environment. The primary goal of this investigation is to quantify post-flight decrements in spatial navigation and docking performance during a rover simulation. METHODS: Eight crewmembers returning from the International Space Station will be tested on a motion simulator during four pre-flight and three post-flight sessions over the first 8 days following landing. The rover simulation consists of a serial presentation of discrete tasks to be completed within a scheduled 10 min block. The tasks are based on navigating around a Martian outpost spread over a 970 sq m terrain. Each task is subdivided into three components to be performed as quickly and accurately as possible: (1) Perspective taking: Subjects use a joystick to indicate direction of target after presentation of a map detailing current orientation and location of the rover with the task to be performed. (2) Navigation: Subjects drive the rover to the desired location while avoiding obstacles. (3) Docking: Fine positioning of the rover is required to dock with another object or align a camera view. Overall operator proficiency will be based on how many tasks the crewmember can complete during the 10 min time block. EXPECTED RESULTS: Functionally relevant testing early post-flight will develop evidence regarding the limitations to early surface operations and what countermeasures are needed. This approach can be easily adapted to a wide variety of simulated vehicle designs to provide sensorimotor assessments for other operational and civilian populations.

  14. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for their autonomous operation in complex dynamic settings and the unpredictable scenarios expected in a variety of realistic nanoscale applications.
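
    The collision-free route planning component can be illustrated with a minimal grid A* search; this is a generic sketch, since the record does not specify the planner used.

        # Minimal grid A* search for a collision-free route (generic sketch).
        import heapq

        def astar(grid, start, goal):
            """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
            open_set = [(h(start), 0, start)]
            parent, g_cost, closed = {start: None}, {start: 0}, set()
            while open_set:
                _, g, node = heapq.heappop(open_set)
                if node in closed:
                    continue
                closed.add(node)
                if node == goal:                     # rebuild the route back to start
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (node[0] + dr, node[1] + dc)
                    if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                            and grid[nxt[0]][nxt[1]] == 0
                            and g + 1 < g_cost.get(nxt, float("inf"))):
                        g_cost[nxt] = g + 1
                        parent[nxt] = node
                        heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
            return None                              # no collision-free route found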

  15. Quantifying and Maximizing Performance of a Human-Centric Robot under Precision, Safety, and Robot Specification Constraints

    Data.gov (United States)

    National Aeronautics and Space Administration — The research project is an effort towards achieving 99.99% safety of mobile robots working alongside humans while matching the precision performance of industrial...

  16. Planetary rovers robotic exploration of the solar system

    CERN Document Server

    Ellery, Alex

    2016-01-01

    The increasing adoption of terrain mobility – planetary rovers – for the investigation of planetary surfaces emphasises their central importance in space exploration. This imposes a completely new set of technologies and methodologies to the design of such spacecraft – and planetary rovers are indeed, first and foremost, spacecraft. This introduces vehicle engineering, mechatronics, robotics, artificial intelligence and associated technologies to the spacecraft engineer’s repertoire of skills. Planetary Rovers is the only book that comprehensively covers these aspects of planetary rover engineering and more. The book: • discusses relevant planetary environments to rover missions, stressing the Moon and Mars; • includes a brief survey of previous rover missions; • covers rover mobility, traction and control systems; • stresses the importance of robotic vision in rovers for both navigation and science; • comprehensively covers autonomous navigation, path planning and multi-rover formations on ...

  17. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    Science.gov (United States)

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.
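
    One plausible way to turn the fitted Gaussian into a pass/fail criterion (an assumption about the general approach, not the authors' exact scheme) is to compare the mean and spread of a waxed patch's gray levels against reference values learned from expert-finished surfaces:

        # Compare the Gaussian parameters (mean, spread) of a waxed patch's gray
        # levels with reference values from expert-finished surfaces; tolerances
        # here are placeholders, not the paper's calibrated values.
        import numpy as np

        def gaussian_params(gray_patch):
            values = np.asarray(gray_patch, dtype=float).ravel()
            return values.mean(), values.std()

        def waxing_ok(gray_patch, ref_mean, ref_std, tol_mean=5.0, tol_std=3.0):
            mu, sigma = gaussian_params(gray_patch)
            return abs(mu - ref_mean) <= tol_mean and abs(sigma - ref_std) <= tol_std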

  18. Development of performance measures based on visibility for effective placement of aids to navigation

    Science.gov (United States)

    Fang, Tae Hyun; Kim, Yeon-Gyu; Gong, In-Young; Park, Sekil; Kim, Ah-Young

    2015-09-01

    In order to develop the challenging process of placing Aids to Navigation (AtoN), we propose performance measures which quantify the effect of such placement. The best placement of AtoNs is that from which the navigator can best recognize the information provided by an AtoN. The visibility of AtoNs depends mostly on light sources, the weather condition and the position of the navigator. Visual recognition is enabled by achieving adequate contrast between the AtoN light source and the background light. Therefore, the performance measures can be formulated through the amount of difference between these two lights. For simplification, this approach is based on the values of the human factor suggested by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). Performance measures for AtoN placement can be evaluated through the AtoN Simulator, which has been developed by KIOST/KRISO in Korea and launched under the Korea National Research Program. Simulations for evaluation are carried out at a waterway in Busan port in Korea.
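
    The contrast-based measure can be sketched as follows; the required-contrast threshold is an assumed stand-in for the IALA human-factor value, not a figure taken from the paper.

        # Score a placement by the fraction of assessed viewpoints from which the
        # AtoN light is recognizable; the contrast threshold is an assumed stand-in
        # for the IALA-recommended human-factor value.
        def contrast(l_light, l_background):
            """Weber contrast of the AtoN light against the background luminance."""
            return (l_light - l_background) / l_background

        def placement_score(samples, required_contrast=0.05):
            """samples: list of (l_light, l_background) pairs along assessed routes."""
            visible = sum(1 for l, b in samples if contrast(l, b) >= required_contrast)
            return visible / len(samples)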

  19. Development of performance measures based on visibility for effective placement of aids to navigation

    Directory of Open Access Journals (Sweden)

    Tae Hyun Fang

    2015-05-01

    Full Text Available In order to develop the challenging process of placing Aids to Navigation (AtoN), we propose performance measures which quantify the effect of such placement. The best placement of AtoNs is that from which the navigator can best recognize the information provided by an AtoN. The visibility of AtoNs depends mostly on light sources, the weather condition and the position of the navigator. Visual recognition is enabled by achieving adequate contrast between the AtoN light source and the background light. Therefore, the performance measures can be formulated through the amount of difference between these two lights. For simplification, this approach is based on the values of the human factor suggested by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). Performance measures for AtoN placement can be evaluated through the AtoN Simulator, which has been developed by KIOST/KRISO in Korea and launched under the Korea National Research Program. Simulations for evaluation are carried out at a waterway in Busan port in Korea.

  20. Induced vibrations increase performance of a winged self-righting robot

    Science.gov (United States)

    Othayoth, Ratan; Xuan, Qihan; Li, Chen

    When upside down, cockroaches can open their wings to dynamically self-right. In this process, an animal often has to perform multiple unsuccessful maneuvers to eventually right itself, and often flails its legs. Here, we developed a cockroach-inspired winged self-righting robot capable of controlled body vibrations to test the hypothesis that vibrations assist self-righting transitions. Robot body vibrations were induced by an oscillating mass (10% of body mass) and varied by changing oscillation frequency. We discovered that, as the robot's body vibrations increased, righting probability increased and righting time decreased. This supports the hypothesis that vibrations assist locomotor transitions, but highlights the need for further stochastic modeling to capture the uncertain nature of when righting maneuvers result in successful righting.

  1. An autonomous mobile robot to perform waste drum inspections

    International Nuclear Information System (INIS)

    Peterson, K.D.; Ward, C.R.

    1994-01-01

    A mobile robot is being developed by the Savannah River Technology Center (SRTC) Robotics Group of Westinghouse Savannah River Company (WSRC) to perform mandated inspections of waste drums stored in warehouse facilities. The system will reduce personnel exposure and create accurate, high-quality documentation to ensure regulatory compliance. Development work is being coordinated among several DOE, academic and commercial entities in accordance with DOE's technology transfer initiative. The prototype system was demonstrated in November of 1993. A system is now being developed for field trials at the Fernald site.

  2. Multidisciplinary approach for developing a new robotic system for domiciliary assistance to elderly people.

    Science.gov (United States)

    Cavallo, F; Aquilano, M; Bonaccorsi, M; Mannari, I; Carrozza, M C; Dario, P

    2011-01-01

    This paper aims to show the effectiveness of an (inter/multi)disciplinary team, comprising technology developers, elderly care organizations, and designers, in developing the ASTRO robotic system for domiciliary assistance to elderly people. The main issues presented in this work concern the improvement of the robot's behavior by means of a smart sensor network able to share information with the robot for localization and navigation, and the design of the robot's appearance and functionalities by means of a substantial analysis of users' requirements and attitudes toward robotic technology, in order to improve acceptability and usability.

  3. A Novel Path Planning for Robots Based on Rapidly-Exploring Random Tree and Particle Swarm Optimizer Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Feng

    2013-09-01

    Full Text Available A path-planning method for robots based on the Rapidly-exploring Random Tree (RRT) and Particle Swarm Optimizer (PSO) algorithms is proposed. First, the grid method is used to describe the working space of the mobile robot; then the Rapidly-exploring Random Tree algorithm is used to obtain a global navigation path, and the Particle Swarm Optimizer algorithm is adopted to obtain a better path. Computer experiment results demonstrate that this novel algorithm can plan an optimal path rapidly in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
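
    The combination described above can be sketched compactly: a basic RRT grows a tree to the goal, and a plain PSO then nudges the interior waypoints to shorten the path. Everything below (the circular obstacles, step sizes, PSO coefficients, collision penalty) is an illustrative assumption rather than the authors' grid-based implementation.

```python
import numpy as np
rng = np.random.default_rng(0)

OBSTACLES = [((5.0, 5.0), 2.0), ((8.0, 3.0), 1.5)]   # illustrative circles (centre, radius)
START, GOAL, BOUNDS = np.array([1.0, 1.0]), np.array([9.0, 9.0]), (0.0, 10.0)

def collides(p):
    return any(np.linalg.norm(p - np.array(c)) < r for c, r in OBSTACLES)

def segment_clear(a, b, steps=20):
    return all(not collides(a + t * (b - a)) for t in np.linspace(0.0, 1.0, steps))

def rrt(max_iters=3000, step=0.5):
    """Grow a random tree from START until some node can see GOAL, then return the path."""
    nodes, parents = [START], {0: None}
    for _ in range(max_iters):
        sample = rng.uniform(*BOUNDS, size=2)
        near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        new = nodes[near] + step * (sample - nodes[near]) / (np.linalg.norm(sample - nodes[near]) + 1e-9)
        if segment_clear(nodes[near], new):
            nodes.append(new); parents[len(nodes) - 1] = near
            if segment_clear(new, GOAL):
                path, i = [GOAL], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i]); i = parents[i]
                return path[::-1]
    raise RuntimeError("RRT failed to reach the goal")

def pso_refine(path, particles=30, iters=100):
    """Let a plain PSO nudge the interior waypoints to shorten the RRT path."""
    inner = np.array(path[1:-1]); dim = inner.size
    def cost(flat):
        pts = [START] + list(flat.reshape(-1, 2)) + [GOAL]
        length = sum(np.linalg.norm(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
        penalty = sum(0.0 if segment_clear(pts[i], pts[i + 1]) else 50.0 for i in range(len(pts) - 1))
        return length + penalty
    x = inner.flatten() + rng.normal(0.0, 0.2, size=(particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(xi) for xi in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        costs = np.array([cost(xi) for xi in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return [START] + list(gbest.reshape(-1, 2)) + [GOAL]

rough = rrt()
smooth = pso_refine(rough)
print(len(rough), "RRT waypoints ->", len(smooth), "refined waypoints")
```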

  4. Mobile robot for hazardous environments

    International Nuclear Information System (INIS)

    Bains, N.

    1995-01-01

    This paper describes the architecture and potential applications of the autonomous robot for a known environment (ARK). The ARK project has developed an autonomous mobile robot that can move around by itself in a complicated nuclear environment utilizing a number of sensors for navigation. The primary sensor system is computer vision. The ARK has the intelligence to determine its position utilizing "natural landmarks," such as ordinary building features, at any point along its path. It is this feature that gives ARK its uniqueness to operate in an industrial type of environment. The prime motivation to develop ARK was the potential application of mobile robots in radioactive areas within nuclear generating stations and for nuclear waste sites. The project budget is $9 million over 4 yr and will be completed in October 1995.

  5. Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning

    Science.gov (United States)

    Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu

    Robot path-planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associative path-fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up exploration. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path-fragments). The resultant number of path-fragments is also smaller than that of other methods. Evaluation is done via Webots physics-based 3D-simulated and real robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is less than that necessary for most Reinforcement Learning (RL) based methods. The running time is proved finite and scales well with the environment. The resultant number of path-fragments matches the environment well.

  6. The future of robotics in hand surgery.

    Science.gov (United States)

    Liverneaux, P; Nectoux, E; Taleb, C

    2009-10-01

    Robotics has spread over many surgical fields over the last decade: orthopaedic, cardiovascular, urologic, gynaecologic surgery and various other types of surgery. There are five different types of robots: passive, semiactive and active robots, telemanipulators and simulators. Hand surgery is at a crossroad between orthopaedic surgery, plastic surgery and microsurgery; it has to deal with fixing all sorts of tissues from bone to soft tissues. To our knowledge, no paper has focused on potential clinical applications in this realm, even though robotics could be helpful for hand surgery. One must point out the numerous works on bone tissue with regard to passive robots (such as fluoroscopic navigation as an ancillary for percutaneous screwing in the scaphoid bone). Telemanipulators, especially in microsurgery, can improve surgical motion by suppressing physiological tremor thanks to movement demultiplication (experimental vascular and nervous sutures previously published). To date, robotic technology has not yet become simple to use, cheap and flawless, but in the future it will probably be of great technical help, and may even allow remote-controlled surgery overseas.

  7. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    Science.gov (United States)

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. PMID:23271604

  8. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    Science.gov (United States)

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal.

  9. Performance testing of a system for remote ultrasonic examination of the Hanford double-shell waste storage tanks

    International Nuclear Information System (INIS)

    Pfluger, D.C.; Somers, T.; Berger, A.D.

    1995-02-01

    A mobile robotic inspection system is being developed for remote ultrasonic examination of the double-wall waste storage tanks at Hanford. Performance testing of the system includes demonstrating robot mobility within the tank annulus, evaluating the accuracy of the vision-based navigation process, and verifying ultrasonic and video system performance. This paper briefly describes the system and presents a summary of the plan for performance testing of the ultrasonic testing system. Performance test results will be presented at the conference.

  10. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS receivers and Micro-Electro-Mechanical Systems (MEMS-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
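
    The fusion step described above can be illustrated with a much-simplified filter: a constant-velocity Kalman filter over the chief vehicle's horizontal position that takes both an onboard GPS fix and a DGPS/vision-derived relative position (added to a deputy's known position) as measurements. The matrices, noise levels, and numbers below are assumptions for illustration; the paper's actual filter is an Extended Kalman Filter that also estimates attitude.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])  # state: [x y vx vy]
Q = 0.01 * np.eye(4)                                   # assumed process noise
H = np.hstack([np.eye(2), np.zeros((2, 2))])           # both sensors observe position only

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, z=np.array([1.0, 0.5]), R=4.0 * np.eye(2))          # standalone GPS fix
deputy_pos, relative = np.array([10.0, 0.0]), np.array([-9.1, 0.6])      # DGPS/vision baseline
x, P = update(x, P, z=deputy_pos + relative, R=0.25 * np.eye(2))         # tighter "virtual sensor"
print(np.round(x, 2))
```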

  11. Robotic Needle Guide for Prostate Brachytherapy: Clinical Testing of Feasibility and Performance

    Science.gov (United States)

    Song, Danny Y; Burdette, Everette C; Fiene, Jonathan; Armour, Elwood; Kronreif, Gernot; Deguet, Anton; Zhang, Zhe; Iordachita, Iulian; Fichtinger, Gabor; Kazanzides, Peter

    2010-01-01

    Purpose: Optimization of prostate brachytherapy is constrained by tissue deflection of needles and fixed spacing of template holes. We developed and clinically tested a robotic guide towards the goal of allowing greater freedom of needle placement. Methods and Materials: The robot consists of a small tubular needle guide attached to a robotically controlled arm. The apparatus is mounted and calibrated to operate in the same coordinate frame as a standard template. Translation in the x and y directions over the perineum of ±40 mm is possible. Needle insertion is performed manually. Results: Five patients were treated in an IRB-approved study. Confirmatory measurements of robotic movements for the initial 3 patients using infrared tracking showed a mean error of 0.489 mm (SD 0.328 mm). Fine adjustments in needle positioning were possible when tissue deflection was encountered; adjustments were performed in 54/179 (30.2%) needles placed, with 36/179 (20.1%) adjustments of >2 mm. Twenty-seven insertions were intentionally altered to positions between the standard template grid holes to improve the dosimetric plan or avoid structures such as pubic bone and blood vessels. Conclusions: Robotic needle positioning provided a means of compensating for needle deflections as well as the ability to intentionally place needles into areas between the standard template holes. To our knowledge, these results represent the first clinical testing of such a system. Future work will include direct control of the robot by the physician, software algorithms to help avoid robot collisions with the ultrasound probe, and testing of the angulation capability in the clinical setting. PMID:20729152

  12. Development of a multimode navigation system for an assistive robotics project

    OpenAIRE

    Cherubini , Andrea; Oriolo , G; Macri , F; Aloise , F; Babiloni , F; Cincotti , F; Mattia , D

    2007-01-01

    Assistive technology is an emerging area where robotic devices can be used to strengthen the residual abilities of individuals with motor disabilities or to help them achieve independence in the activities of daily living. This paper deals with a project aimed at designing a system that provides remote control of home-installed appliances, including the Sony AIBO, a commercial mobile robot. The development of the project is described by focusing on the design of the ro...

  13. Automation and robotics technology for intelligent mining systems

    Science.gov (United States)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  14. Tracked robot controllers for climbing obstacles autonomously

    Science.gov (United States)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has seen very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box- and stair-climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.

  15. Kinematics effectively delineate accomplished users of endovascular robotics with a physical training model.

    Science.gov (United States)

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2015-02-01

    Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and analyze catheter tip motion. Motion analysis was based on derivations of speed and position, including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. This work will help identify the learning curve and specific movements that
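
    One of the motion metrics named above, the spectral arc length, can be computed from a catheter-tip speed profile as sketched below. The cutoff frequency, zero padding, and the two synthetic speed profiles are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def spectral_arc_length(speed, fs, cutoff=10.0, pad=4):
    """Smoothness of a speed profile: arc length of its normalized magnitude
    spectrum up to `cutoff` Hz (more negative = less smooth). Simplified sketch."""
    n = int(2 ** np.ceil(np.log2(len(speed)) + pad))       # zero-padded FFT length
    freqs = np.arange(n) * fs / n
    mag = np.abs(np.fft.fft(speed, n))
    sel = freqs <= cutoff
    mag = mag[sel] / mag[sel].max()                          # normalized magnitude spectrum
    f = freqs[sel] / cutoff                                  # normalized frequency axis
    return -np.sum(np.sqrt(np.diff(f) ** 2 + np.diff(mag) ** 2))

fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
smooth_speed = np.sin(np.pi * t / 2.0) ** 2                      # one clean submovement
jerky_speed = smooth_speed + 0.05 * np.sin(2 * np.pi * 8 * t)    # tremor-like oscillation
print(spectral_arc_length(smooth_speed, fs), spectral_arc_length(jerky_speed, fs))
```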

  16. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims.

    Science.gov (United States)

    Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie

    2014-12-01

    Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.

  17. Development of a Three-dimensional Surgical Navigation System with Magnetic Resonance Angiography and a Three-dimensional Printer for Robot-assisted Radical Prostatectomy.

    Science.gov (United States)

    Jomoto, Wataru; Tanooka, Masao; Doi, Hiroshi; Kikuchi, Keisuke; Mitsuie, Chiemi; Yamada, Yusuke; Suzuki, Toru; Yamano, Toshiko; Ishikura, Reiichi; Kotoura, Noriko; Yamamoto, Shingo

    2018-01-02

    We sought to develop a surgical navigation system using magnetic resonance angiography (MRA) and a three-dimensional (3D) printer for robot-assisted radical prostatectomy (RARP). Six patients with pathologically proven localized prostate cancer were prospectively enrolled in this study. Prostate magnetic resonance imaging (MRI), consisting of T2-weighted sampling perfection with application-optimized contrasts using different flip-angle evolutions (SPACE) and true fast imaging with steady-state precession (true FISP), reconstructed by volume rendering, was followed by dynamic contrast-enhanced MRA performed with a volumetric interpolated breath-hold examination (VIBE) during intravenous bolus injection of gadobutrol. Images of arterial and venous phases were acquired over approximately 210 seconds. Selected images were sent to a workstation for generation of 3D volume-rendered images and standard triangulated language (STL) files for 3D print construction. The neurovascular bundles (NVBs) were found in sequence on non-contrast images. Accessory pudendal arteries (APAs) were found in all cases in the arterial phase of contrast enhancement but were ill-defined on non-contrast enhanced MRA. Dynamic contrast-enhanced MRA helped to detect APAs, suggesting that this 3D system using MRI will be useful in RARP.

  18. The effect of egocentric body movements on users' navigation performance and spatial memory in zoomable user interfaces

    OpenAIRE

    Rädle, Roman; Jetter, Hans-Christian; Butscher, Simon; Reiterer, Harald

    2013-01-01

    We present two experiments examining the impact of navigation techniques on users’ navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users’ spatial memory immediately after a navigation tas...

  19. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    Science.gov (United States)

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434

  20. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  1. Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction.

    Directory of Open Access Journals (Sweden)

    Tiffany L Chen

    Full Text Available Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions.

  2. Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction

    Science.gov (United States)

    Chen, Tiffany L.; Bhattacharjee, Tapomayukh; McKay, J. Lucas; Borinski, Jacquelyn E.; Hackney, Madeleine E.; Ting, Lena H.; Kemp, Charles C.

    2015-01-01

    Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions. PMID:25993099

  3. Improved Line Tracking System for Autonomous Navigation of High-Speed Vehicle

    Directory of Open Access Journals (Sweden)

    Yahya Zare Khafri

    2012-07-01

    Full Text Available Line tracking is one of the most widely used techniques in robot navigation. In this paper, a customized line tracking system is proposed for autonomous navigation of high-speed vehicles. In the presented system, auxiliary information, in addition to the road path, is added to the tracking lines, such as the locations of turns and intersections in real roads. Moreover, the geometric placement of the line sensors is redesigned to enable high-rate sensing with higher reliability. Finally, a light-weight navigation algorithm is proposed to allow high-speed movement using reasonable processing power. This system is implemented on a MIPS-based embedded processor, and experimental results with this embedded system show that more than 98% accuracy at 200 km/h is achievable with a 1 GHz processor.
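
    A toy version of the sensing-and-steering loop such a line tracking system implies: a row of reflectance sensors estimates the lateral offset of the line and a PD law steers back toward it. The sensor spacing, gains, and readings are illustrative assumptions, not the paper's redesigned sensor geometry or algorithm.

```python
SENSOR_OFFSETS_MM = [-30, -15, 0, 15, 30]   # assumed lateral positions of the sensor row
KP, KD = 0.8, 0.2                            # assumed PD gains

def line_offset(readings):
    """Weighted centroid of sensor activations -> lateral line offset in mm."""
    total = sum(readings)
    if total == 0:
        return None                          # line lost (e.g. at an intersection marker)
    return sum(o * r for o, r in zip(SENSOR_OFFSETS_MM, readings)) / total

def steering_command(readings, prev_offset):
    offset = line_offset(readings)
    if offset is None:
        return 0.0, prev_offset              # hold course until auxiliary markers resolve it
    derivative = offset - (prev_offset if prev_offset is not None else offset)
    return -(KP * offset + KD * derivative), offset

cmd, prev = 0.0, None
for readings in [[0, 1, 3, 1, 0], [0, 0, 2, 3, 0], [0, 0, 1, 3, 1]]:  # line drifting right
    cmd, prev = steering_command(readings, prev)
    print(f"steer {cmd:+.1f}")
```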

  4. Using a cognitive architecture for general purpose service robot control

    Science.gov (United States)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. By using those skills the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills to activate or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should complete and addressing it towards the goal. Our proposal allows new goals to be included for the robot simply by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general-purpose service robot.

  5. Deep space telecommunications, navigation, and information management. Support of the space exploration initiative

    Science.gov (United States)

    Hall, Justin R.; Hastrup, Rolf C.

    The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.

  6. Cruise and turning performance of an improved fish robot actuated by piezoceramic actuators

    Science.gov (United States)

    Nguyen, Quang Sang; Heo, Seok; Park, Hoon Cheol; Goo, Nam Seo; Byun, Doyoung

    2009-03-01

    The purpose of this study is improvement of a fish robot actuated by four light-weight piezocomposite actuators (LIPCAs). In the fish robot, we developed a new actuation mechanism working without any gear and thus the actuation mechanism was simple in fabrication. By using the new actuation mechanism, cross section of the fish robot became 30% smaller than that of the previous model. Performance tests of the fish robot in water were carried out to measure tail-beat angle, thrust force, swimming speed and turning radius for tail-beat frequencies from 1Hz to 5Hz. The maximum swimming speed of the fish robot was 7.7 cm/s at 3.9Hz tail-beat frequency. Turning experiment showed that swimming direction of the fish robot could be controlled with 0.41 m turning radius by controlling tail-beat angle.

  7. Consistency of performance of robot-assisted surgical tasks in virtual reality.

    Science.gov (United States)

    Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N

    2009-01-01

    The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.

  8. Development of wall ranging radiation inspection robot

    International Nuclear Information System (INIS)

    Lee, B. J.; Yoon, J. S.; Park, Y. S.; Hong, D. H.; Oh, S. C.; Jung, J. H.; Chae, K. S.

    1999-03-01

    With the aging of the nation's nuclear facilities, the target of this project is to develop an underwater wall-ranging robotic vehicle which inspects the contamination level of the research reactor (TRIGA MARK III) as a preliminary step towards dismantling. The developed vehicle is driven by five thrusters and consists of small-sized control boards, an absolute position detector, and a radiation detector. An algorithm for autonomous navigation is also developed and its performance is tested through underwater experiments. The test result at the research reactor shows that the vehicle stays firmly attached to the wall while measuring the contamination level of the wall.

  9. Development of wall ranging radiation inspection robot

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B. J.; Yoon, J. S.; Park, Y. S.; Hong, D. H.; Oh, S. C.; Jung, J. H.; Chae, K. S

    1999-03-01

    With the aging of the nation's nuclear facilities, the target of this project is to develop an underwater wall-ranging robotic vehicle which inspects the contamination level of the research reactor (TRIGA MARK III) as a preliminary step towards dismantling. The developed vehicle is driven by five thrusters and consists of small-sized control boards, an absolute position detector, and a radiation detector. An algorithm for autonomous navigation is also developed and its performance is tested through underwater experiments. The test result at the research reactor shows that the vehicle stays firmly attached to the wall while measuring the contamination level of the wall.

  10. High level functions for the intuitive use of an assistive robot.

    Science.gov (United States)

    Lebec, Olivier; Ben Ghezala, Mohamed Walid; Leynart, Violaine; Laffont, Isabelle; Fattal, Charles; Devilliers, Laurence; Chastagnol, Clement; Martin, Jean-Claude; Mezouar, Youcef; Korrapatti, Hermanth; Dupourqué, Vincent; Leroux, Christophe

    2013-06-01

    This document presents the research project ARMEN (Assistive Robotics to Maintain Elderly People in a Natural environment), aimed at the development of a user friendly robot with advanced functions for assistance to elderly or disabled persons at home. Focus is given to the robot SAM (Smart Autonomous Majordomo) and its new features of navigation, manipulation, object recognition, and knowledge representation developed for the intuitive supervision of the robot. The results of the technical evaluations show the value and potential of these functions for practical applications. The paper also documents the details of the clinical evaluations carried out with elderly and disabled persons in a therapeutic setting to validate the project.

  11. Deep space telecommunications, navigation, and information management - Support of the Space Exploration Initiative

    Science.gov (United States)

    Hall, Justin R.; Hastrup, Rolf C.

    1990-10-01

    The principal challenges in providing effective deep space navigation, telecommunications, and information management architectures and designs for Mars exploration support are presented. The fundamental objectives are to provide the mission with the means to monitor and control mission elements, obtain science, navigation, and engineering data, compute state vectors and navigate, and to move these data efficiently and automatically between mission nodes for timely analysis and decision making. New requirements are summarized, and related issues and challenges including the robust connectivity for manned and robotic links, are identified. Enabling strategies are discussed, and candidate architectures and driving technologies are described.

  12. Event-Based Control Strategy for Mobile Robots in Wireless Environments.

    Science.gov (United States)

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-12-02

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked with classical navigation algorithms, such as wall following and obstacle avoidance, in scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses communication resources more efficiently than the classical discrete-time strategy while achieving the same accuracy.
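
    A minimal send-on-delta trigger illustrating the general event-based idea above: the robot transmits its pose over the wireless link only when it has moved more than a threshold since the last transmission, rather than every sampling period. The threshold and simulated trajectory are assumptions, not the paper's triggering condition.

```python
import math

class EventTrigger:
    def __init__(self, delta):
        self.delta = delta           # minimum motion (m) before a new transmission
        self.last_sent = None

    def step(self, pose):
        if self.last_sent is None or math.dist(pose, self.last_sent) > self.delta:
            self.last_sent = pose
            return True              # event: send over the RF link
        return False                 # no event: controller keeps using the last value

trigger = EventTrigger(delta=0.05)
sent = 0
for k in range(200):                               # 200 sample periods
    pose = (0.02 * k, 0.1 * math.sin(0.05 * k))    # simulated wall-following trajectory
    sent += trigger.step(pose)
print(f"transmitted {sent}/200 samples")
```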

  13. Robotics in Arthroplasty: A Comprehensive Review.

    Science.gov (United States)

    Jacofsky, David J; Allen, Mark

    2016-10-01

    Robotic-assisted orthopedic surgery has been available clinically in some form for over 2 decades, claiming to improve total joint arthroplasty by enhancing the surgeon's ability to reproduce alignment and therefore better restore normal kinematics. Current systems include robotic arms, robotic-guided cutting jigs, and robotic milling systems with a diversity of navigation strategies using active, semiactive, or passive control systems. Semiactive systems have become dominant, providing a haptic window through which the surgeon is able to consistently prepare an arthroplasty based on preoperative planning. A review of previous designs and clinical studies demonstrates that these robotic systems decrease variability and increase precision, primarily focusing on component positioning and alignment. Some early clinical results indicate decreased revision rates and improved patient satisfaction with robotic-assisted arthroplasty. Future design objectives include precise planning and further improved, consistent intraoperative execution. Despite this cautious optimism, many still wonder whether robotics will ultimately increase cost and operative time without objectively improving outcomes. Over the long term, every industry into which robotic technology has been introduced has ultimately shown an increase in production capacity, improved accuracy and precision, and lower cost. A new generation of robotic systems is now being introduced into the arthroplasty arena, and early results with unicompartmental knee arthroplasty and total hip arthroplasty have demonstrated improved accuracy of placement, improved satisfaction, and reduced complications. Further studies are needed to confirm the cost effectiveness of these technologies. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Physics-based approach to chemical source localization using mobile robotic swarms

    Science.gov (United States)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
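
    The two baseline strategies named above are easy to illustrate: the snippet below computes a chemotaxis heading (up the density gradient) and an anemotaxis heading (upwind) for a single robot on a toy Gaussian plume, plus a naive blend of the two. The real fluxotaxis rule evaluates the divergence of the chemical mass flux across the swarm's sensing lattice, so the blended heading here is only a crude single-robot stand-in, and all numbers are assumptions.

```python
import numpy as np

def plume_density(p, source=np.array([0.0, 0.0])):
    """Toy Gaussian plume centred on the source."""
    return np.exp(-np.linalg.norm(p - source) ** 2 / 50.0)

def local_gradient(f, p, eps=0.1):
    gx = (f(p + np.array([eps, 0.0])) - f(p - np.array([eps, 0.0]))) / (2 * eps)
    gy = (f(p + np.array([0.0, eps])) - f(p - np.array([0.0, eps]))) / (2 * eps)
    return np.array([gx, gy])

def unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

robot = np.array([6.0, 4.0])
wind = np.array([1.0, 0.2])      # measured wind, blowing roughly away from the source

chemotaxis_heading = unit(local_gradient(plume_density, robot))   # up the density gradient
anemotaxis_heading = unit(-wind)                                   # straight upwind
blended_heading = unit(chemotaxis_heading + anemotaxis_heading)    # crude stand-in for a flux-style blend

for name, h in [("chemotaxis", chemotaxis_heading),
                ("anemotaxis", anemotaxis_heading),
                ("blended", blended_heading)]:
    print(f"{name:>10}: {np.round(h, 2)}")
```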

  15. Swimming performance of a biomimetic compliant fish-like robot

    Science.gov (United States)

    Epps, Brenden P.; Valdivia Y Alvarado, Pablo; Youcef-Toumi, Kamal; Techet, Alexandra H.

    2009-12-01

    Digital particle image velocimetry and fluorescent dye visualization are used to characterize the performance of fish-like swimming robots. During nominal swimming, these robots produce a ‘V’-shaped double wake, with two reverse-Kármán streets in the far wake. The Reynolds number based on swimming speed and body length is approximately 7500, and the Strouhal number based on flapping frequency, flapping amplitude, and swimming speed is 0.86. It is found that swimming speed scales with the strength and geometry of a composite wake, which is constructed by freezing each vortex at the location of its centroid at the time of shedding. Specifically, we find that swimming speed scales linearly with vortex circulation. Also, swimming speed scales linearly with flapping frequency and the width of the composite wake. The thrust produced by the swimming robot is estimated using a simple vortex dynamics model, and we find satisfactory agreement between this estimate and measurements made during static load tests.
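
    The two non-dimensional numbers quoted above follow from the standard definitions Re = UL/ν and St = fA/U. The raw values below are assumed for illustration, chosen only to reproduce the reported orders of magnitude, not the robot's actual measurements.

```python
nu = 1.0e-6            # kinematic viscosity of water, m^2/s
L = 0.15               # assumed body length, m
U = 0.05               # assumed swimming speed, m/s
f = 1.4                # assumed flapping frequency, Hz
A = 0.031              # assumed flapping amplitude, m

reynolds = U * L / nu  # Re = U L / nu
strouhal = f * A / U   # St = f A / U
print(f"Re ~ {reynolds:.0f}, St ~ {strouhal:.2f}")
```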

  16. Performance Analysis of a Neuro-PID Controller Applied to a Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Saeed Pezeshki

    2012-11-01

    Full Text Available The performance of robot manipulators with nonadaptive controllers might degrade significantly due to open-loop instability and the effect of uncertainties in the robot model or environment. A novel Neural Network PID controller (NNP) is proposed in order to improve the system performance and its robustness. The Neural Network (NN) technique is applied to compensate for the effect of the uncertainties of the robot model. With the NN compensator introduced, the system errors and the NN weights with large dispersion are guaranteed to be bounded in the Lyapunov sense. The weights of the NN compensator are adaptively tuned. The simulation results show the effectiveness of the model validation approach and its efficiency in guaranteeing a stable and accurate trajectory tracking process in the presence of uncertainties.
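
    A minimal sketch of the general idea of a PID loop augmented by a small neural compensator adapted online from the tracking error, applied to a toy one-joint plant. The plant, gains, and the simple error-driven update rule are assumptions for illustration; the paper's NNP controller and its Lyapunov-based weight tuning are more involved.

```python
import numpy as np
rng = np.random.default_rng(1)

KP, KI, KD, DT = 8.0, 3.0, 2.0, 0.01
W1 = 0.1 * rng.standard_normal((6, 3))    # small single-hidden-layer compensator
W2 = 0.1 * rng.standard_normal(6)
ETA = 0.005                                # compensator learning rate (assumed)

def plant(x, v, u):
    """1-DOF joint with gravity-like and friction terms the PID does not model."""
    acc = u - 2.0 * v - 5.0 * np.sin(x) - 1.5 * np.tanh(10.0 * v)
    return x + v * DT, v + acc * DT

x = v = integ = 0.0
for _ in range(2000):                      # 20 s of simulated time
    ref = 1.0                              # step reference for the joint angle
    err = ref - x
    integ += err * DT
    feats = np.array([x, v, err])
    hidden = np.tanh(W1 @ feats)
    u_nn = float(W2 @ hidden)              # neural compensation term
    u = KP * err + KI * integ - KD * v + u_nn   # derivative acts on the measurement
    # crude error-driven weight adaptation (an assumption, not the paper's tuning law)
    W2 += ETA * err * hidden
    W1 += ETA * err * np.outer(W2 * (1.0 - hidden ** 2), feats)
    x, v = plant(x, v, u)

print(f"final angle {x:.3f} rad (target 1.000)")
```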

  17. Supervised Autonomy for Exploration and Mobile Manipulation in Rough Terrain with a Centaur-like Robot

    Directory of Open Access Journals (Sweden)

    Max Schwarz

    2016-10-01

    Full Text Available Planetary exploration scenarios illustrate the need for autonomous robots that are capable of operating in unknown environments without direct human interaction. At the DARPA Robotics Challenge, we demonstrated that our Centaur-like mobile manipulation robot Momaro can solve complex tasks when teleoperated. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for Momaro. Our robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. Based on the generated height map, we assess drivability, plan navigation paths, and execute them using the omnidirectional drive. Using its four legs, the robot adapts to the slope of the terrain. Momaro perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, on-the-fly reconfiguration, and teleoperation, we developed a ground station with suitable operator interfaces. To handle network communication interruptions and latencies between the robot and the ground station, we implemented a robust network layer for the ROS middleware. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015. We also discuss the lessons learned from this demonstration.
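
    A toy version of the height-map drivability assessment mentioned above: a cell counts as drivable if the local slope and the height step to its neighbours stay under thresholds. The grid resolution and limits are assumed values for illustration, not Momaro's.

```python
import numpy as np

RES = 0.05          # metres per grid cell (assumed)
MAX_SLOPE = 0.35    # roughly 20 degrees (assumed limit)
MAX_STEP = 0.10     # metres (assumed limit)

def drivability(height_map):
    """Boolean mask of cells the robot is allowed to drive over."""
    gy, gx = np.gradient(height_map, RES)
    slope = np.hypot(gx, gy)
    step = np.maximum.reduce([
        np.abs(np.diff(height_map, axis=0, prepend=height_map[:1, :])),
        np.abs(np.diff(height_map, axis=1, prepend=height_map[:, :1])),
    ])
    return (slope < MAX_SLOPE) & (step < MAX_STEP)

x, y = np.meshgrid(np.linspace(0, 2, 40), np.linspace(0, 2, 40))
terrain = 0.1 * np.sin(3 * x) + 0.05 * y             # gentle undulating slope
terrain[20:, 25:] += 0.3                              # a ledge the robot cannot climb
mask = drivability(terrain)
print(f"{mask.mean() * 100:.0f}% of cells drivable")
```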

  18. Modeling and evaluation of hand-eye coordination of surgical robotic system on task performance.

    Science.gov (United States)

    Gao, Yuanqian; Wang, Shuxin; Li, Jianmin; Li, Aimin; Liu, Hongbin; Xing, Yuan

    2017-12-01

    Robotic-assisted minimally invasive surgery changes the direct hand-eye coordination of traditional surgery to indirect instrument-camera coordination, which affects ergonomics, operation performance, and safety. A camera, two instruments, and a target, as the descriptors, are used to construct the workspace correspondence and geometrical relationships in a surgical operation. A parametric model with a set of parameters is proposed to describe the hand-eye coordination of the surgical robot. From the results, optimal values and acceptable ranges of these parameters are identified from two tasks. A 90° viewing angle had the longest completion time; a 60° instrument elevation angle and a 0° deflection angle gave better performance; and there was no significant difference in task performance among manipulation angles and observation distances. This hand-eye coordination model provides evidence for robot design, surgeon training, and robot initialization to achieve dexterous and safe manipulation in surgery. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Robot for Investigations and Assessments of Nuclear Areas

    Energy Technology Data Exchange (ETDEWEB)

    Kanaan, Daniel; Dogny, Stephane [AREVA D and S/DT, 30206 Bagnols sur Ceze (France)

    2015-07-01

    RIANA is a remote-controlled Robot dedicated to Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at disposal a proven robot, tested in hot cells; a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot will be able to generate and update a 2D map of its surroundings and the areas it explores. The position of the robot is given accurately on the map. Furthermore, the robot will be able to autonomously calculate, define and follow a trajectory between 2 points, taking into account its environment and obstacles. The robot is configurable to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, overlays on the generated map...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localization and 2D mapping, characterizations and nuclear measurements of floors and walls, non-destructive testing, and sample collection (solid and liquid). The benefits of using RIANA are: - reducing personnel exposure by limiting manual intervention time, - minimizing the time and reducing the cost of investigation operations, - providing critical inputs to set up and optimize cleanup and dismantling operations. (authors)

  20. Robot for Investigations and Assessments of Nuclear Areas

    International Nuclear Information System (INIS)

    Kanaan, Daniel; Dogny, Stephane

    2015-01-01

    RIANA is a remote-controlled Robot dedicated to Investigations and Assessments of Nuclear Areas. The development of RIANA is motivated by the need to have at disposal a proven robot, tested in hot cells; a robot capable of remotely investigating and characterising the inside of nuclear facilities in order to collect all the required data efficiently in the shortest possible time. It is based on a wireless, medium-sized remote carrier that may carry a wide variety of interchangeable modules, sensors and tools. It is easily customised to match specific requirements and quickly configured depending on the mission and the operator's preferences. RIANA integrates localisation and navigation systems. The robot will be able to generate and update a 2D map of its surroundings and the areas it explores. The position of the robot is given accurately on the map. Furthermore, the robot will be able to autonomously calculate, define and follow a trajectory between 2 points, taking into account its environment and obstacles. The robot is configurable to manage obstacles and restrict access to forbidden areas. RIANA allows advanced control of modules, sensors and tools; all collected data (radiological and measured data) are displayed in real time in different formats (charts, overlays on the generated map...) and stored in a single place so that they may be exported in a convenient format for data processing. This modular design gives RIANA the flexibility to perform multiple investigation missions where humans cannot work, such as: visual inspections, dynamic localization and 2D mapping, characterizations and nuclear measurements of floors and walls, non-destructive testing, and sample collection (solid and liquid). The benefits of using RIANA are: - reducing personnel exposure by limiting manual intervention time, - minimizing the time and reducing the cost of investigation operations, - providing critical inputs to set up and optimize cleanup and dismantling operations. (authors)

  1. Overcoming urban GPS navigation challenges through the use of MEMS inertial sensors and proper verification of navigation system performance

    Science.gov (United States)

    Vinande, Eric T.

    This research proposes several means of overcoming urban-environment challenges to ground vehicle global positioning system (GPS) receiver navigation performance through the integration of external sensor information. The effects of narrowband radio frequency interference and signal attenuation, both common in the urban environment, are examined with respect to receiver signal tracking processes. Low-cost microelectromechanical systems (MEMS) inertial sensors, suitable for the consumer market, are the focus of receiver augmentation as they provide an independent measure of motion and are independent of vehicle systems. A method for estimating the mounting angles of an inertial sensor cluster utilizing typical urban driving maneuvers is developed and is able to provide angular measurements within two degrees of truth. The integration of GPS and MEMS inertial sensors is developed utilizing a full-state navigation filter. Appropriate statistical methods are developed to evaluate the navigation improvement in the urban environment due to the addition of MEMS inertial sensors. A receiver evaluation metric that combines accuracy, availability, and maximum error measurements is presented and evaluated over several drive tests. Following a description of proper drive test techniques, record-and-playback systems are evaluated as the optimal way of testing multiple receivers and/or integrated navigation systems in the urban environment, as they simplify vehicle testing requirements.
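
    A hypothetical composite score in the spirit of the evaluation metric described above, combining RMS accuracy, fix availability, and worst-case error from a drive test. The weights, normalisation constants, and synthetic error samples are assumptions for illustration, not the thesis's formula.

```python
import numpy as np

def receiver_score(errors_m, expected_epochs, acc_ref=5.0, max_ref=50.0,
                   w_acc=0.4, w_avail=0.4, w_max=0.2):
    """Combine accuracy, availability, and maximum error into one score in [0, 1]."""
    errors_m = np.asarray(errors_m, dtype=float)
    availability = len(errors_m) / expected_epochs                  # fraction of epochs with a fix
    accuracy = np.sqrt(np.mean(errors_m ** 2))                      # RMS horizontal error
    worst = errors_m.max() if errors_m.size else max_ref
    score = (w_acc * max(0.0, 1.0 - accuracy / acc_ref)
             + w_avail * availability
             + w_max * max(0.0, 1.0 - worst / max_ref))
    return score, accuracy, availability, worst

# Synthetic drive-test error samples for two hypothetical configurations.
gps_only = np.concatenate([np.random.default_rng(2).gamma(2.0, 2.0, 800), [60.0]])
gps_mems = np.random.default_rng(3).gamma(2.0, 1.0, 950)
for name, errs in [("GPS only", gps_only), ("GPS + MEMS", gps_mems)]:
    s, acc, avail, worst = receiver_score(errs, expected_epochs=1000)
    print(f"{name}: score {s:.2f} (RMS {acc:.1f} m, avail {avail:.0%}, max {worst:.0f} m)")
```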

  2. Pedicle Screw Insertion Accuracy Using O-Arm, Robotic Guidance, or Freehand Technique: A Comparative Study.

    Science.gov (United States)

    Laudato, Pietro Aniello; Pierzchala, Katarzyna; Schizas, Constantin

    2018-03-15

    A retrospective radiological study. The aim of this study was to evaluate the accuracy of pedicle screw insertion using O-Arm navigation, robotic assistance, or a freehand fluoroscopic technique. Pedicle screw insertion using either "O-Arm" navigation or robotic devices is gaining popularity. Although several studies are available evaluating each of those techniques separately, no direct comparison has been attempted. Eighty-four patients undergoing implantation of 569 lumbar and thoracic screws were divided into three groups. Eleven patients (64 screws) had screws inserted using robotic assistance, 25 patients (191 screws) using the O-arm, while 48 patients (314 screws) had screws inserted using lateral fluoroscopy in a freehand technique. A single experienced spine surgeon assisted by a spinal fellow performed all procedures. Screw placement accuracy was assessed by two independent observers on postoperative computed tomography (CT) scans according to the A to D Rampersaud criteria. No statistically significant difference was noted between the three groups. About 70.4% of screws in the freehand group, 69.6% in the O-arm group, and 78.8% in the robotic group were placed completely within the pedicle margins (grade A) (P > 0.05). About 6.4% of screws were considered misplaced (grades C&D) in the freehand group, 4.2% in the O-arm group, and 4.7% in the robotic group (P > 0.05). The spinal fellow inserted screws with the same accuracy as the senior surgeon (P > 0.05). The advent of new technologies does not appear to alter accuracy of screw placement in our setting. Under supervision, spinal fellows might perform as well as experienced surgeons using new tools. The lack of difference in accuracy does not imply that the above-mentioned techniques have no added advantages. Other issues, such as surgeon/patient radiation, fiddle factor, teaching suitability, etc., outside the scope of our present study, need further assessment. Level of Evidence: 3.

  3. Accurate multi-robot targeting for keyhole neurosurgery based on external sensor monitoring.

    Science.gov (United States)

    Comparetti, Mirko Daniele; Vaccarella, Alberto; Dyagilev, Ilya; Shoham, Moshe; Ferrigno, Giancarlo; De Momi, Elena

    2012-05-01

    Robotics has recently been introduced in surgery to improve intervention accuracy, to reduce invasiveness and to allow new surgical procedures. In this framework, the ROBOCAST system is an optically surveyed multi-robot chain aimed at enhancing the accuracy of surgical probe insertion during keyhole neurosurgery procedures. The system encompasses three robots, connected as a multiple kinematic chain (serial and parallel), totalling 13 degrees of freedom, and it is used to automatically align the probe onto a desired planned trajectory. The probe is then inserted in the brain, towards the planned target, by means of a haptic interface. This paper presents a new iterative targeting approach to be used in surgical robotic navigation, where the multi-robot chain is used to align the surgical probe to the planned pose, and an external sensor is used to decrease the alignment errors. The iterative targeting was tested in an operating room environment using a skull phantom, and the targets were selected on magnetic resonance images. The proposed targeting procedure achieves a residual median Euclidean distance of about 0.3 mm between the planned and the desired targets, thus satisfying the surgical accuracy requirement (1 mm) dictated by the resolution of the medical images. Performance proved to be independent of the calibration accuracy of the robot's optical sensor.
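    The iterative targeting idea — command the multi-robot chain toward the planned pose, let the external optical sensor measure the residual error, and fold that error back into the next command — can be summarised by the toy loop below. The function names, gain-free correction, tolerance and iteration limit are illustrative assumptions, not the ROBOCAST implementation.

```python
import numpy as np

def iterative_targeting(command_robot, measure_pose, target, tol=0.3, max_iter=10):
    """Drive the residual between the externally measured probe position and the
    planned target below `tol` (mm) by repeatedly applying the measured error
    as a corrective offset to the commanded pose."""
    offset = np.zeros(3)
    error = np.full(3, np.inf)
    for _ in range(max_iter):
        command_robot(target + offset)          # move the multi-robot chain
        measured = measure_pose()               # external optical localizer reading
        error = target - measured               # residual targeting error
        if np.linalg.norm(error) < tol:
            break
        offset += error                         # correct the next commanded pose
    return np.linalg.norm(error)
```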

  4. The Effect of Terrain Inclination on Performance and the Stability Region of Two-Wheeled Mobile Robots

    Directory of Open Access Journals (Sweden)

    Zareena Kausar

    2012-11-01

    Full Text Available Two-wheeled mobile robots (TWMRs) have a capability of avoiding the tip-over problem on inclined terrain by adjusting the centre of mass position of the robot body. The effects of terrain inclination on the robot performance are studied to exploit this capability. Prior to the real-time implementation of position control, an estimation of the stability region of the TWMR is essential for safe operation. A numerical method to estimate the stability region is applied and the effects of inclined surfaces on the performance and stability region of the robot are investigated. The dynamics of a TWMR are modelled on a general uneven terrain and reduced for the cases of inclined and horizontal flat terrain. A full-state feedback (FSFB) controller based on optimal gains is designed for speed tracking on horizontal flat terrain. The performance and stability regions are simulated for the robot on horizontal flat and inclined terrain with the same controller. The results show a variation in equilibrium points and a reduction in the stability region for robot motion on inclined terrain.
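    Full-state feedback gains described as "optimal" are typically obtained from a linear-quadratic regulator computed on the linearised model. The sketch below shows that computation with placeholder matrices; the actual TWMR model and weighting matrices from the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Optimal full-state-feedback gain K for x_dot = A x + B u,
    minimizing the quadratic cost of x'Qx + u'Ru."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^{-1} B' P

# Placeholder linearized model (NOT the paper's TWMR matrices).
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))
u = lambda x, x_ref: -K @ (x - x_ref)           # control law u = -K (x - x_ref)
```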

  5. Smart swarms of bacteria-inspired agents with performance adaptable interactions.

    Directory of Open Access Journals (Sweden)

    Adi Shklarsh

    2011-09-01

    Full Text Available Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment--by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots.

  6. Smart swarms of bacteria-inspired agents with performance adaptable interactions.

    Science.gov (United States)

    Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel

    2011-09-01

    Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment--by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots.

  7. Trajectory generation for two robots cooperating to perform a task

    International Nuclear Information System (INIS)

    Lewis, C.L.

    1995-01-01

    This paper formulates an algorithm for trajectory generation for two robots cooperating to perform an assembly task. Treating the two robots as a single redundant system, this paper derives two Jacobian matrices which relate the joint rates of the entire system to the relative motion of the grippers with respect to one another. The advantage of this formulation over existing methods is that a variety of secondary criteria can be conveniently satisfied using motion in the null-space of the relative Jacobian. This paper presents methods for generating dual-arm joint trajectories which perform assembly tasks while at the same time avoiding obstacles and joint limits, and also maintaining constraints on the absolute position and orientation of the end-effectors
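    The redundancy resolution sketched in the abstract — track the relative gripper motion with the relative Jacobian and push secondary criteria (obstacle and joint-limit avoidance) into its null space — has the standard pseudoinverse-plus-projection form. The snippet below is a generic illustration; the relative Jacobian itself depends on the particular arms and is treated as a given input.

```python
import numpy as np

def dual_arm_joint_rates(J_rel, xdot_rel, secondary_grad):
    """Joint rates for the combined two-robot system.

    J_rel          : relative Jacobian mapping all joint rates to the motion
                     of one gripper relative to the other (m x n, n > m).
    xdot_rel       : desired relative gripper twist (m,).
    secondary_grad : gradient of a secondary criterion (n,), e.g. obstacle
                     or joint-limit avoidance, projected into the null space.
    """
    J_pinv = np.linalg.pinv(J_rel)
    null_proj = np.eye(J_rel.shape[1]) - J_pinv @ J_rel     # null-space projector
    return J_pinv @ xdot_rel + null_proj @ secondary_grad
```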

  8. Improving mobile robot localization: grid-based approach

    Science.gov (United States)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method by exploring the geometric feature of the tile patterns. On top of the classical inertia-based positioning, our approach employs three fiber-optic sensors that are assembled under the bottom of the robot, presenting an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring the grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertia positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
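    The correction step can be pictured as follows: whenever one of the floor-facing sensors reports a grid-crossing event, the corresponding coordinate of the inertial estimate is pulled toward the nearest grid line of known pitch. The tile pitch and blending weight in this sketch are assumed values, not those of the cited robot.

```python
def correct_on_grid_crossing(estimate, axis, pitch=0.3, weight=0.8):
    """Blend an inertial position estimate toward the nearest grid line.

    estimate : current (x, y) estimate from inertia-based positioning [m]
    axis     : 0 if a vertical line (constant x) was detected, 1 for horizontal
    pitch    : tile size of the grid layout [m] (assumed value)
    weight   : how strongly the line observation overrides odometry
    """
    x, y = estimate
    coord = (x, y)[axis]
    nearest_line = round(coord / pitch) * pitch      # closest grid line position
    corrected = (1 - weight) * coord + weight * nearest_line
    return (corrected, y) if axis == 0 else (x, corrected)
```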

  9. Motion Detection from Mobile Robots with Fuzzy Threshold Selection in Consecutive 2D Laser Scans

    Directory of Open Access Journals (Sweden)

    María A. Martínez

    2015-01-01

    Full Text Available Motion detection and tracking is a relevant problem for mobile robots during navigation to avoid collisions in dynamic environments or in applications where service robots interact with humans. This paper presents a simple method to distinguish mobile obstacles from the environment that is based on applying fuzzy threshold selection to consecutive two-dimensional (2D) laser scans previously matched with robot odometry. The proposed method has been tested with the Auriga-α mobile robot indoors to estimate the motion of nearby pedestrians.
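    The underlying detection step — align two consecutive scans using odometry and flag beams whose range change exceeds a threshold — is summarised below. A fixed assumed threshold stands in for the fuzzy threshold selection that the paper actually performs.

```python
import numpy as np

def moving_points(prev_scan, curr_scan, threshold=0.15):
    """Label beams of two odometry-aligned 2D laser scans as moving.

    prev_scan, curr_scan : range arrays [m] already expressed in the same frame
    threshold            : fixed range-change threshold [m]; the paper instead
                           selects this value with a fuzzy criterion.
    """
    diff = np.abs(np.asarray(curr_scan) - np.asarray(prev_scan))
    return diff > threshold          # boolean mask of candidate mobile obstacles
```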

  10. Supporting robotics technology requirements through research in intelligent machines

    Energy Technology Data Exchange (ETDEWEB)

    Mann, R.C.

    1995-02-01

    "Safer, better, cheaper" are recurring themes in many robot development efforts. Significant improvements are being accomplished with existing technology, but basic research sets the foundations for future improvements and breakthrough discoveries. Advanced robots represent systems that integrate the three basic functions of sensing, reasoning, and acting (locomotion and manipulation) into one functional unit. Depending on the application requirements, some of these functions are implemented at a more or less advanced level than others. For example, some navigation tasks can be accomplished with purely reactive control and do not require sophisticated reasoning and planning methodologies. Robotics work at the Oak Ridge National Laboratory (ORNL) spans the spectrum from basic research to application-specific development and rapid prototyping of systems. This presentation summarizes recent highlights of the robotics research activities at ORNL.

  11. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    The main activity in this year is to develop both remote handling systems and telepresence techniques, which can be used to alleviate the burden on people involved in extremely hazardous working areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit are developed. Also, an autostereo TV system, which falls under the category of next-generation stereo imaging technology, has been studied. The performance of the KAERI-PSM system for remote handling tasks is evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The result shows that the KAERI-PSM system is superior to the other stereo imaging systems in terms of remote operation speed and accuracy. An automatic recognition algorithm for instrument panels is studied and a passive visual target tracking system is developed. The 5-DOF camera serving unit has been designed and fabricated; it is designed to function like the human eye. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis is developed and a remote temperature monitoring technique using fiber optics is investigated. Also, a two-dimensional radioactivity sensor head for a radiation profile monitoring system is designed. In the part of intelligent robotics, a mobile robot is fabricated and its autonomous navigation using fuzzy control logic is studied. The remote handling and telepresence techniques developed in this project can be applied to the nozzle-dam installation/removal robot system, reactor inspection unit, underwater nuclear pellet inspection and pipe abnormality inspection, and will also find applications in general industry, medical science, and the military as well as in nuclear facilities. 203 figs, 12 tabs, 72 refs. (Author).

  12. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    Science.gov (United States)

    Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2017-06-26

    Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even more like daily episodes. Moreover, it can retrieve the correct episode from partial input cues robustly. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, for performing the three tasks of arranging toys, making cereal, and disposing of garbage.

  13. The Study of Fractional Order Controller with SLAM in the Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2014-01-01

    Full Text Available We present a fractional order PI controller (FOPI) with a SLAM method, and the proposed method is used in the simulation of navigation of the NAO humanoid robot from Aldebaran. We discretize the transfer function by the Al-Alaoui generating function and then obtain the FOPI controller by Power Series Expansion (PSE). The FOPI can be used as a correction part to reduce the accumulated error of SLAM. In the FOPI controller, the parameters (Kp, Ki, and α) need to be tuned to obtain the best performance. Finally, we compare the position results without a controller, with a PI controller, and with the FOPI controller. The simulations show that the FOPI controller can reduce the error between the real position and the estimated position. The proposed method is efficient and reliable for NAO navigation.
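    The controller structure referred to above is the fractional-order PI law with non-integer integration order α; a digital realisation consistent with the abstract substitutes the Al-Alaoui generating function for s and truncates the resulting fractional power by power series expansion (PSE). A sketch is given below (T is the sample period; the truncation order used by the authors is not stated and is left open):

```latex
C(s) = K_p + \frac{K_i}{s^{\alpha}},
\qquad
s \;\approx\; \omega(z^{-1}) \;=\; \frac{8}{7T}\,\frac{1 - z^{-1}}{1 + \tfrac{1}{7}\,z^{-1}},
\qquad
C(z) \;\approx\; K_p + K_i \left(\omega(z^{-1})\right)^{-\alpha},
```

    with the final fractional power expanded as a truncated power series in z^{-1}.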

  14. Modeling and identification for high-performance robot control : an RRR-robotic arm case study

    NARCIS (Netherlands)

    Kostic, D.; Jager, de A.G.; Steinbuch, M.; Hensen, R.H.A.

    2004-01-01

    We explain a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: (i) derivation of robot kinematic and dynamic models and establishing correctness of their structures; (ii) experimental estimation

  15. Autonomous mobile robot for radiologic surveys

    International Nuclear Information System (INIS)

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-01-01

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures

  16. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    Science.gov (United States)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. It is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.

  17. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  18. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard Grinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.
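    The adaptive sensory-processing network described above — two fully connected neurons whose weights change online through correlation-based (Hebbian) learning with synaptic scaling — can be sketched roughly as below. The initial weights, learning rate, scaling rate and target activity are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

# Two fully connected neurons with recurrent weights W, driven by sensory
# inputs u.  Weights adapt online: a Hebbian (correlation) term strengthens
# co-active connections, while synaptic scaling pulls activity toward a
# target rate so the weights do not grow without bound.
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])                    # initial recurrent weights (assumed)
eta, gamma, target = 0.01, 0.001, 0.5         # learning rate, scaling rate, target output

def step(o, u):
    """One update of neuron outputs and plastic weights."""
    global W
    o_new = np.tanh(W @ o + u)                    # recurrent neural dynamics
    W += eta * np.outer(o_new, o)                 # correlation-based learning
    W += gamma * (target - o_new)[:, None] * W    # synaptic scaling of incoming weights
    return o_new
```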

  19. A Coordinated Control Architecture for Disaster Response Robots

    Science.gov (United States)

    2016-01-01

    ... to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. Visual Odometry: The YouTube link ... depressed the accelerator pedal. We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed. ...

  20. Global navigation satellite systems performance analysis and augmentation strategies in aviation

    Science.gov (United States)

    Sabatini, Roberto; Moore, Terry; Ramasamy, Subramanian

    2017-11-01

    In an era of significant air traffic expansion characterized by a rising congestion of the radiofrequency spectrum and a widespread introduction of Unmanned Aircraft Systems (UAS), Global Navigation Satellite Systems (GNSS) are being exposed to a variety of threats including signal interferences, adverse propagation effects and challenging platform-satellite relative dynamics. Thus, there is a need to characterize GNSS signal degradations and assess the effects of interfering sources on the performance of avionics GNSS receivers and augmentation systems used for an increasing number of mission-essential and safety-critical aviation tasks (e.g., experimental flight testing, flight inspection/certification of ground-based radio navigation aids, wide area navigation and precision approach). GNSS signal deteriorations typically occur due to antenna obscuration caused by natural and man-made obstructions present in the environment (e.g., elevated terrain and tall buildings when flying at low altitude) or by the aircraft itself during manoeuvring (e.g., aircraft wings and empennage masking the on-board GNSS antenna), ionospheric scintillation, Doppler shift, multipath, jamming and spurious satellite transmissions. Any one of these phenomena can result in partial to total loss of tracking and possible tracking errors, depending on the severity of the effect and the receiver characteristics. Having characterised these GNSS performance threats, the various augmentation strategies adopted in the Communication, Navigation, Surveillance/Air Traffic Management and Avionics (CNS + A) context are addressed in detail. GNSS augmentation can take many forms but all strategies share the same fundamental principle of providing supplementary information whose objective is improving the performance and/or trustworthiness of the system. Hence it is of paramount importance to consider the synergies offered by different augmentation strategies including Space Based Augmentation System (SBAS), Ground

  1. Designing a social and assistive robot for seniors.

    Science.gov (United States)

    Eftring, H; Frennert, S

    2016-06-01

    The development of social assistive robots is an approach with the intention of preventing and detecting falls among seniors. There is a need for a relatively low-cost mobile robot with an arm and a gripper which is small enough to navigate through private homes. User requirements of a social assistive robot were collected using workshops, a questionnaire and interviews. Two prototype versions of a robot were designed, developed and tested by senior citizens (n = 49) in laboratory trials for 2 h each and in the private homes of elderly persons (n = 18) for 3 weeks each. The user requirement analysis resulted in a specification of tasks the robot should be able to do to prevent and detect falls. It was a challenge but possible to design and develop a robot where both the senior and the robot arm could reach the necessary interaction points of the robot. The seniors experienced the robot as happy and friendly. They wanted the robot to be narrower so it could pass through narrow passages in the home and they also wanted it to be able to pass over thresholds without using ramps and to drive over carpets. User trials in seniors' homes are very important to acquire relevant knowledge for developing robots that can handle real life situations in the domestic environment. Very high reliability of a robot is needed to get feedback about how seniors experience the overall behavior of the robot and to find out if the robot could reduce falls and improve the feeling of security for seniors living alone.

  2. 3D straight-stick laparoscopy versus 3D robotics for task performance in novice surgeons: a randomised crossover trial.

    Science.gov (United States)

    Shakir, Fevzi; Jan, Haider; Kent, Andrew

    2016-12-01

    The advent of three-dimensional passive stereoscopic imaging has led to the development of 3D laparoscopy. In simulation tasks, a reduction in error rate and performance time is seen with 3D compared to two-dimensional (2D) laparoscopy with both novice and expert surgeons. Robotics utilises 3D and instrument articulation through a console interface. Robotic trials have demonstrated that tasks performed in 3D produced fewer errors and quicker performance times compared with those in 2D. It was therefore perceived that the main advantage of robotic surgery was in fact 3D. Our aim was to compare 3D straight-stick laparoscopic task performance (3D) with robotic 3D (Robot), to determine whether robotic surgery confers additional benefit over and above 3D visualisation. We randomised 20 novice surgeons to perform four validated surgical tasks, either with straight-stick 3D laparoscopy followed by 3D robotic surgery or in the reverse order. The trial was conducted in two fully functional operating theatres. The primary outcome of the study was the error rate as defined for each task, and the secondary outcome was the time taken to complete each task. The participants were asked to perform the tasks as quickly and as accurately as possible. Data were analysed using SPSS version 21. The median error rate for completion of all four tasks was 2.75 with the robot and 5.25 with 3D laparoscopy, and the median time to complete all four tasks was 157.1 s with the robot and 342.5 s with 3D laparoscopy; both comparisons favoured the robot. These results indicate an advantage of 3D robotic systems over 3D straight-stick laparoscopy, in terms of reduced error rate and quicker task performance time.

  3. Demonstration of coherent Doppler lidar for navigation in GPS-denied environments

    Science.gov (United States)

    Amzajerdian, Farzin; Hines, Glenn D.; Pierrottet, Diego F.; Barnes, Bruce W.; Petway, Larry B.; Carson, John M.

    2017-05-01

    A coherent Doppler lidar has been developed to address NASA's need for a high-performance, compact, and cost-effective velocity and altitude sensor onboard its landing vehicles. Future robotic and manned missions to solar system bodies require precise ground-relative velocity vector and altitude data to execute complex descent maneuvers and safe, soft landing at a pre-designated site. This lidar sensor, referred to as a Navigation Doppler Lidar (NDL), meets the required performance of the landing missions while complying with vehicle size, mass, and power constraints. Operating from up to four kilometers altitude, the NDL obtains velocity and range measurements with precisions reaching 2 cm/s and 2 meters, respectively, dominated by the vehicle motion. Terrestrial aerial vehicles will also benefit from NDL data products as an enhancement or replacement to GPS systems when GPS is unavailable or redundancy is needed. The NDL offers a viable option for aircraft navigation in areas where the GPS signal can be blocked or jammed by intentional or unintentional interference. The NDL transmits three laser beams at different pointing angles toward the ground to measure range and velocity along each beam using a frequency modulated continuous wave (FMCW) technique. The three line-of-sight measurements are then combined in order to determine the three components of the vehicle velocity vector and its altitude relative to the ground. This paper describes the performance and capabilities that the NDL demonstrated through extensive ground tests, helicopter flight tests, and onboard an autonomous rocket-powered test vehicle while operating in closed loop with a guidance, navigation, and control (GN&C) system.
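    Combining the three line-of-sight Doppler measurements into a three-component velocity vector is a small linear inversion: each beam measures the projection of the velocity onto its pointing direction. The sketch below illustrates this with assumed beam-pointing angles, which are not the NDL's actual geometry.

```python
import numpy as np

def velocity_from_los(unit_vectors, los_velocities):
    """Recover the platform velocity vector from line-of-sight Doppler measurements.

    unit_vectors   : (3, 3) array, row i is the unit pointing vector of beam i
    los_velocities : (3,) measured velocities along each beam [m/s]
    """
    # Each beam measures v_los_i = u_i . v, so stack into A v = b and solve.
    A = np.asarray(unit_vectors)
    b = np.asarray(los_velocities)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

# Example with assumed (not actual NDL) beam directions, each canted 22.5 deg
# off nadir and spread 120 deg apart in azimuth.
angles = np.deg2rad([0.0, 120.0, 240.0])
cant = np.deg2rad(22.5)
beams = np.stack([np.array([np.sin(cant) * np.cos(a),
                            np.sin(cant) * np.sin(a),
                            np.cos(cant)]) for a in angles])
# v = velocity_from_los(beams, measured_los)   # measured_los: three Doppler readings
```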

  4. Formations of Robotic Swarm: An Artificial Force Based Approach

    Directory of Open Access Journals (Sweden)

    Samitha W. Ekanayake

    2009-03-01

    Full Text Available Cooperative control of multiple mobile robots is an attractive and challenging problem which has drawn considerable attention in the recent past. This paper introduces a scalable decentralized control algorithm to navigate a group of mobile robots (a swarm) into a predefined shape in 2D space. The proposed architecture uses artificial forces to control mobile agents into the shape and spread them inside the shape while avoiding inter-member collisions. The theoretical analysis of the swarm behavior describes the motion of the complete swarm and of individual members in relevant situations. We use computer-simulated case studies to verify the theoretical assertions and to demonstrate the robustness of the swarm under external disturbances such as the death of agents, change of shape, etc. The performance of the proposed distributed swarm control architecture is also investigated in the presence of realistic implementation issues such as localization errors, communication range limitations, boundedness of forces, etc.
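    A minimal sketch of the artificial-force idea — an attractive pull toward the desired shape combined with short-range repulsion between members — is given below. The force laws, gains and safety radius are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def artificial_force(pos, positions, shape_point, k_att=1.0, k_rep=0.5, r_safe=0.4):
    """Net artificial force on one agent.

    pos         : (2,) position of this agent
    positions   : (N, 2) positions of all agents (including this one)
    shape_point : (2,) nearest point of the desired shape boundary/interior
    """
    force = k_att * (shape_point - pos)                 # pull toward the shape
    for other in positions:
        d = pos - other
        dist = np.linalg.norm(d)
        if 0 < dist < r_safe:                           # short-range collision avoidance
            force += k_rep * (1.0 / dist - 1.0 / r_safe) * d / dist
    return force
```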

  5. Formations of Robotic Swarm: An Artificial Force Based Approach

    Directory of Open Access Journals (Sweden)

    Samitha W. Ekanayake

    2010-09-01

    Full Text Available Cooperative control of multiple mobile robots is an attractive and challenging problem which has drawn considerable attention in the recent past. This paper introduces a scalable decentralized control algorithm to navigate a group of mobile robots (a swarm) into a predefined shape in 2D space. The proposed architecture uses artificial forces to control mobile agents into the shape and spread them inside the shape while avoiding inter-member collisions. The theoretical analysis of the swarm behavior describes the motion of the complete swarm and of individual members in relevant situations. We use computer-simulated case studies to verify the theoretical assertions and to demonstrate the robustness of the swarm under external disturbances such as the death of agents, change of shape, etc. The performance of the proposed distributed swarm control architecture is also investigated in the presence of realistic implementation issues such as localization errors, communication range limitations, boundedness of forces, etc.


  8. Two-Armed, Mobile, Sensate Research Robot

    Science.gov (United States)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and

  9. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by the combination of different types of sensors including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization procedure based on artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has a high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results demonstrate centimetre-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
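    The fusion scheme in the abstract — wheel odometry and compass driving the prediction step of an extended Kalman filter, with camera observations correcting the estimate — follows the standard EKF recipe. Below is a stripped-down planar sketch with a bearing-only camera measurement to a single known target; the motion model, measurement model and noise matrices are simplified assumptions, not the paper's formulation.

```python
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate pose x = [px, py, heading] with wheel-odometry inputs v, omega."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_bearing(x, P, z, target, R):
    """Correct the pose with a camera bearing z to a known target; R is (1,1) noise."""
    dx, dy = target[0] - x[0], target[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.arctan2(dy, dx) - x[2]                    # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0]])              # measurement Jacobian
    y = np.array([np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))])  # wrapped innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + (K @ y).ravel(), (np.eye(3) - K @ H) @ P
```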

  10. Robotic inspection of nuclear waste storage facilities

    International Nuclear Information System (INIS)

    Fulbright, R.; Stephens, L.M.

    1995-01-01

    The University of South Carolina and the Westinghouse Savannah River Company have developed a prototype mobile robot designed to perform autonomous inspection of nuclear waste storage facilities. The Stored Waste Autonomous Mobile Inspector (SWAMI) navigates and inspects rows of nuclear waste storage drums, in aisles as narrow as 34 inches with drums stacked three high on each side. SWAMI reads drum barcodes, captures drum images, and monitors floor-level radiation levels. The topics covered in this article reporting on SWAMI include the following: overall system design; typical mission scenario; barcode reader subsystem; video subsystem; radiation monitoring subsystem; position determination subsystem; onboard control system hardware; software development environment; GENISAS, a C++ library; MOSAS, an automatic code generating tool. 10 figs

  11. Performing mathematics activities with non-standard units of measurement using robots controlled via speech-generating devices: three case studies.

    Science.gov (United States)

    Adams, Kim D; Cook, Albert M

    2017-07-01

    Purpose To examine how using a Lego robot controlled via a speech-generating device (SGD) can contribute to how students with physical and communication impairments perform hands-on and communicative mathematics measurement activities. This study was a follow-up to a previous study. Method Three students with cerebral palsy used the robot to measure objects using non-standard units, such as straws, and then compared and ordered the objects using the resulting measurement. Their performance was assessed, and the manipulation and communication events were observed. Teachers and education assistants were interviewed regarding robot use. Results Similar benefits to the previous study were found in this study. Gaps in student procedural knowledge were identified such as knowing to place measurement units tip-to-tip, and students' reporting revealed gaps in conceptual understanding. However, performance improved with repeated practice. Stakeholders identified that some robot tasks took too long or were too difficult to perform. Conclusions Having access to both their SGD and a robot gave the students multiple ways to show their understanding of the measurement concepts. Though they could participate actively in the new mathematics activities, robot use is most appropriate in short tasks requiring reasonable operational skill. Implications for Rehabilitation Lego robots controlled via speech-generating devices (SGDs) can help students to engage in the mathematics pedagogy of performing hands-on activities while communicating about concepts. Students can "show what they know" using the Lego robots, and report and reflect on concepts using the SGD. Level 1 and Level 2 mathematics measurement activities have been adapted to be accomplished by the Lego robot. Other activities can likely be accomplished with similar robot adaptations (e.g., gripper, pen). It is not recommended to use the robot to measure items that are long, or perform measurements that require high

  12. The use of time-of-flight camera for navigating robots in computer-aided surgery: monitoring the soft tissue envelope of minimally invasive hip approach in a cadaver study.

    Science.gov (United States)

    Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael

    2014-12-01

    Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.

  13. Environmental mobile robot based on artificial intelligence and visual perception for weed elimination

    Directory of Open Access Journals (Sweden)

    Nabeel Kadim Abid AL-SAHIB

    2012-12-01

    Full Text Available This research presents a modified design of the Pioneer P3-DX mobile robot, adding a mechanical gripper for eliminating weeds and a digital camera for capturing images of the field. A wireless kit controlling the gripper motors is also included. This work consists of two parts. The theoretical part contains a program that reads the image and finds the weed coordinates, which are passed to the path-planning software to locate the weeds, green plants and sick plants. These positions are then sent to the mobile robot navigation software, and the wireless signal is sent to the gripper. In the experimental part, a digital camera takes an image of the agricultural field and sends it to the computer for processing. The weed coordinates are then sent to the mobile robot by the navigation software, and the wireless signal is sent to the kit controlling the gripper motor through the computer interface program. The first trial on the agricultural field shows that the mobile robot can discriminate between green plants, weeds and sick plants and can take the right decision with respect to treatment or elimination. The experimental work shows that the environmental mobile robot successfully detects the weeds, sick plants and healthy plants, travels from its base to the target points represented by the weeds and sick plants along the optimum path, and can eliminate the weeds and treat the sick plants correctly.

  14. Computer-based laparoscopic and robotic surgical simulators: performance characteristics and perceptions of new users.

    Science.gov (United States)

    Lin, David W; Romanelli, John R; Kuhn, Jay N; Thompson, Renee E; Bush, Ron W; Seymour, Neal E

    2009-01-01

    This study aimed to define perceptions of the need for and the value of new simulation devices for laparoscopic and robot-assisted surgery. The initial experience of surgeons using both robotic and nonrobotic laparoscopic simulators to perform an advanced laparoscopic skill was evaluated. At the 2006 Society of American Gastroesophageal Surgeons (SAGES) meeting, 63 Learning Center attendees used a new virtual reality robotic surgery simulator (SEP Robot) and either a computer-enhanced laparoscopic simulator (ProMIS) or a virtual reality simulator (SurgicalSIM). Demographic and training data were collected by an intake survey. Subjects then were assessed during one iteration of laparoscopic suturing and knot-tying on the SEP Robot and either the ProMIS or the SurgicalSIM. A posttask survey determined users' impressions of task realism, interface quality, and educational value. Performance data were collected and comparisons made between user-defined groups, different simulation platforms, and posttask survey responses. The task completion rate was significantly greater for experts than for nonexperts on the virtual reality platforms (SurgicalSIM: 100% vs 36%; SEP Robot: 93% vs 63%). Task completion discriminated expertise on the virtual reality platforms, whereas simulator metrics best discriminated expertise for the videoscopic platform. Similar comparisons for the virtual reality platforms were not feasible because of the low task completion rate for nonexperts. The added degrees of freedom associated with the robotic surgical simulator instruments facilitated completion of the task by nonexperts. All platforms were perceived as effective training tools.

  15. Gait performance and foot pressure distribution during wearable robot-assisted gait in elderly adults.

    Science.gov (United States)

    Lee, Su-Hyun; Lee, Hwang-Jae; Chang, Won Hyuk; Choi, Byung-Ok; Lee, Jusuk; Kim, Jeonghun; Ryu, Gyu-Ha; Kim, Yun-Hee

    2017-11-28

    A robotic exoskeleton device is an intelligent system designed to improve gait performance and quality of life for the wearer. Robotic technology has developed rapidly in recent years, and several robot-assisted gait devices were developed to enhance gait function and activities of daily living in elderly adults and patients with gait disorders. In this study, we investigated the effects of the Gait-enhancing Mechatronic System (GEMS), a new wearable robotic hip-assist device developed by Samsung Electronics Co, Ltd., Korea, on gait performance and foot pressure distribution in elderly adults. Thirty elderly adults who had no neurological or musculoskeletal abnormalities affecting gait participated in this study. A three-dimensional (3D) motion capture system, surface electromyography and the F-Scan system were used to collect data on spatiotemporal gait parameters, muscle activity and foot pressure distribution under three conditions: free gait without robot assistance (FG), robot-assisted gait with zero torque (RAG-Z) and robot-assisted gait (RAG). We found increased gait speed, cadence, stride length and single support time in the RAG condition. Reduced rectus femoris and medial gastrocnemius muscle activity throughout the terminal stance phase and reduced effort of the medial gastrocnemius muscle throughout the pre-swing phase were also observed in the RAG condition. In addition, walking with the assistance of GEMS resulted in a significant increase in foot pressure distribution, specifically in maximum force and peak pressure of the total foot, medial masks, anterior masks and posterior masks. The results of the present study reveal that GEMS may present an alternative way of restoring age-related changes in gait such as gait instability with muscle weakness, reduced step force and lower foot pressure in elderly adults. In addition, GEMS improved gait performance by improving push-off power and walking speed and reducing muscle activity in the lower

  16. A Comparative Study of Biologically Inspired Walking Gaits through Waypoint Navigation

    Directory of Open Access Journals (Sweden)

    Umar Asif

    2011-01-01

    Full Text Available This paper investigates the locomotion of a walking robot by delivering a comparative study of three different biologically inspired walking gaits, namely: tripod, ripple, and wave, in terms of ground slippage they experience while walking. The objective of this study is to identify the gait model which experiences the minimum slippage while walking on a ground with a specific coefficient of friction. To accomplish this feat, the robot is steered over a reference path using a waypoint navigation algorithm, and the divergence of the robot from the reference path is investigated in terms of slip errors. Experiments are conducted through closed-loop simulations using an open dynamics engine which emphasizes the fact that due to uneven and unsymmetrical distribution of payload in tripod and ripple gait models, the robot experiences comparatively larger drift in these gaits than when using the wave gait model in which the distribution of payload is even and symmetrical on both sides of the robot body. The paper investigates this phenomenon on the basis of force distribution of supporting legs in each gait model.
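    The waypoint navigation used to steer the robot along the reference path can be reduced to a simple heading-tracking loop: aim at the active waypoint, and advance to the next one once the robot is within a capture radius. The gain and radius below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def waypoint_heading_command(pose, waypoints, idx, capture_radius=0.2, k_turn=1.5):
    """Return (turn command, updated waypoint index) for waypoint following.

    pose      : (x, y, heading) of the robot
    waypoints : list of (x, y) reference points defining the path
    idx       : index of the waypoint currently being tracked
    """
    x, y, th = pose
    wx, wy = waypoints[idx]
    if np.hypot(wx - x, wy - y) < capture_radius and idx < len(waypoints) - 1:
        idx += 1                                    # waypoint reached: advance
        wx, wy = waypoints[idx]
    desired = np.arctan2(wy - y, wx - x)            # bearing to the active waypoint
    error = np.arctan2(np.sin(desired - th), np.cos(desired - th))  # wrapped heading error
    return k_turn * error, idx
```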

  17. Architectural Design for a Mars Communications and Navigation Orbital Infrastructure

    Science.gov (United States)

    Ceasrone R. J.; Hastrup, R. C.; Bell, D. J.; Roncoli, R. B.; Nelson, K.

    1999-01-01

    The planet Mars has become the focus of an intensive series of missions that span decades of time, a wide array of international agencies and an evolution from robotics to humans. The number of missions to Mars at any one time, and over a period of time, is unprecedented in the annals of space exploration. To meet the operational needs of this exploratory fleet will require the implementation of new architectural concepts for communications and navigation. To this end, NASA's Jet Propulsion Laboratory has begun to define and develop a Mars communications and navigation orbital infrastructure. This architecture will make extensive use of assets at Mars, as well as use of traditional Earth-based assets, such as the Deep Space Network, DSN. Indeed, the total system can be thought of as an extension of DSN nodes and services to the Mars in-situ region. The concept has been likened to the beginnings of an interplanetary Internet that will bring the exploration of Mars right into our living rooms. The paper will begin with a high-level overview of the concept for the Mars communications and navigation infrastructure. Next, the mission requirements will be presented. These will include the relatively near-term needs of robotic landers, rovers, ascent vehicles, balloons, airplanes, and possibly orbiting, arriving and departing spacecraft. Requirements envisioned for the human exploration of Mars will also be described. The important Mars orbit design trades on telecommunications and navigation capabilities will be summarized, and the baseline infrastructure will be described. A roadmap of NASA's plan to evolve this infrastructure over time will be shown. Finally, launch considerations and delivery to Mars will be briefly treated.

  18. Synthesis of a Controller for Swarming Robots Performing Underwater Mine Countermeasures

    National Research Council Canada - National Science Library

    Tan, Yong

    2004-01-01

    This Trident Scholar project involved the synthesis of a swarm controller that is suitable for controlling movements of a group of autonomous robots performing underwater mine countermeasures (UMCM...

  19. Information Fields Navigation with Piece-Wise Polynomial Approximation for High-Performance OFDM in WSNs

    Directory of Open Access Journals (Sweden)

    Wei Wei

    2013-01-01

    Full Text Available Since wireless sensor networks (WSNs) are increasingly being deployed in mission-critical applications, it becomes necessary to consider application requirements in the Internet of Things. We use WSNs to assist information query and navigation within a practical parking-space environment. Integrated with high-performance OFDM by piece-wise polynomial approximation, we present a new method based on a diffusion equation and a position equation to accomplish the navigation process conveniently and efficiently. From the point of view of theoretical analysis, our approach holds a lower constraint condition and several inappropriate navigation results can be amended. Information diffusion and a potential field are introduced to reach the goal of accurate navigation, and a gradient descent method is applied in the algorithm. Formula derivations and simulations show that the method facilitates the solution of typical sensor-network configuration information navigation. Concurrently, we also treat channel estimation and inter-carrier interference (ICI) mitigation for very high mobility OFDM systems, where the communication is between a base station and a mobile target in a severe scenario. The scheme proposed here uses piece-wise polynomial expansion to approximate the time variations of multipath channels. Two nearby symbols are used to estimate the first- and second-order parameters. To improve the estimation accuracy and mitigate the ICI caused by pilot-aided estimation, the multipath channel parameters are re-estimated in the time domain employing the decided OFDM symbol. Simulation results show that this method improves system performance in a complex environment.

  20. Research on the inspection robot for cable tunnel

    Science.gov (United States)

    Xin, Shihao

    2017-03-01

    The inspection robot consists of a mechanical obstacle-crossing part, a dual-mode communication part, a remote control part and monitoring software. The mechanical part mainly uses a tracked mobile mechanism, with an auxiliary swing arm to ease the design and installation of the robot. The communication part combines wired and wireless links, which greatly improves the communication range of the robot: wired communication is used when the robot is controlled beyond the wireless detection range, and wireless communication otherwise. The remote control part mainly handles the inspection robot's walking, navigation, positioning and pan-tilt platform control. To improve operational reliability, an industrial PC (IPC) is preliminarily selected as the control core, and a hierarchical program structure is adopted as the design basis for the mobile body. The monitoring software is the core part of the robot; it provides basic fault diagnosis in place of simple manual judgement, so that the robot acts as a remote actuator and staff only need to operate it remotely instead of being present at the scene. The four parts are independent of each other yet interrelated, achieving both structural independence and coherence and easing maintenance and coordination. With its real-time positioning and remote control functions, the robot greatly improves inspection operations. Remote monitoring avoids direct contact between staff and the line, thereby reducing accident casualties, which has far-reaching significance for the safety of inspection work.