WorldWideScience

Sample records for networked robotic cameras

  1. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile, reprogrammable tool that can take over human work according to user needs. A wireless network can be used for remote monitoring: the robot's movement can be tracked against a blueprint of the environment, so the path the robot has chosen can be traced, with all data sent over the wireless link. For visual feedback, the robot carries a high-resolution camera that lets the operator control the robot and observe its surroundings.

  2. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    Science.gov (United States)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and cause heavy losses. Firefighters often cannot reach the source of a fire because of building damage, very high temperatures, or the presence of explosive materials. Given these constraints and the high risk involved in fighting fires, a technological aid is necessary. This paper proposes a robot that extinguishes fires while being controlled from a safe distance, thereby reducing risk. The fire extinguisher robot uses a water pump as its actuator. Its movement is controlled from an Android smartphone over a Wi-Fi network, via a Wi-Fi module on the robot. User commands are sent to the robot's main microcontroller, an ATMega8, and translated into movement. The robot is equipped with a camera and ultrasonic sensors: the camera provides feedback to the user and helps locate the source of the fire, while the ultrasonic sensors prevent collisions during movement; the camera feed is displayed on the smartphone screen. In laboratory tests, the robot followed user commands such as turn right, turn left, forward, and backward, and the ultrasonic sensors reliably stopped the robot at distances below 15 cm. In the fire test, the robot successfully extinguished the fire.
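
    A minimal behavioral sketch of the command loop this abstract describes may help fix ideas: smartphone commands drive the motors, and the ultrasonic reading vetoes forward motion below the reported 15 cm threshold. The real implementation is C firmware on an ATMega8; the Python below and the names read_ultrasonic_cm() and drive() are illustrative assumptions only.

    ```python
    # Hypothetical sketch of the fire-robot command handling, not the authors' firmware.
    STOP_DISTANCE_CM = 15  # the abstract reports stopping below ~15 cm

    def handle_command(cmd, read_ultrasonic_cm, drive):
        """Translate a smartphone command into motion, honoring the sensor guard."""
        if cmd == "forward" and read_ultrasonic_cm() < STOP_DISTANCE_CM:
            drive(left=0.0, right=0.0)      # obstacle too close: stop instead
        elif cmd == "forward":
            drive(left=1.0, right=1.0)
        elif cmd == "backward":
            drive(left=-1.0, right=-1.0)
        elif cmd == "turn_left":
            drive(left=-0.5, right=0.5)
        elif cmd == "turn_right":
            drive(left=0.5, right=-0.5)
        else:
            drive(left=0.0, right=0.0)      # unknown command: stay put
    ```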

  3. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  4. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    Science.gov (United States)

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: costs must be reduced, and the quality and usefulness of robot services must be enhanced. Unfortunately, deploying robots and adapting their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots that is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services; they also support the movement of the robots, enabling navigation even when no maps are available. Deploying our system requires no expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal.

  6. Airborne Network Camera Standard

    Science.gov (United States)

    2015-06-01

    Optical Systems Group Document 466-15, Airborne Network Camera Standard. Distribution A: approved for public release. The document addresses airborne network camera systems, which have lacked a standardization focus for interoperable command and control, storage, and data streaming.

  7. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the results of research on friendly network robotics in fiscal 1996. The research takes an android robot as the ultimate goal and envisions future robot systems that exploit computer network technology. A robot intended to take over everyday human work in factories or in extreme environments must operate in ordinary human work settings, so a humanoid robot of similar size, shape, and function to a human being is desirable. Such a robot, with a head bearing two eyes, two ears, and a mouth, can hold a conversation with human beings, walk on two legs under autonomous adaptive control, and exhibit behavioral intelligence. Remote operation of the robot over a high-speed computer network is also possible. As a key technology for using such robots alongside people, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  8. Using a Robotic Camera in the Classroom.

    Science.gov (United States)

    Groom, Frank M.; Bellaver, Richard

    1997-01-01

    Describes how two professors used a robotic camera to tape their classes and improve their teaching style. The camera was found to be unobtrusive and useful in providing feedback; both professors adjusted their materials and delivery approach as a result. Tapes were used in later classes and for students who missed classes. (AEF)

  9. A Lane Following Mobile Robot Navigation System Using Mono Camera

    Science.gov (United States)

    Cho, Yeongcheol; Kim, Seungwoo; Park, Seongkeun

    2017-02-01

    In this paper, we develop a lane-following mobile robot that uses a mono camera. With the camera, the robot recognizes the lanes to its left and right and keeps to the center line of the track. We use the Hough transform to detect the lanes and a PID controller to steer the robot. The validity of our system is demonstrated on a real-world robot track built in our laboratory.
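
    The pipeline lends itself to a compact sketch: Hough-transform lane detection followed by PID steering toward the track center. The gains, thresholds and camera geometry below are illustrative assumptions, not the authors' values.

    ```python
    # Hedged sketch of Hough lane detection plus PID steering.
    import cv2
    import numpy as np

    KP, KI, KD = 0.8, 0.0, 0.2   # assumed PID gains
    integral, prev_err = 0.0, 0.0

    def lane_center_offset(frame):
        """Return normalized offset of the lane center from the image center."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=10)
        if lines is None:
            return 0.0
        xs = [(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]]
        return (np.mean(xs) - frame.shape[1] / 2) / (frame.shape[1] / 2)

    def pid_steer(err, dt=0.05):
        """Positive output means steer right; negative means steer left."""
        global integral, prev_err
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        return KP * err + KI * integral + KD * deriv
    ```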

  10. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
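
    As a rough software reference for the kind of low-level operation such an architecture implements in hardware, the sketch below computes a Sobel gradient magnitude over a grayscale image; a Sobel operator is assumed here, and the paper's actual edge-detection architecture may differ.

    ```python
    # Hedged software reference for hardware edge detection (assumed Sobel).
    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def sobel_magnitude(img):
        """Gradient magnitude by direct 3x3 convolution (2D grayscale input)."""
        h, w = img.shape
        out = np.zeros_like(img, dtype=float)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patch = img[y - 1:y + 2, x - 1:x + 2]
                gx = np.sum(patch * SOBEL_X)
                gy = np.sum(patch * SOBEL_Y)
                out[y, x] = np.hypot(gx, gy)
        return out
    ```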

  11. Structured Tracking for Safety, Security, and Privacy: Algorithms for Fusing Noisy Estimates from Sensor, Robot, and Camera Networks

    Science.gov (United States)

    2009-07-23

  12. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    This paper describes an intelligent space whose objective is to localize and control robots or robotic wheelchairs to help people. The intelligent space has 11 cameras distributed across two laboratories and a corridor. The cameras are fixed in the environment, and image capture is synchronized. The system is programmed as a client/server architecture with TCP/IP connections and a defined communication protocol: the client coordinates the activities inside the intelligent space, and the servers provide the information needed to do so. Since the cameras are used for localization, they have to be properly calibrated; therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot moves a calibration pattern throughout the cameras' fields of view, and the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm solves multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
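
    One building block of such a method, estimating a single camera's intrinsics from views of a pattern the robot carries through its field of view, can be sketched with OpenCV. The paper's joint multi-camera/odometry optimization is not reproduced here, and the board geometry is an assumption.

    ```python
    # Hedged sketch: per-camera intrinsic calibration from a robot-carried chessboard.
    import cv2
    import numpy as np

    BOARD = (7, 5)                      # inner corners per row/column (assumed)
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

    def calibrate(gray_images):
        """gray_images: list of 2D grayscale views of the moving pattern."""
        obj_pts, img_pts = [], []
        for gray in gray_images:
            found, corners = cv2.findChessboardCorners(gray, BOARD)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
        # Returns RMS error, camera matrix, distortion, and per-view poses.
        return cv2.calibrateCamera(obj_pts, img_pts,
                                   gray_images[0].shape[::-1], None, None)
    ```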

  15. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    A working group (WG) study was conducted with the aim of realizing human-type robots. Six working groups were organized to define technical development items and final technical targets: platform and remote attendance control in the fundamental field, and plant maintenance, home service, disaster response/construction, and entertainment in the application field. The platform WG is planning a human-like robot that walks on two legs and works with two arms, and discussed a height of 160 cm, a weight of 110 kg, a built-in LAN, actuator specifications, a modular structure, intelligent drivers, etc. The remote attendance control WG addresses remote control using working functions, stabilized movement, stabilized control, and networking; its studies covered a remote control cockpit based on an open architecture that allows functions to be added and reconfigured, problems in developing a standard language, etc. 77 refs., 82 figs., 21 tabs.

  16. A Mobile Robot Localization via Indoor Fixed Remote Surveillance Cameras.

    Science.gov (United States)

    Shim, Jae Hong; Cho, Young Im

    2016-02-04

    Localization, a technique required by service robots operating indoors, has been studied in various ways. Most localization techniques have the robot itself measure environmental information, but this is a high-cost option because it requires extensive onboard equipment and complicates robot development. If an external device determines the robot's location and transmits it to the robot, the cost of onboard location-recognition equipment can be reduced and robot development simplified. This study therefore presents an effective method for controlling robots by obtaining their location from a map constructed from the visual information of surveillance cameras installed indoors. With only a single image of an object, its size is difficult to gauge due to occlusion, so we propose a localization method that uses several neighboring surveillance cameras. A two-dimensional map containing robot and object positions is constructed from the camera images. The technique is based on modeling the four edges of the projected image of each camera's field of coverage, together with an image-processing algorithm that finds an object's center to improve the location estimates of objects of interest. We experimentally demonstrate the effectiveness of the proposed method by analyzing the movement of a robot in response to location information obtained from the two-dimensional map; the accuracy of the multi-camera setup was measured in advance.
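
    A common way to realize this kind of image-to-floor mapping is a per-camera homography built from four known floor correspondences, under which a detected object's image center projects to 2D map coordinates. The correspondences below are hypothetical values for illustration, not the authors' data.

    ```python
    # Hedged sketch: pixel-to-floor mapping via a homography per camera.
    import cv2
    import numpy as np

    # Pixel corners of the camera's floor coverage and their known floor
    # positions in meters (hypothetical values).
    px = np.float32([[0, 480], [640, 480], [600, 200], [40, 200]])
    floor = np.float32([[0, 0], [3.2, 0], [3.0, 4.5], [0.2, 4.5]])
    H, _ = cv2.findHomography(px, floor)

    def to_floor(u, v):
        """Project an image point (e.g., a detected object's center) to the map."""
        pt = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
        return pt[0, 0]          # (x, y) on the 2D floor map

    print(to_floor(320, 400))    # roughly the middle of the covered area
    ```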

  17. View from Above of Phoenix's Stowed Robotic Arm Camera

    Science.gov (United States)

    2008-01-01

    This artist's animation of an imaginary camera zooming in from above shows the location of the Robotic Arm Camera on NASA's Phoenix Mars Lander as it acquires an image of the scoop at the end of the arm. Located just beneath the Robotic Arm Camera lens, the scoop is folded in the stowed position, with its open end facing the Robotic Arm Camera. The last frame in the animation shows the first image taken by the Robotic Arm Camera, one day after Phoenix landed on Mars. In the center of the image is the robotic scoop the lander will use to dig into the surface, collect samples and touch water ice on Mars for the first time. The scoop is in the stowed position, awaiting deployment of the robotic arm. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Camera space control system for a mobile robot forklift

    Science.gov (United States)

    Miller, Richard K.; Stewart, D. G.; Brockman, W. H.; Skaar, Steven B.

    1993-05-01

    In this paper we present the method of camera-space manipulation for control of a mobile cart with an on-board robot. The objective is three-dimensional object placement. The robot-cart system is operated as a forklift. The cart has a rear wheel for steering and driving, two front wheels, and a tether allowing control from a remote computer. Two remotely placed CCTV cameras provide images for the control system. The method is illustrated experimentally by a box-stacking task. None of the components (cameras, robot-cart, or target box) is prepositioned. 'Ring cues' are placed on both boxes to simplify the image processing. A sequential estimation scheme solves the placement problem: it produces the control necessary to place the image of the grasped box at the relevant target-image position in each of the two-dimensional camera planes. This results in a precise and robust manipulation strategy.

  19. Mobile in vivo biopsy and camera robot.

    Science.gov (United States)

    Rentschler, Mark E; Dumpert, Jason; Platt, Stephen R; Farritor, Shane M; Oleynikov, Dmitry

    2006-01-01

    A mobile in vivo biopsy robot has been developed to perform a biopsy from within the abdominal cavity while being remotely controlled. This robot provides a platform for effectively sampling tissue. The robot has been used in vivo in a porcine model to biopsy portions of the liver and mucosa layer of the bowel. After reaching the specified location, the grasper was actuated to biopsy the tissue of interest. The biopsy specimens were gathered from the grasper after robot retraction from the abdominal cavity. This paper outlines the steps towards the successful design of an in vivo biopsy robot. The clamping forces required for successful biopsy are presented and in vivo performance of this robot is addressed.

  20. Positioning the laparoscopic camera with industrial robot arm

    DEFF Research Database (Denmark)

    Capolei, Marie Claire; Wu, Haiyan; Andersen, Nils Axel

    2017-01-01

    This paper introduces a solution for movement control of the laparoscopic camera using a teleoperated robotic assistant. The project proposes an autonomous robotic solution based on an industrial manipulator, provided with modular software that is applicable at large scale. The robot arm...... industrial robot arm is designated to accomplish this manipulation task. The software is implemented in ROS in order to facilitate future extensions. The experimental results show a manipulator capable of moving the surgical tool quickly and smoothly around a remote center of motion....

  1. Coordinated Sensing in Intelligent Camera Networks

    OpenAIRE

    Ding, Chong

    2013-01-01

    The cost and size of video sensors has led to camera networks becoming pervasive in our lives. However, the ability to analyze these images efficiently is very much a function of the quality of the acquired images. Human control of pan-tilt-zoom (PTZ) cameras is impractical and unreliable when high quality images are needed of multiple events distributed over a large area. This dissertation considers the problem of automatically controlling the fields of view of individual cameras in a camera...

  2. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Human tracking over camera networks is not only inherently challenging, owing to changing human appearance, but also has enormous potential for a wide range of practical applications, from security surveillance to retail and health care. This review surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of tracking within a camera are discussed from two perspectives, generative trackers and discriminative trackers. The core techniques of tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to identify existing problems, challenges, and future research directions based on an analysis of the current progress in human tracking over camera networks.

  3. Wireless cellular control of time-shared robotic web cameras

    Science.gov (United States)

    Abrams, David H.; Prokopowicz, Peter N.

    2003-01-01

    We present a novel user-interface and distributed imaging system for controlling robotic web-cameras via a wireless cellular phone. A user scrolls an image canvas to select a new live picture. The cellular phone application (a Java MIDlet) sends a URL request, which encodes the new pan/tilt/optical-zoom of a live picture, to a web-camera server. The user downloads a new live picture centered on the user's new viewpoint. The web-camera server mediates requests from users by time-sharing control of the physical robotic hardware. By processing a queue of user requests at different pan/tilt/zoom locations, the server can capture a single photograph for each user. While one user downloads a new live image, the robotic camera moves to capture images and service other independent user requests. The end-to-end system enables each user to independently steer the robotic camera, viewing live snapshot pictures from a cellular phone.
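
    The time-sharing scheme can be sketched as a request queue consumed by a single camera worker. The callables move_to(), grab_jpeg() and deliver() are placeholders for real camera and network I/O, not the paper's API.

    ```python
    # Hedged sketch of a time-shared PTZ camera server.
    import queue
    import threading

    requests = queue.Queue()          # (user_id, pan, tilt, zoom) tuples

    def camera_worker(move_to, grab_jpeg, deliver):
        """Visit each requested pose in turn, one snapshot per user."""
        while True:
            user_id, pan, tilt, zoom = requests.get()   # blocks until a request
            move_to(pan, tilt, zoom)                    # steer the physical camera
            deliver(user_id, grab_jpeg())               # send the snapshot back
            requests.task_done()

    def handle_url(user_id, params):
        """Parse a request like ?pan=10&tilt=-5&zoom=2 and enqueue it."""
        requests.put((user_id, float(params["pan"]),
                      float(params["tilt"]), float(params["zoom"])))

    def start(move_to, grab_jpeg, deliver):
        threading.Thread(target=camera_worker, daemon=True,
                         args=(move_to, grab_jpeg, deliver)).start()
    ```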

  4. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    Directory of Open Access Journals (Sweden)

    Yuxiang Yang

    2017-01-01

    In order to optimize three-dimensional (3D) reconstruction and obtain more precise actual distances to the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper. The whole system consists of two identical color cameras, a TOF depth camera, an image-processing host, a mobile robot control host, and a mobile robot. Because of structural constraints, the resolution of the TOF depth camera is very low, which hardly meets the requirements of trajectory planning. The resolution of binocular stereo cameras can be very high, but stereo matching performs poorly on low-texture scenes, so binocular stereo alone also struggles to meet the accuracy requirements. The proposed system therefore integrates the depth camera with stereo matching to improve the precision of the 3D reconstruction, and a double-thread processing method is applied to improve the system's efficiency. The experimental results show that the system effectively improves the accuracy of 3D reconstruction, accurately identifies the distance from the camera, and supports the trajectory-planning strategy.
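
    The double-thread idea might look like the following sketch: one thread serves TOF depth frames, another computes stereo disparity, and the results are fused. The capture and matching callables are assumptions, not the authors' code.

    ```python
    # Hedged sketch of double-thread capture with fusion on the caller's thread.
    import queue
    import threading

    depth_q, stereo_q = queue.Queue(maxsize=1), queue.Queue(maxsize=1)

    def depth_loop(grab_tof):
        while True:
            depth_q.put(grab_tof())          # low-resolution but metric depth

    def stereo_loop(grab_pair, match):
        while True:
            left, right = grab_pair()
            stereo_q.put(match(left, right)) # high-resolution disparity map

    def start(grab_tof, grab_pair, match):
        threading.Thread(target=depth_loop, args=(grab_tof,), daemon=True).start()
        threading.Thread(target=stereo_loop, args=(grab_pair, match),
                         daemon=True).start()

    def fuse_once(fuse):
        """Combine the latest depth and disparity into one 3D reconstruction."""
        return fuse(depth_q.get(), stereo_q.get())
    ```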

  5. Performance of Very Small Robotic Fish Equipped with CMOS Camera

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    2015-10-01

    Underwater robots are often used to investigate marine animals. Ideally, such robots should be shaped like fish so that they can easily go unnoticed by aquatic animals; in addition, lacking a screw propeller, a robotic fish is less likely to become entangled in algae and other plants. However, although such robots have been developed, their swimming speed is significantly lower than that of real fish. Since a robotic fish surveying actual fish would be required to follow them, the performance of the propulsion system must be improved. In the present study, a small robotic fish (SAPPA) was manufactured and its propulsive performance was evaluated. SAPPA was developed to swim in bodies of freshwater such as rivers and was equipped with a small CMOS camera with a wide-angle lens in order to photograph live fish. The maximum swimming speed of the robot was determined to be 111 mm/s, and its turning radius was 125 mm. Its power consumption was as low as 1.82 W. During trials, SAPPA succeeded in recognizing a goldfish and capturing an image of it with its CMOS camera.

  6. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertain camera orientation. The key feature of the analysis is a strict Lyapunov function that allows asymptotic stability to be concluded without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic proportional-derivative control algorithm. Experiments on a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  7. Calibration of an outdoor distributed camera network with a 3D point cloud.

    Science.gov (United States)

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Although camera calibration has been an extensively studied topic, developing practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to need frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
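
    One plausible core step, assuming correspondences between 2D image detections and known 3D map points, is a PnP solve for each camera's extrinsics. K and the correspondences are assumed inputs here; the paper's full automated pipeline is not reproduced.

    ```python
    # Hedged sketch: one camera's pose against a supporting 3D map via PnP.
    import cv2
    import numpy as np

    def camera_pose(pts3d, pts2d, K):
        """pts3d: Nx3 map points (meters); pts2d: Nx2 pixel detections."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(pts3d, np.float32),
            np.asarray(pts2d, np.float32),
            K, None, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)      # rotation, map frame -> camera frame
        return R, tvec                  # camera extrinsics w.r.t. the 3D map
    ```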

  9. Cooperative robots and sensor networks

    CERN Document Server

    Khelil, Abdelmajid

    2014-01-01

    Mobile robots and Wireless Sensor Networks (WSNs) have enabled great potential and a large space for ubiquitous and pervasive applications. Robotics and WSNs have mostly been considered as separate research fields, and little work has investigated the marriage between these two technologies. However, these two technologies share several features, enable common cyber-physical applications and provide complementary support to each other.
 The primary objective of the book is to provide a reference for cutting-edge studies and research trends pertaining to robotics and sensor networks, and in particular for the coupling between them. The book consists of five chapters. The first chapter presents a cooperation strategy for teams of multiple autonomous vehicles to solve the rendezvous problem. The second chapter is motivated by the need to improve existing solutions that deal with connectivity prediction, and proposes a genetic machine learning approach for link-quality prediction. The third chapter presents an arch...

  10. Depth camera driven mobile robot for human localization and following

    DEFF Research Database (Denmark)

    Skordilis, Nikolaos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2014-01-01

    In this paper the design and development of a mobile robot able to locate and then follow a human target is described. Both the integration of the required mechatronic components and the development of appropriate software are covered. The main sensor of the developed mobile robot is an RGB...... applied to data captured by a mobile platform. This work proposes the use of a specially tailored feed-forward neural network to further process the initial detections, identifying and rejecting most false positives. Experimental results based on two self-captured data sets show the improved detection rate...

  11. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    Science.gov (United States)

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, non-invasive head-tracking, without immersive virtual-reality devices, was combined and compared with classical control modes for robot movement and camera control. Three control conditions were tested: 1) classical joystick control of both the robot's movements and the robot camera; 2) joystick control of the robot's movements with the robot camera controlled by the user's head orientation; and 3) robot movements controlled by hand gestures with the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that controlling the robot camera by the user's head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
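
    Conditions 2 and 3 amount to mapping tracked head angles onto camera commands. A trivial hedged sketch, where the gain and mechanical limits are assumptions:

    ```python
    # Hedged sketch: head yaw/pitch mapped to clamped camera pan/tilt commands.
    def head_to_camera(yaw_deg, pitch_deg, gain=1.0,
                       pan_limits=(-170, 170), tilt_limits=(-30, 60)):
        """Clamp scaled head angles into the camera's mechanical range."""
        pan = max(pan_limits[0], min(pan_limits[1], gain * yaw_deg))
        tilt = max(tilt_limits[0], min(tilt_limits[1], gain * pitch_deg))
        return pan, tilt
    ```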

  12. Distributed Estimation and Control for Robotic Networks

    NARCIS (Netherlands)

    Simonetto, A.

    2012-01-01

    Mobile robots that communicate and cooperate to achieve a common task have been the subject of an increasing research interest in recent years. These possibly heterogeneous groups of robots communicate locally via a communication network and therefore are usually referred to as robotic networks.

  13. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm that uses multiple trajectories in a multiple-camera network with non-overlapping fields of view (FOV). Visible trajectories within a camera's FOV are assumed to be measured with respect to the camera's local coordinate system.

  14. Ocean Robotic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Oscar [Rutgers University

    2012-05-23

    We live on an ocean planet which is central to regulating the Earth’s climate and human society. Despite the importance of understanding the processes operating in the ocean, it remains chronically undersampled due to the harsh operating conditions. This is problematic given the limited long term information available about how the ocean is changing. The changes include rising sea level, declining sea ice, ocean acidification, and the decline of mega fauna. While the changes are daunting, oceanography is in the midst of a technical revolution with the expansion of numerical modeling techniques, combined with ocean robotics. Operating together, these systems represent a new generation of ocean observatories. I will review the evolution of these ocean observatories and provide a few case examples of the science that they enable, spanning from the waters offshore New Jersey to the remote waters of the Southern Ocean.

  15. A Unified Robotic Software Architecture for Service Robotics and Networks of Smart Sensors

    Science.gov (United States)

    Westhoff, Daniel; Zhang, Jianwei

    This paper proposes a novel architecture for the programming of multi-modal service robots and networked sensors. The presented software framework eases the development of high-level applications for distributed systems. The software architecture is based upon the Roblet-Technology, which is an exceptionally powerful medium in robotics. The possibility to develop, compile and execute an application on one workstation and distribute parts of a program based on the idea of mobile code is pointed out. Since the Roblet-Technology uses Java, development is independent of the operating system. The framework hides the network communication and therefore greatly improves the programming and testing of applications in service robotics. The concept is evaluated in the context of the service robot TASER of the TAMS Institute at the University of Hamburg. This robot consists of a mobile platform with two manipulators equipped with artificial hands. Several multimodal input and output devices for interaction round off the robot. Networked cameras in the working environment of TASER provide additional information to the robot. The integration of these smart sensors shows the extendability of the proposed concept to general distributed systems.

  16. Robotic Arm Camera on Mars, with Lights Off

    Science.gov (United States)

    2008-01-01

    This approximate color image is a view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) as seen by the lander's Surface Stereo Imager (SSI). This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. This yields proper coloring when imaging Phoenix's surrounding Martian environment. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  17. Wireless Visual Sensor Network Robots- Based for the Emulation of Collective Behavior

    Directory of Open Access Journals (Sweden)

    Fredy Hernán Martinez Sarmiento

    2012-03-01

    We consider the problem of emulating bacterial quorum sensing on small mobile robots. Robots that mimic the behavior of bacteria are designed as mobile wireless camera nodes, able to form a dynamic wireless sensor network. The emulated behavior corresponds to a simplification of bacterial quorum sensing, in which the action of a network node is conditioned on the population density of robots (nodes) in a given area. The population density is read visually using a camera: the robot estimates the density from the images and acts according to this information. The camera runs custom firmware, reducing the complexity of the node without loss of performance. Route planning and collective behavior of the robots were observed without the use of any other external or local communication, and neither a system model, precise state estimation, nor state feedback was necessary.
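
    The quorum rule itself is simple to sketch: count robots visible in the image, convert to a density, and switch behavior at a threshold. The detector, area estimate and threshold below are illustrative assumptions.

    ```python
    # Hedged sketch of a camera-based quorum-sensing rule.
    QUORUM_DENSITY = 0.5   # robots per square meter (illustrative value)

    def quorum_reached(detections, visible_area_m2):
        """detections: number of robots found in the current image."""
        return detections / visible_area_m2 >= QUORUM_DENSITY

    def step(detect_robots, visible_area_m2, act_alone, act_collectively):
        if quorum_reached(detect_robots(), visible_area_m2):
            act_collectively()   # density high enough: cooperative behavior
        else:
            act_alone()          # below quorum: default individual behavior
    ```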

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    Science.gov (United States)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image-processing library and cheap hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.
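
    In the same spirit, a minimal OpenCV reaction loop can threshold a colored target and steer toward its centroid. The HSV bounds and the discrete command set are assumptions for illustration, not the paper's algorithm.

    ```python
    # Hedged sketch: color-blob centroid drives a discrete steering command.
    import cv2
    import numpy as np

    LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # green-ish

    def steering_command(frame):
        """Return 'left', 'right', 'forward' or 'stop' from the target centroid."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        m = cv2.moments(mask)
        if m["m00"] < 1e3:
            return "stop"                       # target not visible
        cx = m["m10"] / m["m00"]                # centroid column
        third = frame.shape[1] / 3
        if cx < third:
            return "left"
        if cx > 2 * third:
            return "right"
        return "forward"
    ```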

  19. Auto-preview camera orientation for environment perception on a mobile robot

    Science.gov (United States)

    Radovnikovich, Micho; Vempaty, Pavan K.; Cheok, Ka C.

    2010-01-01

    Using wide-angle or omnidirectional camera lenses to increase a mobile robot's field of view introduces nonlinearity in the image due to the 'fish-eye' effect, which complicates distance perception and increases image-processing overhead. Using multiple cameras avoids the fish-eye complications but requires more electrical and processing power to interface them to a computer. By controlling the orientation of a single camera, both disadvantages are minimized while still allowing the robot to preview a wider area. In addition, controlling the orientation allows the robot to optimize its environment perception by looking only where the most useful information can be discovered. In this paper, a technique is presented that creates a two-dimensional map of objects of interest surrounding a mobile robot equipped with a panning camera on a telescoping shaft. Before attempting to negotiate a difficult path-planning situation, the robot takes snapshots at different camera heights and pan angles and then produces a single map of the surrounding area. Distance perception is performed by calibrating the camera and applying coordinate transformations to project the camera's findings into the vehicle's coordinate frame. To test the system, obstacles and lines were placed to form a chicane. Several snapshots were taken with different camera orientations, and the information from each was stitched together to yield a very useful map of the surrounding area for the robot to use in planning a path through the chicane.
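
    The projection step reduces to a rigid transform per snapshot: rotate each camera-frame point by the pan angle and shift by the mast height. Axis conventions in this sketch are assumptions.

    ```python
    # Hedged sketch of projecting camera-frame points into the vehicle frame.
    import numpy as np

    def camera_to_vehicle(p_cam, pan_rad, mast_height_m):
        """p_cam: (x, y, z) in the camera frame, z up, x forward at pan=0."""
        c, s = np.cos(pan_rad), np.sin(pan_rad)
        Rz = np.array([[c, -s, 0.0],
                       [s,  c, 0.0],
                       [0.0, 0.0, 1.0]])         # rotation about the mast axis
        t = np.array([0.0, 0.0, mast_height_m])  # camera sits atop the mast
        return Rz @ np.asarray(p_cam) + t

    # Points from snapshots at several pan angles can then be stitched into a
    # single vehicle-frame obstacle map.
    print(camera_to_vehicle((2.0, 0.0, -1.2), np.deg2rad(30), 1.2))
    ```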

  20. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of this article is to design a method for controlling an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and surveys current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network and the generation and filtering of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot that solves the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments and evaluation criteria was created. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.

  1. Cooperative robots and sensor networks 2014

    CERN Document Server

    Khelil, Abdelmajid

    2014-01-01

    This book is the second volume on Cooperative Robots and Sensor Networks. Its primary objective is to provide an up-to-date reference for cutting-edge studies and research trends related to mobile robots and wireless sensor networks, and in particular for the coupling between them. Indeed, mobile robots and wireless sensor networks have enabled great potential and a large space for ubiquitous and pervasive applications. Robotics and wireless sensor networks have mostly been considered as separate research fields, and little work has investigated the marriage between these two technologies. However, these two technologies share several features, enable common cyber-physical applications and provide complementary support to each other. The book consists of ten chapters, organized into four parts. The first part of the book presents three chapters related to localization of mobile robots using wireless sensor networks. Two chapters present new solutions based on the Extended Kalman Filter and Particle Fi...

  2. PHOENIX MARS ROBOTIC ARM CAMERA 5 ROUGHNESS OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  3. PHOENIX MARS ROBOTIC ARM CAMERA 5 REACHABILITY OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  4. PHOENIX MARS ROBOTIC ARM CAMERA 5 XYZ OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  5. PHOENIX MARS ROBOTIC ARM CAMERA 5 ANAGLYPH OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  6. PHOENIX MARS ROBOTIC ARM CAMERA 3 RADIOMETRIC OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  7. PHOENIX MARS ROBOTIC ARM CAMERA 2 EDR VERSION 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations EDR...

  8. PHOENIX MARS ROBOTIC ARM CAMERA 5 DISPARITY OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  9. PHOENIX MARS ROBOTIC ARM CAMERA 5 MOSAIC OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  10. PHOENIX MARS ROBOTIC ARM CAMERA 5 NORMAL OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  11. PHOENIX MARS ROBOTIC ARM CAMERA 5 RANGE OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  12. PHOENIX MARS ROBOTIC ARM CAMERA 4 LINEARIZED OPS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Operations RDR...

  13. PHOENIX MARS ROBOTIC ARM CAMERA 3 RADIOMETRIC SCI V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Robotic Arm Camera (RAC) experiment on the Mars Phoenix Lander consists of one instrument component plus command electronics. This RAC Imaging Science RDR data...

  14. Robotic Arm Camera on Mars with Lights On

    Science.gov (United States)

    2008-01-01

    This image is a composite view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) with its lights on, as seen by the lander's Surface Stereo Imager (SSI). This image combines images taken on the afternoon of Phoenix's 116th Martian day, or sol (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. When combined, it appears that all three sets of LEDs are on at the same time. This composite image is not true color. The streaks of color extending from the LEDs are an artifact from saturated exposure. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm
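
    The record is truncated, but a toy version of the proposed idea, assuming a planar scene, fixed camera positions and an angular field-of-view model, can be written as particle swarm optimization over the orientation vector. All geometry and PSO constants below are illustrative assumptions.

    ```python
    # Hedged toy PSO over fixed cameras' orientations to maximize coverage.
    import numpy as np

    rng = np.random.default_rng(0)
    cams = rng.uniform(0, 100, (5, 2))       # fixed camera positions
    targets = rng.uniform(0, 100, (200, 2))  # points to be covered
    HALF_FOV, N, STEPS = np.deg2rad(30), 30, 100

    def coverage(angles):
        """Fraction of targets seen by at least one camera for this orientation set."""
        d = targets[None, :, :] - cams[:, None, :]
        bearing = np.arctan2(d[..., 1], d[..., 0])       # angle camera -> target
        diff = np.abs((bearing - angles[:, None] + np.pi) % (2 * np.pi) - np.pi)
        return np.mean(np.any(diff < HALF_FOV, axis=0))

    x = rng.uniform(-np.pi, np.pi, (N, len(cams)))       # particle positions
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([coverage(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()

    for _ in range(STEPS):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([coverage(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()

    print("covered fraction:", coverage(gbest))
    ```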

  16. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  17. Depth Adaptive Zooming Visual Servoing for a Robot with a Zooming Camera

    OpenAIRE

    Jing Xin; Kemin Chen; Lei Bai; Ding Liu; Jian Zhang

    2013-01-01

    To solve the view visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth-adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. Firstly, a zoom control mechanism is introduced into the robot visual servoing system. It can dynamically adjust the camera's field of view to keep all the feature points on the object within the FOV of the camera and obtain high object local resolution at the end...

  18. Mobile in vivo camera robots provide sole visual feedback for abdominal exploration and cholecystectomy.

    Science.gov (United States)

    Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D

    2006-01-01

    The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.

  19. Cooperative robots and sensor networks 2015

    CERN Document Server

    Dios, JRamiro

    2015-01-01

    This book compiles some of the latest research in cooperation between robots and sensor networks. Structured in twelve chapters, this book addresses fundamental, theoretical, implementation and experimentation issues. The chapters are organized into four parts, namely multi-robot systems, data fusion and localization, security and dependability, and mobility.

  20. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    This paper deals with path planning and intelligent control of an autonomous robot that should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size, some of which are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a technique based on neural networks. Our method for constructing a collision-free path for a robot moving among obstacles is based on two neural networks: the first determines the “free” space using ultrasound range finder data, and the second “finds” a safe direction for the next section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented.
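
    The second network's role, mapping range readings to a safe direction, can be sketched as a small feed-forward pass. The weights here are random stand-ins, whereas the paper's networks are trained, and a first network classifies free space beforehand.

    ```python
    # Hedged sketch: tiny feed-forward net from sonar readings to a direction.
    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)   # 8 sonar inputs
    W2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)    # left/straight/right

    def safe_direction(ranges):
        """ranges: 8 normalized ultrasound distances in [0, 1]."""
        h = np.tanh(np.asarray(ranges) @ W1 + b1)
        scores = h @ W2 + b2
        return ["left", "straight", "right"][int(np.argmax(scores))]

    print(safe_direction([0.9, 0.8, 0.2, 0.1, 0.2, 0.8, 0.9, 1.0]))
    ```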

  2. ISOLDE target zone GPS robot Camera B Part1

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements mainly in close up moving a target along the corridor and onto a shelf position and vice versa. Close up GPS robot handling at exchange point. Movement GPS robot with target through the corridor. Close up robot cable guidance system. Close up posing target on the shelf position. Close up picking up a target from the shelf position and passing through corridor. Picking up a target from a shelf position seen from the target front end towards the zone entrance and taking it to the exchange point and vice versa. Checking activation: GPS robot picking up a target from the shelf and moving it in front of the radiation monitor and close up.

  3. ISOLDE target zone GPS robot, Camera B Part2

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements, mainly in close-up, moving a target along the corridor and onto a shelf position and vice versa. Close-up of GPS robot handling at the exchange point. Movement of the GPS robot with a target through the corridor. Close-up of the robot cable guidance system. Close-up of placing a target on the shelf position. Close-up of picking up a target from the shelf position and passing through the corridor. Picking up a target from a shelf position, seen from the target front end towards the zone entrance, and taking it to the exchange point and vice versa. Checking activation: the GPS robot picking up a target from the shelf and moving it in front of the radiation monitor, in close-up.

  4. ISOLDE target zone GPS robot, Camera B Part2 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements, mainly in close-up, moving a target along the corridor and onto a shelf position and vice versa. Close-up of GPS robot handling at the exchange point. Movement of the GPS robot with a target through the corridor. Close-up of the robot cable guidance system. Close-up of placing a target on the shelf position. Close-up of picking up a target from the shelf position and passing through the corridor. Picking up a target from a shelf position, seen from the target front end towards the zone entrance, and taking it to the exchange point and vice versa. Checking activation: the GPS robot picking up a target from the shelf and moving it in front of the radiation monitor, in close-up.

  5. ISOLDE target zone GPS robot Camera B Part1 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements, mainly in close-up, moving a target along the corridor and onto a shelf position and vice versa. Close-up of GPS robot handling at the exchange point. Movement of the GPS robot with a target through the corridor. Close-up of the robot cable guidance system. Close-up of placing a target on the shelf position. Close-up of picking up a target from the shelf position and passing through the corridor. Picking up a target from a shelf position, seen from the target front end towards the zone entrance, and taking it to the exchange point and vice versa. Checking activation: the GPS robot picking up a target from the shelf and moving it in front of the radiation monitor, in close-up.

  6. Robust and Accurate Multiple-Camera Pose Estimation toward Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-09-01

    Full Text Available Pose estimation methods in robotics applications frequently suffer from inaccuracy due to a lack of correspondences and real-time constraints, and from instability over a wide range of viewpoints. In this paper, we present a novel approach for simultaneously estimating the poses of all the cameras in a multi-camera system, in which each camera is rigidly mounted, using only a few coplanar points. Instead of directly solving for the orientation and translation of the multi-camera system from the overlapping point correspondences among all the cameras, we employ homography, which can map image points to 3D coplanar reference points. In our method, we first establish the corresponding relations between the cameras through their Euclidean geometries and optimize the homographies of the cameras; then, we solve for the orientation and translation from the optimal homographies. The results from simulations and real-case experiments show that our approach is accurate and robust for implementation in robotics applications. Finally, a practical implementation in a ping-pong robot is described in order to confirm the validity of our approach.
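
    As a rough illustration of pose recovery from a plane homography for a single camera (the paper's joint multi-camera optimization is omitted here), the sketch below uses OpenCV. The intrinsics and the four world-to-image correspondences are invented; a real implementation would also re-orthogonalize the rotation and check the sign of the translation.

        import cv2
        import numpy as np

        # Hypothetical intrinsics and four coplanar reference points (world
        # plane Z = 0, coordinates in metres) with measured image projections.
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        world_xy = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])
        image_pts = np.array([[300.0, 220], [420, 225], [415, 330], [305, 325]])

        # Homography mapping the reference plane into the camera image.
        H, _ = cv2.findHomography(world_xy, image_pts)

        # Classic plane-pose decomposition: H ~ K [r1 r2 t] for a Z = 0 plane.
        A = np.linalg.inv(K) @ H
        s = np.linalg.norm(A[:, 0])          # scale fixed by ||r1|| = 1
        r1, r2, t = A[:, 0] / s, A[:, 1] / s, A[:, 2] / s
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        print("camera orientation:\n", R, "\ntranslation:", t)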

  7. ISOLDE target zone GPS robot, Camera A Part1

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements along the corridor, picking up an ISOLDE target from one of the shelves behind the lead shielding doors and moving it to the exchange point. Several movements of the ISOLDE GPS robot, from different angles, with and without a target, along the corridor, as well as placing the target on the shelf, taking it from the shelf, and placing it at the exchange point.

  8. ISOLDE target zone GPS robot, Camera A Part2 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements along the corridor, picking up an ISOLDE target from one of the shelves behind the lead shielding doors and moving it to the exchange point. Several movements of the ISOLDE GPS robot, from different angles, with and without a target, along the corridor, as well as placing the target on the shelf, taking it from the shelf, and placing it at the exchange point.

  9. ISOLDE target zone GPS robot, Camera A Part2

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements along the corridor, picking up an ISOLDE target from one of the shelves behind the lead shielding doors and moving it to the exchange point. Several movements of the ISOLDE GPS robot, from different angles, with and without a target, along the corridor, as well as placing the target on the shelf, taking it from the shelf, and placing it at the exchange point.

  10. ISOLDE target zone GPS robot, Camera A Part1 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE GPS robot movements along the corridor, picking up an ISOLDE target from one of the shelves behind the lead shielding doors and moving it to the exchange point. Several movements of the ISOLDE GPS robot, from different angles, with and without a target, along the corridor, as well as placing the target on the shelf, taking it from the shelf, and placing it at the exchange point.

  11. Improving Robot Mobility by Combining Downward-Looking and Frontal Cameras

    Directory of Open Access Journals (Sweden)

    Ramon Gonzalez

    2016-11-01

    Full Text Available This paper presents a novel attempt to combine a downward-looking camera and a forward-looking camera for terrain classification in the field of off-road mobile robots. The first camera is employed to identify the terrain beneath the robot. This information is then used to improve the classification of the forthcoming terrain acquired from the frontal camera. This research also shows the usefulness of the Gist descriptor for terrain classification purposes. Physical experiments conducted in different terrains (quasi-planar terrains) and under different lighting conditions confirm the satisfactory performance of this approach in comparison with a simple color-based classifier based only on frontal images. Our proposal substantially reduces the misclassification rate of the color-based classifier (∼10% versus ∼20%).

  12. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    National Research Council Canada - National Science Library

    Yuxiang Yang; Xiang Meng; Mingyu Gao

    2017-01-01

    In order to optimize the three-dimensional (3D) reconstruction and obtain more precise actual distances of the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper...

  13. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  14. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
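
    The flavour of this formulation can be sketched as follows: binary variables select candidate camera poses, the objective combines a linear coverage term with a quadratic camera-to-camera bonus, and a greedy pass stands in for the exact BQP solver. The coverage matrix, pairwise bonus and budget are random placeholders, not data from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cams, n_locs, budget = 12, 30, 4
        cover = rng.random((n_cams, n_locs)) < 0.3       # cover[i, j]: pose i sees location j
        pair_bonus = rng.random((n_cams, n_cams)) * 0.1  # reward for complementary views

        def value(sel):
            # Linear coverage term + quadratic camera-to-camera term of the BQP.
            covered = cover[sel].any(axis=0).sum() if sel else 0
            quad = sum(pair_bonus[i, j] for i in sel for j in sel if i < j)
            return covered + quad

        # Greedy stand-in for the exact binary quadratic program.
        chosen = []
        for _ in range(budget):
            best = max((c for c in range(n_cams) if c not in chosen),
                       key=lambda c: value(chosen + [c]))
            chosen.append(best)
        print("selected camera poses:", sorted(chosen))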

  15. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  16. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
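
    The "simple analytic geometry" mentioned in the two records above can be sketched as a ray-plane intersection: back-project the clicked pixel into a viewing ray and intersect it with the floor plane. The intrinsics and camera height below are hypothetical, and a real system would also account for camera tilt.

        import numpy as np

        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # hypothetical intrinsics
        cam_height = 0.25   # camera height above the floor [m], assumed known

        def click_to_floor_point(u, v):
            # Back-project the clicked pixel into a viewing ray in camera
            # coordinates, then intersect it with the floor plane y = cam_height
            # (camera looking forward, y axis pointing down).
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
            if ray[1] <= 0:
                raise ValueError("ray does not hit the floor")
            t = cam_height / ray[1]
            return t * ray            # 3D point in camera coordinates

        print(click_to_floor_point(350, 300))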

  17. Depth Adaptive Zooming Visual Servoing for a Robot with a Zooming Camera

    Directory of Open Access Journals (Sweden)

    Jing Xin

    2013-02-01

    Full Text Available To solve the view visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth-adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. Firstly, a zoom control mechanism is introduced into the robot visual servoing system. It can dynamically adjust the camera's field of view to keep all the feature points on the object within the FOV of the camera and obtain high local resolution of the object at the end of visual servoing. Secondly, an invariant visual servoing method is employed to control the robot to the desired position under the changing intrinsic parameters of the camera. Finally, a nonlinear depth-adaptive estimation scheme in the invariant space, using Lyapunov stability theory, is proposed to adaptively estimate the depth of the image features on the object. Three kinds of 4-DOF robot visual positioning simulation experiments are conducted. The simulation results show that the proposed approach achieves higher positioning precision.

  18. Controlling a Robotic Stereo Camera Under Image Quantization Noise

    OpenAIRE

    Freundlich, Charles; Zhang, Yan; Zhu, Alex Zihao; Mordohai, Philippos; Zavlanos, Michael M.

    2017-01-01

    In this paper, we address the problem of controlling a mobile stereo camera under image quantization noise. Assuming that a pair of images of a set of targets is available, the camera moves through a sequence of Next-Best-Views (NBVs), i.e., a sequence of views that minimize the trace of the targets' cumulative state covariance, constructed using a realistic model of the stereo rig that captures image quantization noise and a Kalman Filter (KF) that fuses the observation history with new info...

  19. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.
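
    A rough sketch of the edge decision, with ORB descriptors and a match-count threshold standing in for the paper's compressed feature digest (the thresholds and the synthetic test frame are invented):

        import cv2
        import numpy as np

        def vision_graph_edge(img_a, img_b, min_matches=25):
            # Stand-in for the paper's fixed-length feature digest: extract ORB
            # features and declare a vision-graph edge between two cameras when
            # enough descriptors match.
            orb = cv2.ORB_create(nfeatures=500)
            _, des_a = orb.detectAndCompute(img_a, None)
            _, des_b = orb.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_a, des_b)
            good = [m for m in matches if m.distance < 40]
            return len(good) >= min_matches

        # Two hypothetical camera frames (here the same synthetic image, so an
        # edge is expected).
        frame = (np.random.default_rng(2).random((240, 320)) * 255).astype(np.uint8)
        print(vision_graph_edge(frame, frame))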

  20. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electronics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat'l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
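
    The geometric step after detection can be sketched with the pinhole relation: the pixel width of a known-size head-shoulder region gives range, and the horizontal offset of the box centre gives bearing. The focal length, assumed shoulder width and principal point below are illustrative, not the paper's calibration values.

        import numpy as np

        FOCAL_PX = 600.0          # hypothetical focal length [pixels]
        SHOULDER_WIDTH_M = 0.45   # assumed average head-shoulder width [m]
        IMAGE_CX = 320.0          # principal point x [pixels]

        def person_range_bearing(box_x, box_w):
            # Pinhole relation: pixel width of a known-size object gives range;
            # horizontal offset of the box centre gives bearing.
            distance = FOCAL_PX * SHOULDER_WIDTH_M / box_w
            bearing = np.arctan2(box_x + box_w / 2 - IMAGE_CX, FOCAL_PX)
            return distance, np.degrees(bearing)

        print(person_range_bearing(box_x=400, box_w=90))  # detection box from the SVM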

  1. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Science.gov (United States)

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  2. INDUSTRIAL ROBOT REPEATABILITY TESTING WITH HIGH SPEED CAMERA PHANTOM V2511

    Directory of Open Access Journals (Sweden)

    Jerzy Józwik

    2016-12-01

    Full Text Available Apart from accuracy, one of the parameters describing industrial robots is positioning repeatability. This parameter, which is the subject of this paper, is often the decisive factor in determining whether to apply a given robot to certain tasks. Articulated robots are predominantly used in processes such as spot welding, transport of materials and other welding applications, where high positioning repeatability is required. It is therefore essential to characterize the parameter in question and to control it throughout the operation of the robot. This paper presents a methodology for robot positioning repeatability measurements based on a vision technique. The measurements were conducted with a Phantom v2511 high-speed camera and TEMA Motion software for motion analysis. The object of the measurements was a 6-axis Yaskawa Motoman HP20F industrial robot. The results obtained in the tests provided data for the calculation of the positioning repeatability of the robot, which was then juxtaposed against the robot specifications. The impact of the direction of displacement on the value of the attained pose errors was also analysed. Test results are given in graphic form.

  3. Protocols for Robotic Telescope Networks

    Directory of Open Access Journals (Sweden)

    Alain Klotz

    2010-01-01

    This paper is addressed to astronomers who are not specialists in computer science. We explain some basic and advanced protocols for receiving events and how to implement them in robotic observatory software. We describe messages such as GCN notices, VOEvents or RTML, and protocols such as CGI, HTTP, SOAP, RSS, and XMPP.

  4. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; it is used to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more optimal camera mounting angle is determined by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. To stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented in a real-time experiment.

  5. Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera

    Science.gov (United States)

    Rituerto, Alejandro; Andreasson, Henrik; Murillo, Ana C.; Lilienthal, Achim; Guerrero, José Jesús

    2016-01-01

    Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the process of vocabulary construction and (2) geometric cues into the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set. PMID:27070607

  6. Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera

    Directory of Open Access Journals (Sweden)

    Alejandro Rituerto

    2016-04-01

    Full Text Available Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the process of vocabulary construction and (2) geometric cues into the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set.

  7. Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera.

    Science.gov (United States)

    Rituerto, Alejandro; Andreasson, Henrik; Murillo, Ana C; Lilienthal, Achim; Guerrero, José Jesús

    2016-04-07

    Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To solve this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the process of vocabulary construction and (2) geometric cues into the appearance descriptors. Motivated by long term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points to the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit of the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available data-set.

  8. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
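
    A toy version of the idea, with a scikit-learn MLP standing in for the authors' specialized architecture: synthetic calibration samples relate observed gray values, temperature and exposure to true intensity, and the network learns the inverse mapping. The dark-current model and all constants are invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic training data standing in for calibration measurements:
        # observed gray value depends nonlinearly on true intensity, sensor
        # temperature and exposure time (model and constants are invented).
        rng = np.random.default_rng(3)
        intensity = rng.uniform(0, 1, 5000)
        temp = rng.uniform(20, 60, 5000)
        exposure = rng.uniform(1, 20, 5000)
        dark = 0.002 * exposure * np.exp(0.04 * (temp - 20))       # dark current
        gray = intensity**0.9 + dark + rng.normal(0, 0.005, 5000)  # observed value

        # Learn the inverse mapping (gray, temp, exposure) -> true intensity.
        X = np.column_stack([gray, temp, exposure])
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(X, intensity)
        print("calibrated:", model.predict([[0.55, 45.0, 10.0]]))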

  9. Robotic Astronomy and the BOOTES Network of Robotic Telescopes

    Directory of Open Access Journals (Sweden)

    A. J. Castro-Tirado

    2011-01-01

    Full Text Available The Burst Observer and Optical Transient Exploring System (BOOTES) started in 1998 as a Spanish-Czech collaboration project devoted to the study of optical emissions from gamma-ray bursts (GRBs) that occur in the Universe. The first two BOOTES stations were located in Spain and included medium-size robotic telescopes with CCD cameras at the Cassegrain focus as well as all-sky cameras, with the two stations located 240 km apart. The first observing station (BOOTES-1) is located at ESAt (INTA-CEDEA) in Mazagón (Huelva) and obtained first light in July 1998. The second observing station (BOOTES-2) is located at La Mayora (CSIC) in Málaga and has been fully operational since July 2001. In 2009 BOOTES expanded abroad, with the third station (BOOTES-3) being installed in Blenheim (South Island, New Zealand) as a result of a collaboration project with several institutions from the southern hemisphere. The fourth station (BOOTES-4) is on its way, to be deployed in 2011.

  10. Indoor SLAM Using Laser and Camera with Closed-Loop Controller for NAO Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2014-01-01

    Full Text Available We present a SLAM with closed-loop controller method for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system. The camera is used to recognize the landmarks, whereas the laser provides the information for simultaneous localization and mapping (SLAM). The K-means clustering method is implemented to extract data from different objects. In addition, the robot avoids obstacles by means of an avoidance function. The closed-loop controller reduces the error between the real position and the estimated position. Finally, simulation and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.

  11. Barrier Coverage for 3D Camera Sensor Networks.

    Science.gov (United States)

    Si, Pengju; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-08-03

    Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors deployed to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically assume that each intruder can be treated as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder's face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks.

  12. Barrier Coverage for 3D Camera Sensor Networks

    Science.gov (United States)

    Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-01-01

    Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors deployed to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically assume that each intruder can be treated as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167
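
    The resolution criterion can be sketched with a one-line pinhole check: a crossing point is adequately covered only if the intruder's face spans enough pixels at its depth (all constants below are illustrative):

        def face_resolution_ok(f_px, face_width_m, depth_m, min_pixels=40):
            # Pinhole projection: pixels spanned by a face of known width at a
            # given depth; the barrier is "covered" only if every crossing point
            # yields at least min_pixels.
            return f_px * face_width_m / depth_m >= min_pixels

        print(face_resolution_ok(f_px=800, face_width_m=0.16, depth_m=3.0))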

  13. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Full Text Available Existing surveillance systems impose a high level of security on humans but pay little attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material. It is therefore imperative to ensure the detection of stray dogs for necessary corrective action. In this paper, a novel composite approach to detect the presence of stray dogs is proposed. The captured frame from the surveillance camera is initially pre-processed using a Gaussian filter to remove noise. The foreground object of interest is extracted utilizing the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, which derives the shape and size information of the extracted foreground object. Finally, stray dogs are classified from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV. Further, it is validated with real-time video feeds taken from an existing surveillance system. From the results obtained, it is found that a classification accuracy of about 96% is achieved. This encourages the utilization of the proposed composite algorithm in real-time surveillance systems.

  14. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the position where the shuttlecock will fall and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; this acts as background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
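
    A self-contained sketch of the two-camera pipeline: synthesize two consecutive shuttlecock positions, project them into two hypothetical calibrated cameras, triangulate them back with OpenCV, and extrapolate a landing point. The camera poses, frame rate and the drag-free ballistic model are simplifications; real shuttlecocks decelerate strongly.

        import cv2
        import numpy as np

        K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
        # Both cameras look along world +x with world z up (poses hypothetical).
        R = np.array([[0.0, -1, 0], [0, 0, -1], [1, 0, 0]])

        def proj_matrix(cam_pos):
            t = -R @ np.asarray(cam_pos, dtype=float)
            return K @ np.hstack([R, t.reshape(3, 1)])

        P1, P2 = proj_matrix([0, -2.0, 1.0]), proj_matrix([0, -1.5, 1.0])  # 0.5 m baseline

        def pixel(P, Xw):
            x = P @ np.append(Xw, 1.0)
            return (x[:2] / x[2]).reshape(2, 1)

        def triangulate(uv1, uv2):
            X = cv2.triangulatePoints(P1, P2, uv1, uv2)
            return (X[:3] / X[3]).ravel()

        # Simulate two consecutive shuttlecock positions at 500 fps, project
        # them into both cameras, and triangulate back to 3D.
        X0, X1, dt, g = np.array([4.0, 0.2, 2.5]), np.array([4.02, 0.2, 2.497]), 1 / 500, 9.81
        p0 = triangulate(pixel(P1, X0), pixel(P2, X0))
        p1 = triangulate(pixel(P1, X1), pixel(P2, X1))

        # Drag-free ballistic extrapolation to height z = 0 (a crude stand-in
        # for real shuttlecock aerodynamics).
        v = (p1 - p0) / dt
        t_fall = (v[2] + np.sqrt(v[2] ** 2 + 2 * g * p1[2])) / g
        print("predicted landing (x, y):", p1[:2] + v[:2] * t_fall)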

  15. An Improved Indoor Robot Human-Following Navigation Model Using Depth Camera, Active IR Marker and Proximity Sensors Fusion

    National Research Council Canada - National Science Library

    Mark Tee Kit Tsun; Bee Theng Lau; Hudyjaya Siswoyo Jo

    2018-01-01

    ... model, based on multi-sensor fusion, using Microsoft Robotics Developer Studio 4 (MRDS). The model relies on a depth camera, a limited array of proximity sensors and an active IR marker tracking system...

  16. The NASA Fireball Network All-Sky Cameras

    Science.gov (United States)

    Suggs, Rob M.

    2011-01-01

    The construction of small, inexpensive all-sky cameras designed specifically for the NASA Fireball Network is described. The use of off-the-shelf electronics, optics, and plumbing materials results in a robust and easy to duplicate design. Engineering challenges such as weather-proofing and thermal control and their mitigation are described. Field-of-view and gain adjustments to assure uniformity across the network will also be detailed.

  17. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    Science.gov (United States)

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2017-11-03

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain the patient's position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration, rather than using facial anatomic landmarks. Fine registration is then achieved by registering the high-coverage head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error for 8 different patient positions, measured with targets inside a head phantom, was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
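
    The coarse alignment step can be illustrated with the Kabsch/Procrustes solution for a rigid transform between a handful of corresponding points (here synthetic stand-ins for points extracted from the drawn mark; the paper's surface reconstruction and fine registration are omitted):

        import numpy as np

        def rigid_align(src, dst):
            # Kabsch/Procrustes: least-squares rotation + translation mapping
            # src points onto dst, with a reflection correction.
            cs, cd = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            Rm = Vt.T @ D @ U.T
            return Rm, cd - Rm @ cs

        # Hypothetical mark points in robot space and their matched positions
        # in image space (generated here from a known transform for checking).
        rng = np.random.default_rng(5)
        src = rng.random((5, 3))
        theta = 0.3
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
        dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])

        R_est, t_est = rigid_align(src, dst)
        print("rotation error:", np.linalg.norm(R_est - R_true))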

  18. Decentralized tracking of humans using a camera network

    Science.gov (United States)

    Gruenwedel, Sebastian; Jelaca, Vedran; Niño-Castañeda, Jorge Oswaldo; Van Hese, Peter; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2012-01-01

    Real-time tracking of people has many applications in computer vision, for instance in surveillance, domotics, elderly care and video conferencing, and typically requires multiple cameras. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions that scale well with the size of the camera network. Such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high-level data based on low-bandwidth input streams from the cameras. This is achieved by performing tracking first on the image plane of each camera and then sending only metadata to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.

  19. Handling uncertainty and networked structure in robot control

    CERN Document Server

    Tamás, Levente

    2015-01-01

    This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...

  20. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    Science.gov (United States)

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted. PMID:22294927

  1. Comparison of the FreeHand® robotic camera holder with human assistants during endoscopic extraperitoneal radical prostatectomy.

    Science.gov (United States)

    Stolzenburg, Jens-Uwe; Franz, Toni; Kallidonis, Panagiotis; Minh, Do; Dietel, Anja; Hicks, James; Nicolaus, Martin; Al-Aown, Abdulrahman; Liatsikos, Evangelos

    2011-03-01

    • To assess, in a prospective randomized study, the efficiency of the FreeHand® robotic camera holder (Prosurgics Ltd, Bracknell, UK) compared to manual camera control during the performance of endoscopic extraperitoneal radical prostatectomy (EERPE). • Three surgeons performed 50 EERPE procedures for localized prostate cancer. In group A (n= 25), procedures were performed with manual control of the camera by the assistant, whereas group B (n= 25) patients were treated with the assistance of the FreeHand® robotic device. • The EERPE procedure was divided into several steps. • Total operation duration, time for each surgical step, number of camera movements, number of movement errors, number of times the lens was cleaned, blood loss and margin status were compared. • No statistically significant difference was observed in terms of patient age, preoperative prostate-specific antigen level, Gleason score, positive cores and prostate volume. • The average operation duration required for the performance of each step did not differ significantly between the two groups. • Significant differences in favour of the FreeHand® camera holder were observed for horizontal and zooming camera movements, camera cleaning and camera errors. • Vertical camera movements were performed significantly faster by the human assistant than by the robotic camera holder. • The average total operation duration was similar for both groups. • Positive surgical margins were detected in one patient in each group (4% of the patients). • A comparison of the FreeHand® robotic camera holder with human camera control during EERPE showed a similar time requirement for the performance of each step of the procedure. • The robotic system provided accurate and fast movements of the camera without compromising the outcome of the procedure. © 2010 THE AUTHORS. BJU INTERNATIONAL © 2010 BJU INTERNATIONAL.

  2. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D...

  3. Equipment to Support Development of Neuronal Network Controlled Robots

    Science.gov (United States)

    2016-06-25

    With this award, our team purchased an ALA 2-channel stimulus generator, an ALA 60-channel amplifier with pre-filter, a laser cutter, and a Rethink Robotics Baxter Robot. This equipment supported two ARO awards, a DARPA award and two NSF-funded projects.

  4. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an...
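
    The ground- and ceiling-removal step can be sketched on a synthetic point cloud: dropping points near the two planes leaves separated candidate clusters for the detector cascade. The thresholds and the cloud are invented; plane fitting, clustering and the cascade itself are omitted.

        import numpy as np

        rng = np.random.default_rng(7)
        # Synthetic point cloud [m]: floor at z = 0, ceiling at z = 2.5, one "person".
        floor = np.column_stack([rng.uniform(-3, 3, 500), rng.uniform(0, 5, 500),
                                 rng.normal(0, 0.02, 500)])
        ceiling = floor + [0, 0, 2.5]
        person = np.column_stack([rng.normal(1, 0.2, 300), rng.normal(2, 0.2, 300),
                                  rng.uniform(0.1, 1.8, 300)])
        cloud = np.vstack([floor, ceiling, person])

        # Remove points within 10 cm of the floor and ceiling planes so the
        # remaining points separate into candidate clusters.
        z = cloud[:, 2]
        candidates = cloud[(z > 0.1) & (z < 2.4)]
        print(f"{len(cloud)} points -> {len(candidates)} candidate points")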

  5. A Review on Sensor Network Issues and Robotics

    Directory of Open Access Journals (Sweden)

    Ji Hyoung Ryu

    2015-01-01

    Full Text Available The interaction of distributed robotics and wireless sensor networks has led to the creation of mobile sensor networks. There has been increasing interest in building mobile sensor networks, and they are the favored class of WSNs in which mobility plays a key role in the execution of an application. More and more research focuses on the development of mobile wireless sensor networks (MWSNs) due to their favorable advantages and applications. In WSNs, robotics can play a crucial role, and integrating static nodes with mobile robots enhances the capabilities of both types of devices and enables new applications. In this paper we present an overview of mobile sensor networks in robotics, robotics in sensor networks, and robotic sensor network applications.

  6. Networked Sensor - Aided Tracking of Walking Human in Robotic Space

    Directory of Open Access Journals (Sweden)

    Taeseok Jin

    2013-01-01

    Full Text Available The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support humans effectively. To realize this, it is necessary for a robot to carry out human tracking as one of its human-affinitive movements. In this research, a predictable robotic space is introduced in order for a robot to follow a walking human along the shortest-time trajectory. The mobile robot is controlled to follow the walking human using distributed networked sensors. The moving object is assumed to be a point-object and projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the robotic space. Computer simulation and experimental results on the mobile robot's success in estimating the position of and following a walking human are presented.

  7. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model, and by using the real-time color and depth data, the robots with shared fields of view estimate their relative poses pairwise. The system does not need the existence of a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes working loads evenly in the system, hence it is scalable, and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed both on synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses.

  8. Formation control for a network of small-scale robots.

    Science.gov (United States)

    Kim, Yoonsoo

    2014-10-01

    In this paper, a network of small-scale robots (typically centimeter-scale robots) equipped with artificial actuators such as electric motors is considered. The purpose of this network is to have the robots keep a certain formation shape (or change to another formation shape) during maneuvers. The network has a fixed communication topology in the sense that robots have a fixed group of neighbors to communicate during maneuvers. Assuming that each robot and its actuator can be modeled as a linear system, a decentralized control law (such that each robot activates its actuator based on the information from its neighbors only) is introduced to achieve the purpose of formation keeping or change. A linear matrix inequality (LMI) for deriving the upper bound on the actuator's time constant is also presented. Simulation results are shown to demonstrate the merit of the introduced control law.
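
    A minimal consensus-style sketch of decentralized formation keeping: each robot steers the displacement to each neighbor toward a desired offset, using neighbor information only. The gain, ring topology and square formation are invented, and the paper's actuator dynamics and LMI-derived bound are omitted.

        import numpy as np

        offsets = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])   # desired square shape
        neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]} # fixed ring topology
        pos = np.random.default_rng(4).random((4, 2)) * 5        # initial positions
        k, dt = 1.0, 0.05

        for _ in range(400):
            vel = np.zeros_like(pos)
            for i, nbrs in neighbors.items():
                for j in nbrs:
                    # Drive the inter-robot displacement toward the desired
                    # offset using neighbor information only (decentralized).
                    vel[i] += k * ((pos[j] - pos[i]) - (offsets[j] - offsets[i]))
            pos += vel * dt

        print(np.round(pos - pos[0] + offsets[0], 2))  # recovers the square shape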

  9. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements of them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  10. Laparoscopic cholecystectomy as solo surgery with the aid of a robotic camera holder: a case-control study.

    Science.gov (United States)

    Kalteis, Manfred; Pistrich, Renate; Schimetta, Wolfgang; Pölz, Werner

    2007-08-01

    By using robotic camera holders, a laparoscopic cholecystectomy (LC) can be performed as a solo-surgeon operation. The purpose of this paper is to examine the safety and efficiency of solo-surgeon LCs. A series of 72 solo-surgeon LCs was retrospectively compared with a control cohort (matched pairs). Efficiency and safety parameters were compared by means of equivalence tests (scope=10%). Nearly identical incision-suture times (means: 69.6 vs. 70.7 min) were recorded. An equivalence was also found in the cohorts for the total time in the operating room (means: 117.4 vs. 117.2 min). In terms of the rate of complications, the perioperative difference in hemoglobin, and the conversion rate, the robot cohort proved to be at least equal to the control cohort. The postoperative hospital stay was shorter for the robot cohort. Solo-surgeon LC with a robotic camera holder is an efficient and safe method.

  11. Robotic Camera Assistance and Its Benefit in 1033 Traditional Laparoscopic Procedures: Prospective Clinical Trial Using a Joystick-guided Camera Holder.

    Science.gov (United States)

    Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter

    2014-11-01

    Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: the camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare the robotic assistance to human assistance. 1033 consecutive laparoscopic procedures were performed assisted by the SoloAssist®. Failures and aborts were documented and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse angular degrees. One case of short circuit was reported. Emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons prefer robotic to human assistance, mostly because of a steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures. Emergency shutdown was necessary in only three cases. Some minor weak spots could have been identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable and further development is desirable, but it cannot fully replace a human assistant.

  12. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, as well as group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before the design and manufacturing of robots. The results of the project therefore have the potential to reduce the cost and improve the efficiency of robot design and building.
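
    The record gives no implementation details; the following is a generic Kohonen self-organizing map update, the unsupervised rule the abstract refers to, sketched in Python with grid size, rates, and inputs as illustrative assumptions.

        import numpy as np

        GRID = (10, 10)        # 10x10 map of units
        DIM = 2                # input dimension, e.g. 2-D positions sensed by robots
        weights = np.random.rand(*GRID, DIM)
        coords = np.dstack(np.meshgrid(np.arange(GRID[0]), np.arange(GRID[1]),
                                       indexing="ij"))

        def train_step(x, t, t_max, lr0=0.5, sigma0=3.0):
            """One SOM step: find the best-matching unit, pull its neighborhood toward x."""
            lr = lr0 * (1.0 - t / t_max)                    # decaying learning rate
            sigma = sigma0 * (1.0 - t / t_max) + 1e-3       # shrinking neighborhood
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), GRID)  # best-matching unit
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))       # neighborhood function
            weights[...] = weights + lr * h[..., None] * (x - weights)

        for t in range(1000):                               # toy training loop
            train_step(np.random.rand(DIM), t, 1000)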

  13. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. Two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel. Another 2 CCD cameras are assigned to reading the indication values of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line. At the remote control site, the radiation situation on the unit 2 reactor building refueling floor can be perceived by monitoring the camera image. To build the radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
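
    The paper's pipeline is not reproduced here, but the idea can be illustrated with the Tesseract engine via pytesseract (an assumed dependency): crop the dosimeter indicator from the frame, restrict recognition to digits, and parse the reading. The crop box and units are purely illustrative.

        import re
        from PIL import Image
        import pytesseract  # assumed dependency wrapping the Tesseract OCR engine

        def read_dose_rate(frame_path, box=(100, 50, 400, 150)):
            """Crop the dosimeter indicator from a camera frame and OCR its reading.

            `box` is an illustrative (left, upper, right, lower) crop; in practice
            it would come from the known geometry of the dosimeter in the image.
            Returns the dose rate as a float (e.g. mSv/h) or None if nothing parses.
            """
            img = Image.open(frame_path).crop(box).convert("L")  # grayscale crop
            # Restrict Tesseract to digits and a decimal point to cut misreads.
            text = pytesseract.image_to_string(
                img, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
            match = re.search(r"\d+(\.\d+)?", text)
            return float(match.group()) if match else None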

  14. Multi-robot Coordination by using Cellular Neural Networks

    Directory of Open Access Journals (Sweden)

    A. Gacsadi

    2008-05-01

    Full Text Available Vision-based algorithms for multi-robot coordination are presented in this paper. Cellular Neural Network (CNN) processing techniques are used for real-time motion planning of the robots. The CNN methods are considered an advantageous solution for image processing in autonomous mobile robot guidance.

  15. Passivity-based control and estimation in networked robotics

    CERN Document Server

    Hatanaka, Takeshi; Fujita, Masayuki; Spong, Mark W

    2015-01-01

    Highlighting the control of networked robotic systems, this book synthesizes a unified passivity-based approach to an emerging cross-disciplinary subject. Thanks to this unified approach, readers can access various state-of-the-art research fields by studying only the background foundations associated with passivity. In addition to the theoretical results and techniques, the authors provide experimental case studies on testbeds of robotic systems including networked haptic devices, visual robotic systems, robotic network systems and visual sensor network systems. The text begins with an introduction to passivity and passivity-based control together with the other foundations needed in this book. The main body of the book consists of three parts. The first examines how passivity can be utilized for bilateral teleoperation and demonstrates the inherent robustness of the passivity-based controller against communication delays. The second part emphasizes passivity’s usefulness for visual feedback control ...

  16. NRES: The Network of Robotic Echelle Spectrographs

    Science.gov (United States)

    Siverd, Robert; Brown, Timothy M.; Henderson, Todd; Hygelund, John; Barnes, Stuart; Bowman, Mark; De Vera, Jon; Eastman, Jason D.; Kirby, Annie; Norbury, Martin; Smith, Cary; Taylor, Brook; Tufts, Joseph; Van Eyken, Julian C.

    2017-06-01

    Las Cumbres Observatory (LCO) is building the Network of Robotic Echelle Spectrographs (NRES), which will consist of four to six identical, optical (390 - 860 nm) high-precision spectrographs, each fiber-fed simultaneously by up to two 1-meter telescopes and a Thorium-Argon calibration source. We plan to install one at up to 6 observatory sites in the Northern and Southern hemispheres, creating a single, globally-distributed, autonomous spectrograph facility using up to ten 1-m telescopes. Simulations suggest we will achieve long-term radial velocity precision of 3 m/s in less than an hour for stars brighter than V = 11 or 12. Following a few months of on-sky evaluation at our BPL test facility, the first spectrograph unit was shipped to CTIO in late 2016 and installed in March 2017. Barring serious complications, we expect regular scheduled science observing to begin in mid-2017. Three additional units are in building or testing phases and slated for deployment in late 2017. Acting in concert, these four spectrographs will provide a new, unique facility for stellar characterization and precise radial velocities. We will briefly overview the LCO telescope network, the NRES spectrograph design, the advantages it provides, and development challenges we encountered along the way. We will further discuss real-world performance from our first unit, initial science results, and the ongoing software development effort needed to automate such a facility for a wide array of science cases.

  17. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    Full Text Available The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method directly controls the original system without any state or input transformation, which effectively avoids singularity. Under the proposed control law, it is rigorously proved that all the states of the closed-loop system can be stabilized to a prescribed, arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to solve the practical stabilization problem for a class of mobile robots with uncertain parameters (and angle measurement disturbance) that appeared in the literature, such as Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  18. NRES: The Network of Robotic Echelle Spectrographs

    Science.gov (United States)

    Siverd, Robert; Brown, Tim; Henderson, Todd; Hygelund, John; Barnes, Stuart; de Vera, Jon; Eastman, Jason; Kirby, Annie; Smith, Cary; Taylor, Brook; Tufts, Joseph; van Eyken, Julian

    2018-01-01

    Las Cumbres Observatory (LCO) is building the Network of Robotic Echelle Spectrographs (NRES), which will consist of four (up to six in the future) identical, optical (390 - 860 nm) high-precision spectrographs, each fiber-fed simultaneously by up to two 1-meter telescopes and a Thorium-Argon calibration source. We plan to install one at up to 6 observatory sites in the Northern and Southern hemispheres, creating a single, globally-distributed, autonomous spectrograph facility using up to ten 1-m telescopes. Simulations suggest we will achieve long-term radial velocity precision of 3 m/s in less than an hour for stars brighter than V = 11 or 12 once the system reaches full capability. Acting in concert, these four spectrographs will provide a new, unique facility for stellar characterization and precise radial velocities. Following a few months of on-sky evaluation at our BPL test facility, the first spectrograph unit was shipped to CTIO in late 2016 and installed in March 2017. After several more months of additional testing and commissioning, regular science operations began with this node in September 2017. The second NRES spectrograph was installed at McDonald Observatory in September 2017 and released to the network after its own brief commissioning period, extending spectroscopic capability to the Northern hemisphere. The third NRES spectrograph was installed at SAAO in November 2017 and released to our science community just before year's end. The fourth NRES unit shipped in October and is currently en route to Wise Observatory in Israel, with an expected release to the science community in early 2018. We will briefly overview the LCO telescope network, the NRES spectrograph design, the advantages it provides, and development challenges we encountered along the way. We will further discuss real-world performance from our first three units, initial science results, and the ongoing software development effort needed to automate such a facility for a wide array of science cases.

  19. Multi-camera networks for motion parameter estimation of an aircraft

    Directory of Open Access Journals (Sweden)

    Banglei Guan

    2017-02-01

    Full Text Available A multi-camera network is proposed to estimate an aircraft’s motion parameters relative to the reference platform in large outdoor fields. Multiple cameras are arranged to cover the aircraft’s large-scale motion space by field stitching. A camera calibration method using dynamic control points created by a multirotor unmanned aerial vehicle is presented, for the condition that the field of view of the cameras is void. The relative deformation of the camera network caused by external environmental factors is measured and compensated using a combination of cameras and laser rangefinders. A series of field experiments has been carried out using a fixed-wing aircraft without artificial markers, and the accuracy is evaluated using an onboard Differential Global Positioning System. The experimental results show that the multi-camera network is precise, robust, and highly dynamic and can improve the aircraft’s landing accuracy.

  20. Dual adaptive dynamic control of mobile robots using neural networks.

    Science.gov (United States)

    Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato

    2009-02-01

    This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.

  1. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
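
    The shunting equation referred to above follows the Grossberg form used in this line of work; a discretized Python sketch on a grid workspace is shown below, with gains, grid size, step size, and inputs as illustrative assumptions.

        import numpy as np

        A, B, D, DT = 10.0, 1.0, 1.0, 0.01
        H, W = 20, 20
        x = np.zeros((H, W))                      # neural activity landscape
        I = np.zeros((H, W))
        I[15, 15] = 100.0                         # target: positive external input
        I[8:12, 10] = -100.0                      # obstacle wall: negative input

        def step(x):
            exc = np.maximum(I, 0.0)              # [I]^+ term
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                # lateral excitation from the four neighbors' positive activity
                # (np.roll wraps at the borders; a real grid would not)
                exc += np.roll(np.maximum(x, 0.0), shift, axis=(0, 1))
            inh = np.maximum(-I, 0.0)             # [I]^- : inhibition stays local
            dx = -A * x + (B - x) * exc - (D + x) * inh
            return np.clip(x + DT * dx, -D, B)    # activity bounded in [-D, B]

        for _ in range(500):
            x = step(x)
        # The robot then moves greedily uphill on x toward the target, while the
        # negative obstacle region repels the planned path.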

  2. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
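
    A compact way to reproduce the experiment is to train an off-the-shelf MLP on forward-kinematics samples and then query it with desired end-effector poses; scikit-learn is used here as an assumed stand-in for the paper's network, and the planar 3-DOF arm with these link lengths is illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor  # stand-in for the paper's MLP

        L1, L2, L3 = 1.0, 0.8, 0.5    # illustrative link lengths, planar 3-DOF arm

        def forward(q):
            """Forward kinematics: joint angles -> end-effector (x, y, orientation)."""
            a1 = q[:, 0]
            a12 = q[:, 0] + q[:, 1]
            a123 = a12 + q[:, 2]
            x = L1 * np.cos(a1) + L2 * np.cos(a12) + L3 * np.cos(a123)
            y = L1 * np.sin(a1) + L2 * np.sin(a12) + L3 * np.sin(a123)
            return np.column_stack([x, y, a123])

        rng = np.random.default_rng(0)
        q_train = rng.uniform(-np.pi / 2, np.pi / 2, size=(10000, 3))
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        net.fit(forward(q_train), q_train)   # learn the inverse map: pose -> angles

        q_test = rng.uniform(-np.pi / 2, np.pi / 2, size=(5, 3))
        q_pred = net.predict(forward(q_test))
        print(np.abs(forward(q_pred) - forward(q_test)).max())  # end-effector error

    Because the inverse map is multi-valued, restricting the joint range as above keeps the training set near a single solution branch; this is the usual caveat with direct neural inverse kinematics.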

  3. A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots.

    Science.gov (United States)

    Turan, Mehmet; Shabbir, Jahanzaib; Araujo, Helder; Konukoglu, Ender; Sitti, Metin

    2017-01-01

    A reliable, real-time localization functionality is crucial for actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion based localization approach which combines endoscopic camera information and magnetic sensor based localization information. Results on a real pig stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.

  4. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    Science.gov (United States)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a Prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mmx61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The Prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mmx92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm thick window and a tip/tilt mechanism for the detector into 100mm depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding curtain mechanism to cover an aperture of 125mmx125mm, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend using through-vacuum wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. This

  5. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2017-11-03

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. The current camera holders are controlled by voice, joystick, eyeball tracking, or head movements, and this type of steering has proven to be successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis. This may improve acceptance by smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 various laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by the use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  6. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    In many applications multiple robots in Mobile Ad-hoc Networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services... A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group...

  7. Four Degree Freedom Robot Arm with Fuzzy Neural Network Control

    Directory of Open Access Journals (Sweden)

    Şinasi Arslan

    2013-01-01

    Full Text Available In this study, the control of a four degree of freedom robot arm has been realized with the computed torque control method. It is usually required that a four-jointed robot arm have high precision and good maneuverability for use in industrial applications. In addition, high-speed operation and externally applied loads play important roles. For those purposes, the computed torque control method has been developed so that the robot arm can track the given trajectory, enhancing the feedback control together with fuzzy neural network control. The simulation results have proved that the computed torque control with the neural network is successful in robot control.
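
    The computed torque law the study builds on has a standard form; in the Python sketch below the dynamics terms M, C, g and the gains are placeholders for a 4-DOF arm model, which in the authors' scheme the fuzzy neural network enhances and corrects.

        import numpy as np

        def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g,
                            Kp=np.diag([100.0] * 4), Kd=np.diag([20.0] * 4)):
            """Standard computed-torque law for a 4-DOF arm:

            tau = M(q) (qdd_des + Kd ed + Kp e) + C(q, qd) qd + g(q)

            M, C, g are callables returning the inertia matrix, Coriolis matrix,
            and gravity vector of the nominal model; mismatch with the real arm
            is what a learning component would compensate.
            """
            e = q_des - q
            ed = qd_des - qd
            v = qdd_des + Kd @ ed + Kp @ e    # stabilized reference acceleration
            return M(q) @ v + C(q, qd) @ qd + g(q)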

  8. Object manipulation by a humanoid robot via single camera pose estimation

    OpenAIRE

    Eskimez, Şefik Emre; Eskimez, Sefik Emre

    2013-01-01

    Humanoid robots are designed to be used in daily life as assistance robots for people. They are expected to fill jobs that require physical labor. These robots are also considered for the healthcare sector. The ultimate goal in humanoid robotics is to reach a point where robots can truly communicate with people and be a part of the labor force. The usual daily environment of a common person contains objects with different geometric and texture features. Such objects should be easily recognized, lo...

  9. Fractal gene regulatory networks for robust locomotion control of modular robots

    DEFF Research Database (Denmark)

    Zahadat, Payam; Christensen, David Johan; Schultz, Ulrik Pagh

    2010-01-01

    Designing controllers for modular robots is difficult due to the distributed and dynamic nature of the robots. In this paper fractal gene regulatory networks are evolved to control modular robots in a distributed way. Experiments with different morphologies of modular robot are performed...

  10. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    Directory of Open Access Journals (Sweden)

    Davinia Font

    2014-06-01

    Full Text Available This paper proposes the development of an automatic fruit harvesting system that combines a robotic arm and a low cost stereovision camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions.

  11. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm.

    Science.gov (United States)

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-06-30

    This paper proposes the development of an automatic fruit harvesting system that combines a robotic arm and a low cost stereovision camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions.
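
    The distance and diameter estimates reported in both records follow from standard stereo triangulation; a minimal Python sketch under idealized assumptions (rectified cameras, with focal length f in pixels and baseline B in meters both known):

        def stereo_fruit_measurements(d_px, f_px, baseline_m, diam_px):
            """Idealized rectified-stereo estimates for a detected fruit.

            d_px: horizontal disparity of the fruit center between the two images
            f_px: focal length in pixels; baseline_m: camera separation in meters
            diam_px: apparent fruit diameter in the image, in pixels
            """
            depth_m = f_px * baseline_m / d_px       # Z = f * B / disparity
            diameter_m = diam_px * depth_m / f_px    # back-project apparent size
            return depth_m, diameter_m

        # Example: 40 px disparity, f = 700 px, 6 cm baseline, 35 px apparent
        # diameter gives roughly (1.05 m, 0.052 m).
        print(stereo_fruit_measurements(40, 700, 0.06, 35))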

  12. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Full Text Available We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental proposal (i.e., increasing the training cycles) to ensure the occurrence of the short-term plasticity, and preliminarily demonstrated that the improvement of the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.

  13. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  14. Camera traps as sensor networks for monitoring animal communities

    OpenAIRE

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; C. Carbone; Rowcliffe, M.; Fountain, T; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording their movement in the Eulerian sense. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience ...

  15. Camera Traps as Sensor Networks for Monitoring Animal Communities

    OpenAIRE

    Kays, R.W.; Tilak, S.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliff, M.J.; Fountain, T.; Eggert, J.; He, Z.

    2011-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present not only new analytical opportunities, but also new data management challenges. This pa...

  16. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process, among them: (1) traditional computer vision challenges in tracking and recognition, such as robustness to pose, illumination, occlusion, and clutter, and recognition of objects and activities; (2) aggregating local information for wide

  17. Integrated Network Architecture for Sustained Human and Robotic Exploration

    Science.gov (United States)

    Noreen, Gary; Cesarone, Robert; Deutsch, Leslie; Edwards, Charles; Soloff, Jason; Ely, Todd; Cook, Brian; Morabito, David; Hemmati, Hamid; Piazolla, Sabino; hide

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Exploration Systems Enterprise is planning a series of human and robotic missions to the Earth's moon and to Mars. These missions will require communication and navigation services. This paper sets forth presumed requirements for such services and concepts for lunar and Mars telecommunications network architectures to satisfy the presumed requirements. The paper suggests that an inexpensive ground network would suffice for missions to the near side of the moon. A constellation of three Lunar Telecommunications Orbiters connected to an inexpensive ground network could provide continuous redundant links to a polar lunar base and its vicinity. For human and robotic missions to Mars, a pair of areostationary satellites could provide continuous redundant links between Earth and a mid-latitude Mars base, in conjunction with the Deep Space Network augmented by large arrays of 12-m antennas on Earth.

  18. Space Networking Demonstrated for Distributed Human-Robotic Planetary Exploration

    Science.gov (United States)

    Bizon, Thomas P.; Seibert, Marc A.

    2003-01-01

    Communications and networking experts from the NASA Glenn Research Center designed and implemented an innovative communications infrastructure for a simulated human-robotic planetary mission. The mission, which was executed in the Arizona desert during the first 2 weeks of September 2002, involved a diverse team of researchers from several NASA centers and academic institutions.

  19. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment

    Science.gov (United States)

    Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental proposal (i.e., increasing the training cycles) to ensure the occurrence of the short-term plasticity, and preliminarily demonstrated that the improvement of the robot’s performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074

  20. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks are proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. The system is specific to the patient, who activates a grasping motion by voice command, by mechanical switch, or by myoelectric impulse. The patient retains higher-level control, while lower-level control is provided by a neural network analogous to a miniature brain. During training, the patient teaches the miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  1. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS positioning data.

  2. Sedimentological Investigations of the Martian Surface using the Mars 2001 Robotic Arm Camera and MECA Optical Microscope

    Science.gov (United States)

    Rice, J. W., Jr.; Smith, P. H.; Marshall, J. R.

    1999-01-01

    The first microscopic sedimentological studies of the Martian surface will commence with the landing of the Mars Polar Lander (MPL) on December 3, 1999. The Robotic Arm Camera (RAC) has a resolution of 25 µm/pixel, which will permit detailed micromorphological analysis of surface and subsurface materials. The Robotic Arm will be able to dig up to 50 cm below the surface. The walls of the trench will also be inspected by the RAC to look for evidence of stratigraphic and/or sedimentological relationships. The 2001 Mars Lander will build upon and expand the sedimentological research begun by the RAC on MPL. This will be accomplished by: (1) macroscopic (dm to cm): Descent Imager, Pancam, RAC; (2) microscopic (mm to µm): RAC, MECA Optical Microscope, AFM. This paper will focus on investigations that can be conducted by the RAC and the MECA Optical Microscope.

  3. Camera traps as sensor networks for monitoring animal communities

    NARCIS (Netherlands)

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliffe, M.; Fountain, T.; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording

  4. Camera Traps as Sensor Networks for Monitoring Animal Communities

    NARCIS (Netherlands)

    Kays, R.W.; Tilak, S.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliff, M.J.; Fountain, T.; Eggert, J.; He, Z.

    2011-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a broad range of species providing

  5. Robotic Arm Camera Image of the South Side of the Thermal and Evolved-Gas Analyzer (Door TA4)

    Science.gov (United States)

    2008-01-01

    The Thermal and Evolved-Gas Analyzer (TEGA) instrument aboard NASA's Phoenix Mars Lander is shown with one set of oven doors open and dirt from a sample delivery. After the 'seventh shake' of TEGA, a portion of the dirt sample entered the oven via a screen for analysis. This image was taken by the Robotic Arm Camera on Sol 18 (June 13, 2008), or 18th Martian day of the mission. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  6. Remote Lab for Robotics Applications

    Directory of Open Access Journals (Sweden)

    Robinson Jiménez

    2018-01-01

    Full Text Available This article describes the development of a remote lab environment used for testing and training sessions in robotics tasks. The environment is made up of components and devices based on two robotic arms, a network link, an Arduino card and an Arduino Ethernet shield, as well as an IP camera. The remote laboratory is implemented to perform remote control of the robotic arms with visual feedback, via the camera, of the robots' actions; with a group of test users, it was possible to obtain performance of up to 92% in telecontrol tasks.

  7. Neural network output feedback control of robot formations.

    Science.gov (United States)

    Dierks, Travis; Jagannathan, Sarangapani

    2010-04-01

    In this paper, a combined kinematic/torque output feedback control law is developed for leader-follower-based formation control using backstepping to accommodate the dynamics of the robots and the formation in contrast with kinematic-based formation controllers. A neural network (NN) is introduced to approximate the dynamics of the follower and its leader using online weight tuning. Furthermore, a novel NN observer is designed to estimate the linear and angular velocities of both the follower robot and its leader. It is shown, by using the Lyapunov theory, that the errors for the entire formation are uniformly ultimately bounded while relaxing the separation principle. In addition, the stability of the formation in the presence of obstacles, is examined using Lyapunov methods, and by treating other robots in the formation as obstacles, collisions within the formation are prevented. Numerical results are provided to verify the theoretical conjectures.

  8. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    Science.gov (United States)

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
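
    Although the record breaks off, the cited relationship can be made explicit: writing each homogeneous transform as a rotation R and translation t, AX = ZB splits into a rotation equation and a coupled translation equation (a standard decomposition, not specific to this paper), with A and B the measured robot and camera poses at each station and X, Z the fixed unknowns.

        \begin{aligned}
          A X &= Z B \\
          R_A R_X &= R_Z R_B \\
          R_A t_X + t_A &= R_Z t_B + t_Z
        \end{aligned}

    Reprojection-error-based parameterizations, as in the record's title, then refine X and Z by minimizing pixel error rather than solving these equations algebraically.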

  9. Comparative Study between Robust Control of Robotic Manipulators by Static and Dynamic Neural Networks

    OpenAIRE

    Ghrab, Nadya; Kallel, Hichem

    2013-01-01

    A comparative study between static and dynamic neural networks for robotic system control is considered. Two approaches to neural robot control were selected, presented, and compared. One uses a static neural network; the other uses a dynamic neural network. Both compensate for the nonlinear modeling and uncertainties of robotic systems. The first approach is direct; it approximates the nonlinearities and uncertainties by a static neural network. The second approach is indirect; it uses a dyna...

  10. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-01-01

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624

  11. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  12. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN, trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
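
    The "navigation via classification" decomposition described in these three records can be made concrete with a small CNN that maps one spherical frame to a discrete heading bin. The Python sketch below assumes PyTorch; the architecture, input size, and bin count are illustrative, not the authors' network.

        import torch
        import torch.nn as nn

        N_HEADINGS = 8     # discrete heading bins; illustrative count

        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, N_HEADINGS),       # logits over heading directions
        )

        loss_fn = nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        def train_step(frames, heading_labels):
            """frames: (B, 3, H, W) panorama tensors; heading_labels: (B,) bins."""
            opt.zero_grad()
            loss = loss_fn(model(frames), heading_labels)
            loss.backward()
            opt.step()
            return loss.item()

        # At run time the robot steers toward the argmax of the predicted logits,
        # optionally thresholding on softmax confidence as the abstract describes.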

  13. Over-Snow Robots for Polar Instrument Networks

    Science.gov (United States)

    Lever, J.; Ray, L.

    2006-12-01

    - bandwidth data relays for stationary instrumentation. We are proposing a five-year project to upgrade the Cool Robot and demonstrate its utility to support polar science. We will refine the design for reliable, long-duration deployment in Antarctica and Greenland, construct 5 prototypes, quantify their capabilities through field tests, and commission the network by conducting several polar-science demonstration projects. Upgraded goals include 1,500-2,000-km summertime traverses of Antarctica and Greenland, safe navigation through 0.5-m amplitude sastrugi fields, survival in blizzards, and network adaptation to research events of opportunity. We are seeking polar scientists interested in using Cool Robots on research projects and will adapt the robot to their requirements.

  14. Embodying cultured networks with a robotic drawing arm.

    Science.gov (United States)

    Bakkum, Douglas J; Chao, Zenas C; Gamblen, Phil; Ben-Ary, Guy; Shkolnik, Alec G; DeMarse, Thomas B; Potter, Steve M

    2007-01-01

    The advanced and robust computational power of the brain is shown by the complex behaviors it produces. By embodying living cultured neuronal networks with a robotic or simulated animal (animat) and situating them within an environment, we study how the basic principles of neuronal network communication can culminate into adaptive goal-directed behavior. We engineered a closed-loop biological-robotic drawing machine and explored sensory-motor mappings and training. Preliminary results suggest that real-time performance-based feedback allowed an animat to draw in desired directions. This approach may help instruct the future design of artificial neural systems and of the algorithms to interface sensory and motor prostheses with the brain.

  15. Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT

    Science.gov (United States)

    Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Webcam networks intended for scientific monitoring of ecosystems are providing digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g. traffic webcams for phenological studies, or camera networks for ski tracks and avalanche monitoring over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software which can obtain and handle images from different networks having different protocols and standards is necessary. For the analysis of images from webcam networks, numerous software packages are freely available. These software packages have different strong features not only for analyzing but also for post-processing digital images. But specifically for ease of use, applicability and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images of comprehensive camera networks, but also considering the possibility to create operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, entitled the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installations, • communication with multiple camera networks, • automatic downloading and handling of images, • user-friendly and simple user interface, • data filtering, • visualizing results on customizable plots, • plugins, which allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The color fraction extraction analysis calculates the fractions of colors in a region of interest for red, green and blue, along with brightness and luminance parameters. The

  16. Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

    Science.gov (United States)

    Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations over all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
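
    The record gives the energy function only in outline; a toy per-target, per-camera score combining person-person and object-person occlusion terms might look like the Python sketch below, with all geometry and weights as illustrative assumptions.

        import numpy as np

        def observation_energy(target_xy, others_xy, obstacles_xy, cam_xy,
                               w_person=1.0, w_object=2.0, radius=0.4):
            """Lower energy = better view; inputs are 2-D numpy positions.

            A blocker adds cost when it lies near the line of sight between the
            camera and the target, ramping up as it approaches the line.
            """
            def blocking_cost(p):
                v = target_xy - cam_xy
                u = p - cam_xy
                t = np.dot(u, v) / np.dot(v, v)      # projection along sight line
                if not 0.0 < t < 1.0:
                    return 0.0                       # not between camera and target
                d = np.linalg.norm(u - t * v)        # distance from the sight line
                return max(0.0, 1.0 - d / radius)
            e = w_person * sum(blocking_cost(p) for p in others_xy)
            e += w_object * sum(blocking_cost(p) for p in obstacles_xy)
            return e

    Each smart camera evaluates such a score for every target it tracks; when its own score is poor, it hands the target's tracking task to the assisting camera whose estimated score is best.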

  17. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is the subject of research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's computational (mobile) power, we think that the optimal camera network should be designed in the field, thereby making preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimal camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network is designed, assuming a high-resolution state-of-the-art non-metric camera. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental tests, we found that, after optimization, maximum coverage is attained alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include reliable and detailed modeling of the object, applying sophisticated dense matching techniques.
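
    The penalty-method step mentioned above can be illustrated on a toy 2-D version of the problem: choose camera angles on a circle around the object so that every point of a rough point cloud is covered by at least K cameras, charging violated coverage constraints as penalties. SciPy is assumed; all geometry, counts, and weights are illustrative.

        import numpy as np
        from scipy.optimize import minimize   # standard solver for the penalized objective

        rng = np.random.default_rng(1)
        points = rng.uniform(-0.5, 0.5, size=(200, 2))   # rough point cloud (2-D toy)
        K, N_CAMS, R, HALF_FOV = 2, 8, 2.0, np.radians(15)

        def coverage(angles):
            """Smooth (sigmoid) count of cameras seeing each point, so the
            penalty is not piecewise constant for the optimizer."""
            seen = np.zeros(len(points))
            for a in angles:
                cam = R * np.array([np.cos(a), np.sin(a)])
                view_dir = -cam / np.linalg.norm(cam)    # camera looks at the center
                rays = points - cam
                rays /= np.linalg.norm(rays, axis=1, keepdims=True)
                margin = rays @ view_dir - np.cos(HALF_FOV)
                seen += 1.0 / (1.0 + np.exp(-50.0 * margin))  # soft "inside the FOV"
            return seen

        def objective(angles, mu=100.0):
            """Penalty objective: points covered by fewer than K cameras are charged."""
            shortfall = np.maximum(0.0, K - coverage(angles))
            return mu * shortfall.sum()

        x0 = np.linspace(0.0, 2.0 * np.pi, N_CAMS, endpoint=False)
        res = minimize(objective, x0, method="Nelder-Mead")
        print(res.fun, np.degrees(res.x) % 360.0)        # residual penalty, angles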

  18. Web based educational tool for neural network robot control

    Directory of Open Access Journals (Sweden)

    Jure Čas

    2007-05-01

    Full Text Available Abstract— This paper describes an application for teleoperation of a SCARA robot via the internet. The SCARA robot is used by students of mechatronics at the University of Maribor as a remote educational tool. The developed software consists of two parts, i.e., the continuous neural network sliding mode controller (CNNSMC) and the graphical user interface (GUI). The application is based on two well-known, commercially available software packages, i.e., MATLAB/Simulink and LabVIEW. MATLAB/Simulink and the DSP2 Library for Simulink are used for control algorithm development, simulation and executable code generation. While this code executes on the DSP-2 Roby controller and drives the real process through the analog and digital I/O lines, a LabVIEW virtual instrument (VI) running on the PC is used as the user front end. The LabVIEW VI provides the ability for on-line parameter tuning, signal monitoring, on-line analysis and, via Remote Panels technology, teleoperation. The main advantage of a CNNSMC is the exploitation of its self-learning capability. When friction or an unexpected impediment occurs, for example, the user of a remote application has no information about the changed robot dynamics and is thus unable to compensate for it manually. This is no longer a control problem because, when a CNNSMC is used, any change in the robot dynamics is approximated independently of the remote user. Index Terms—LabVIEW; Matlab/Simulink; Neural network control; remote educational tool; robotics

  19. Real-Time Range Sensing Video Camera for Human/Robot Interfacing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  20. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    OpenAIRE

    Ahmed R. J. Almusawi; L. Canan Dülger; Sadettin Kapucu

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, while the traditional...
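
    A minimal sketch of the proposed input pattern, using a toy 2-DOF planar arm and scikit-learn in place of the Denso arm and the authors' network: the input concatenates the current joint angles with the desired pose, and the network regresses the goal joint angles. Link lengths, sample counts, and network sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.8   # link lengths of a toy 2-DOF planar arm (not the Denso)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(5000):
    q_now = rng.uniform(-np.pi, np.pi, 2)    # current joint configuration
    q_des = q_now + rng.uniform(-0.3, 0.3, 2)  # nearby goal configuration
    # Input pattern = current joint angles + desired pose (the paper's idea);
    # the network output is the goal joint configuration.
    X.append(np.concatenate([q_now, fk(q_des)]))
    Y.append(q_des)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=800, random_state=0)
net.fit(np.array(X), np.array(Y))

q_now = np.array([0.4, 0.6])
q_goal = q_now + np.array([0.1, -0.1])
q_hat = net.predict([np.concatenate([q_now, fk(q_goal)])])[0]
print("pose error:", np.linalg.norm(fk(q_hat) - fk(q_goal)))
```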

  1. Behavior Emergence in Autonomous Robot Control by Means of Evolutionary Neural Networks

    Science.gov (United States)

    Neruda, Roman; Slušný, Stanislav; Vidnerová, Petra

    We study the emergence of intelligent behavior in a simple mobile robot. The robot control system is realized by mechanisms based on neural networks and evolutionary algorithms. The evolutionary algorithm is responsible for adapting the neural network parameters based on the robot's performance in a simulated environment. In experiments, we demonstrate the performance of the evolutionary algorithm on selected problems, namely maze exploration and discrimination of walls and cylinders. A comparison of different network architectures is presented and discussed.
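
    The following sketch shows the general shape of such weight evolution; `simulate_robot` is a dummy fitness landscape standing in for the simulated maze run, and the population size, mutation rate, and network size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_WEIGHTS = 40          # assumed size of the robot controller network

def simulate_robot(weights):
    """Stand-in for the simulated maze run: returns a fitness score.

    In the paper this would run the neural controller in the simulator and
    score exploration performance; here a dummy landscape is used instead.
    """
    return -np.sum((weights - 0.5) ** 2)

def evolve(pop_size=30, generations=100, sigma=0.1, elite=5):
    pop = rng.normal(0.0, 1.0, (pop_size, N_WEIGHTS))
    for _ in range(generations):
        fitness = np.array([simulate_robot(w) for w in pop])
        parents = pop[np.argsort(fitness)[-elite:]]       # keep the best
        children = [parents[rng.integers(elite)] + rng.normal(0, sigma, N_WEIGHTS)
                    for _ in range(pop_size - elite)]     # mutate copies
        pop = np.vstack([parents, children])
    return pop[np.argmax([simulate_robot(w) for w in pop])]

best = evolve()
print("best fitness:", simulate_robot(best))
```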

  2. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  3. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    Science.gov (United States)

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  4. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  5. A cultured human neural network operates a robotic actuator.

    Science.gov (United States)

    Pizzi, R M R; Rossetti, D; Cino, G; Marino, D; A L Vescovi; Baer, W

    2009-02-01

    The development of bio-electronic prostheses, hybrid human-electronics devices and bionic robots has been the aim of many researchers. Although neurophysiologic processes have been widely investigated and bio-electronics has developed rapidly, the dynamics of a biological neuronal network that receives sensory inputs and stores and controls information are not yet understood. Toward this end, we have taken an interdisciplinary approach to study the learning and response of biological neural networks to complex stimulation patterns. This paper describes the design, execution, and results of several experiments performed in order to investigate the behavior of complex interconnected structures found in biological neural networks. The experimental design consisted of biological human neurons stimulated by parallel signal patterns intended to simulate complex perceptions. The response patterns were analyzed with an innovative artificial neural network (ANN), called ITSOM (Inductive Tracing Self Organizing Map). This system allowed us to decode the complex neural responses from a mixture of different stimulations and learned memory patterns inherent in the cell colonies. In the experiment described in this work, neurons derived from human neural stem cells were connected to a robotic actuator through the ANN analyzer to demonstrate our ability to produce useful control from simulated perceptions stimulating the cells. Preliminary results showed that in vitro human neuron colonies can learn to respond selectively to different stimulation patterns and that response signals can be effectively decoded to operate a minirobot. Lastly, the performance of the hybrid system is evaluated quantitatively and potential future work is discussed.

  6. Optimizing Double-Network Hydrogel for Biomedical Soft Robots.

    Science.gov (United States)

    Banerjee, Hritwick; Ren, Hongliang

    2017-09-01

    Double-network hydrogel with standardized chemical parameters is a reasonable and viable alternative to silicone in soft robotic fabrication due to its biocompatibility, comparable mechanical properties, and customizability through the alteration of key variables. The most viable hydrogel sample in our article shows a tensile strain of 851% and a maximum tensile strength of 0.273 MPa. The elasticity and strength range of this hydrogel can be customized according to application requirements by simple alterations to the recipe. Furthermore, we incorporated Agar/PAM hydrogel into our highly constrained soft pneumatic actuator (SPA) design and produced SPAs with enhanced capabilities, such as a larger range of motion, higher force output, and better power efficiency. The Agar/PAM hydrogel exhibits low viscosity, thermo-reversibility, and ultralow elasticity, which we believe can be combined with the other functions of hydrogel to tailor a better solution for fabricating biocompatible soft robots.

  7. Tracking Mobile Robot in Indoor Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Liping Zhang

    2014-01-01

    Full Text Available This work addresses the problem of tracking mobile robots in indoor wireless sensor networks (WSNs). Our approach is based on a localization scheme using RSSI (received signal strength indication), which is widely used in WSNs. The developed tracking system is designed for continuous estimation of the robot's trajectory. A WSN, composed of many very simple and cheap wireless sensor nodes, is deployed in a specific region of interest. The wireless sensor nodes collect RSSI information sent by the mobile robots. A range-based data fusion scheme is used to estimate the robot's trajectory. Moreover, a Kalman filter is designed to improve tracking accuracy. Experiments are provided to assess the performance of the proposed scheme.
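
    A compact sketch of the processing chain the abstract describes, under a standard log-distance path-loss model: RSSI readings are converted to ranges, ranges are fused by linearized least squares, and a constant-velocity Kalman filter smooths the fixes. All constants (P0, the path-loss exponent, noise covariances) are assumptions, not values from the paper.

```python
import numpy as np

P0, N_PL = -40.0, 2.5     # assumed RSSI at 1 m and path-loss exponent

def rssi_to_distance(rssi):
    """Invert the log-distance path-loss model RSSI = P0 - 10*n*log10(d)."""
    return 10 ** ((P0 - rssi) / (10 * N_PL))

def multilaterate(anchors, dists):
    """Linearized least squares: subtract the first anchor's range equation."""
    A = 2 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Constant-velocity Kalman filter over positions from the range-fusion step.
dt = 0.5
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q, R = 0.01 * np.eye(4), 0.5 * np.eye(2)
x, P = np.zeros(4), np.eye(4)

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
rssi = P0 - 10 * N_PL * np.log10(np.linalg.norm(anchors - true_pos, axis=1))
z = multilaterate(anchors, rssi_to_distance(rssi))
x, P = kf_step(x, P, z)
print("raw fix:", z, " filtered:", x[:2])
```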

  8. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  9. Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room.

    Science.gov (United States)

    Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz

    2016-07-01

    Scene supervision is a major tool for making medical robots safer and more intuitive. The paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration and object detection as well as people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into a modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.

  10. Idiotypic immune networks in mobile-robot control.

    Science.gov (United States)

    Whitbrook, Amanda M; Aickelin, Uwe; Garibaldi, Jonathan M

    2007-12-01

    Jerne's idiotypic-network theory postulates that the immune response involves interantibody stimulation and suppression, as well as matching to antigens. The theory has proved the most popular artificial immune system (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with nonidiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a reinforcement-learning (RL)-based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity. The basic RL, a simplified hybrid AIS-RL that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme are examined. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds detecting and tracking door markers.
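
    As an illustration of how idiotypic stimulation and suppression can be implemented, the sketch below runs Farmer-style concentration dynamics over a small antibody repertoire (antibodies standing for candidate robot behaviours); the matching matrices and constants are random placeholders rather than the paper's hand-designed values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ab = 6                                  # antibodies = candidate behaviours
match_ag = rng.uniform(0, 1, n_ab)        # antigen (sensor state) match m_i
stim = rng.uniform(0, 1, (n_ab, n_ab))    # stim[i, j]: stimulation of i by j
supp = rng.uniform(0, 1, (n_ab, n_ab))    # supp[i, j]: suppression of i by j
c = np.full(n_ab, 1.0)                    # antibody concentrations
k_death, dt = 0.4, 0.05

# Farmer-style dynamics: an antibody's concentration grows with antigen
# match and stimulation from other antibodies, and shrinks with suppression
# and a constant death rate.
for _ in range(200):
    growth = match_ag + stim @ c - supp @ c - k_death
    c = np.clip(c + dt * c * growth, 0.0, 10.0)

# Idiotypic selection: act on the behaviour with the best combined score.
print("selected behaviour:", int(np.argmax(c * match_ag)))
```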

  11. Three-dimensional needle-tip localization by electric field potential and camera hybridization for needle electromyography exam robotic simulator.

    Science.gov (United States)

    He, Siyu; Gomez-Tames, Jose; Yu, Wenwei

    2016-01-01

    As one of the standard neurological tests, the needle electromyography exam (NEE) plays an important role in evaluating the conditions of nerves and muscles. Neurology interns and novice medical staff need repetitive training to improve their skills in performing the exam. However, no training systems are able to reproduce multiple pathological conditions to simulate a real needle electromyography exam. For the development of a robotic simulator, three components need to be realized: physical modeling of upper limb morphological features, position-dependent electromyogram generation, and needle localization; the latter is the focus of this study. Our idea is to couple two types of sensing mechanisms in order to acquire the needle-tip position with high accuracy. One is to segment the needle from camera images and calculate its insertion point on the skin surface by a top-hat transform algorithm. The other is voltage-based depth measurement, in which a conductive tissue-like phantom is used to realize both needle-tip localization and the physical sense of needle insertion. For that, a pair of electrodes was designed to generate a near-linear voltage distribution along the depth direction of the tissue-like phantom. The accuracy of the needle-tip position was investigated with the electric field potential and camera hybridization. The results showed that the needle tip could be detected with an accuracy of 1.05±0.57 mm.

  12. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. In actual tests, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and our improvements to the network structure and training parameters.
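
    The augmentation step mentioned in the abstract is straightforward to reproduce; a possible NumPy version of the two noise models is sketched below (the noise levels and image size are assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

def add_gaussian_noise(img, sigma=10.0):
    """Additive Gaussian noise, clipped back to the valid 8-bit range."""
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, amount=0.02):
    """Flip a random fraction of pixels to pure black or pure white."""
    out = img.copy()
    mask = rng.random(img.shape[:2])
    out[mask < amount / 2] = 0            # pepper
    out[mask > 1 - amount / 2] = 255      # salt
    return out

image = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in frame
augmented = [add_gaussian_noise(image), add_salt_and_pepper(image)]
print([a.shape for a in augmented])
```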

  13. Neural network control of mobile robot formations using RISE feedback.

    Science.gov (United States)

    Dierks, Travis; Jagannathan, S

    2009-04-01

    In this paper, an asymptotically stable (AS) combined kinematic/torque control law is developed for leader-follower-based formation control using backstepping in order to accommodate the complete dynamics of the robots and the formation, and a neural network (NN) is introduced along with robust integral of the sign of the error feedback to approximate the dynamics of the follower as well as its leader using online weight tuning. It is shown using Lyapunov theory that the errors for the entire formation are AS and that the NN weights are bounded as opposed to uniformly ultimately bounded stability which is typical with most NN controllers. Additionally, the stability of the formation in the presence of obstacles is examined using Lyapunov methods, and by treating other robots in the formation as obstacles, collisions within the formation do not occur. The asymptotic stability of the follower robots as well as the entire formation during an obstacle avoidance maneuver is demonstrated using Lyapunov methods, and numerical results are provided to verify the theoretical conjectures.

  14. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach, combined with linear quadratic regulator (LQR) control, for a quarter-car active suspension system to stabilize the captured image area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which corrects the optimal state-feedback control law. The weight matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
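
    A schematic version of the control structure described above: a fixed LQR gain provides the nominal force, an RBF network adds a learned compensation term, and the RBF output weights are tuned online from the state error. The gain, centers, and update law here are simplified placeholders, not the paper's Lyapunov-derived design.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed quarter-car state x = [body disp, body vel, wheel disp, wheel vel];
# K is a stand-in for the precomputed LQR gain, not the paper's values.
K = np.array([[5000.0, 2000.0, -300.0, 100.0]])

centers = rng.uniform(-1, 1, (25, 4))     # RBF centers over the state space
width = 0.5
W = np.zeros((25, 1))                     # adaptive output weights
lr = 0.05

def phi(x):
    """Gaussian radial basis activations for state x."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * width ** 2))

def control(x, x_ref):
    """Optimal state feedback corrected by the adaptive RBF term."""
    e = x - x_ref
    u_lqr = -(K @ x)                      # nominal LQR actuator force
    u_nn = phi(x) @ W                     # learned compensation component
    return (u_lqr + u_nn).item(), e

def adapt(x, e):
    """Online weight update driven by the state error (a sketch, not the
    paper's Lyapunov-derived adaptation law)."""
    global W
    W += lr * np.outer(phi(x), e[:1])     # correct with body-displacement error

x, x_ref = rng.normal(0, 0.05, 4), np.zeros(4)
u, e = control(x, x_ref)
adapt(x, e)
print("actuator force:", u)
```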

  15. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  16. A magnetic levitation robotic camera for minimally invasive surgery: Useful for NOTES?

    Science.gov (United States)

    Di Lorenzo, Nicola; Cenci, Livia; Simi, Massimiliano; Arcudi, Claudio; Tognoni, Valeria; Gaspari, Achille Lucio; Valdastri, Pietro

    2017-06-01

    Minimally invasive surgery (MIS) has risen in popularity, generating a revolution in operative medicine over the past few decades. Although laparoscopic techniques have not significantly changed in the last 10 years, several advances have been made in visualization devices and instrumentation. Our team, composed of surgeons and biomedical engineers, developed a magnetic levitation camera (MLC) with a magnetic internal mechanism dedicated to MIS. Three animal trials were performed. An acute porcine model was chosen after animal ethical committee approval, and laparoscopic cholecystectomy, nephrectomy and hernioplastic repair were performed. The MLC makes it possible to complete several two-port laparoscopic surgeries efficiently, reducing invasiveness for patients while preserving the surgeon's dexterity. We strongly believe that insertable and softly tethered devices like the MLC will be an integral part of future surgical systems, improving procedure efficiency, minimizing invasiveness and enhancing the surgeon's dexterity and versatility of viewing angles.

  17. Navigation of autonomous mobile robot using different activation functions of wavelet neural network

    Directory of Open Access Journals (Sweden)

    Panigrahi Pratap Kumar

    2015-03-01

    Full Text Available An autonomous mobile robot is a robot which can move and act autonomously without human assistance. The navigation problem of a mobile robot in an unknown environment is an interesting research area. This is the problem of deducing a path for the robot from its initial position to a given goal position without collision with obstacles. Different methods such as fuzzy logic, neural networks, etc. are used to find a collision-free path for a mobile robot. This paper examines the path-planning behavior of a mobile robot using three activation functions of a wavelet neural network, i.e., the Mexican hat, Gaussian and Morlet wavelet functions, in MATLAB. The simulation results show that the WNN has a faster learning speed than a traditional artificial neural network.
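
    The three activation functions compared in the paper have standard closed forms; a sketch of them, together with a wavelet-neuron layer that applies dilation and translation to a weighted input, is given below (the layer structure and parameters are our simplification, not the paper's network).

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat (Ricker) wavelet: second derivative of a Gaussian."""
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def gaussian_wavelet(x):
    """Gaussian-derivative wavelet (first derivative of a Gaussian)."""
    return -x * np.exp(-x ** 2 / 2.0)

def morlet(x, w0=5.0):
    """Real-valued Morlet wavelet: cosine carrier under a Gaussian envelope."""
    return np.cos(w0 * x) * np.exp(-x ** 2 / 2.0)

def wnn_layer(x, w, translations, dilations, wavelet=mexican_hat):
    """Wavelet-neuron layer: each hidden unit applies a dilated and
    translated wavelet to its weighted input sum."""
    z = (w @ x - translations) / dilations
    return wavelet(z)

x = np.array([0.2, -0.4, 0.9])                 # e.g. range-sensor readings
w = np.ones((5, 3)) * 0.3
print(wnn_layer(x, w, translations=np.zeros(5), dilations=np.ones(5)))
```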

  18. ISOLDE target zone HRS robot, Camera A+B Part2

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE HRS robot picking up a target from the exchange point seen from different angles. Posing a target onto a shelf position behind the lead shielding doors and picking it up again bringing it back to the exchange point. Close up picking up a target from the exchange point. Close up posing a target onto a shelf position. Picking up a target from a shelf position seen from the target front end towards the zone entrance and taking it to the exchange point and vice versa. Target handling at the exchange point position from a different angle.

  19. ISOLDE target zone HRS robot, Camera A+B Part1

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE HRS robot picking up a target from the exchange point seen from different angles. Posing a target onto a shelf position behind the lead shielding doors and picking it up again bringing it back to the exchange point. Close up picking up a target from the exchange point. Close up posing a target onto a shelf position. Picking up a target from a shelf position seen from the target front end towards the zone entrance and taking it to the exchange point and vice versa. Target handling at the exchange point position from a different angle.

  20. ISOLDE target zone HRS robot, Camera A+B Part2 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE HRS robot picking up a target from the exchange point seen from different angles. Posing a target onto a shelf position behind the lead shielding doors and picking it up again bringing it back to the exchange point. Close up picking up a target from the exchange point. Close up posing a target onto a shelf position. Picking up a target from a shelf position seen from the target front end towards the zone entrance and taking it to the exchange point and vice versa. Target handling at the exchange point position from a different angle.

  1. ISOLDE target zone HRS robot, Camera A+B Part1 HD

    CERN Multimedia

    2016-01-01

    Sequences of the ISOLDE HRS robot picking up a target from the exchange point seen from different angles. Posing a target onto a shelf position behind the lead shielding doors and picking it up again bringing it back to the exchange point. Close up picking up a target from the exchange point. Close up posing a target onto a shelf position. Picking up a target from a shelf position seen from the target front end towards the zone entrance and taking it to the exchange point and vice versa. Target handling at the exchange point position from a different angle.

  2. Kinematic Analysis of 3-DOF Planer Robot Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Jolly Atit Shah

    2012-07-01

    Full Text Available Automatic control of a robotic manipulator involves the study of kinematics and dynamics as a major issue. This paper addresses the forward and inverse kinematics of a 3-DOF robotic manipulator with revolute joints. In this study the Denavit-Hartenberg (D-H) model is used to model the robot links and joints. Forward and inverse kinematics solutions are also obtained using artificial neural networks for the 3-DOF robotic manipulator. The results show that by using an artificial neural network the solution is faster and acceptable, with zero error.
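
    For a 3-DOF planar arm, the forward kinematics that the D-H model yields collapse to simple sums of link projections, as in the sketch below (link lengths are assumed). Sampled (pose, joint-angle) pairs from such a function are exactly the kind of training data an ANN inverse-kinematics solver needs.

```python
import numpy as np

L = [1.0, 0.8, 0.5]   # assumed link lengths of the 3-DOF planar arm

def forward_kinematics(q):
    """End-effector pose (x, y, phi) of a 3-revolute-joint planar arm.

    Equivalent to chaining the three D-H transforms for a planar chain:
    x = l1*c1 + l2*c12 + l3*c123, y likewise with sines, phi = q1+q2+q3.
    """
    c = np.cumsum(q)                    # q1, q1+q2, q1+q2+q3
    x = np.sum(L * np.cos(c))
    y = np.sum(L * np.sin(c))
    return np.array([x, y, c[-1]])

print(forward_kinematics(np.radians([30.0, 45.0, -20.0])))
```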

  3. Patent Network Analysis and Quadratic Assignment Procedures to Identify the Convergence of Robot Technologies.

    Directory of Open Access Journals (Sweden)

    Woo Jin Lee

    Full Text Available Because of the remarkable developments in robotics in recent years, technological convergence has been active in this area. We focused on finding patterns of convergence within robot technology using network analysis of patents in both the USPTO and KIPO. To identify the variables that affect convergence, we used quadratic assignment procedures (QAP). From our analysis, we observed the patent network ecology related to convergence and found technologies that have great potential to converge with other robotics technologies. The results of our study are expected to contribute to setting up convergence-based R&D policies for robotics, which can lead to new innovation.

  4. Robotics.

    Science.gov (United States)

    Waddell, Steve; Doty, Keith L.

    1999-01-01

    "Why Teach Robotics?" (Waddell) suggests that the United States lags behind Europe and Japan in use of robotics in industry and teaching. "Creating a Course in Mobile Robotics" (Doty) outlines course elements of the Intelligent Machines Design Lab. (SK)

  5. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    Science.gov (United States)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design based on the absolute position measured by an external PSD vision sensor for a redundant robot manipulator. The redundancy provides the potential capability to avoid obstacles while continuing given end-effector jobs under contact with a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation. The detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.

  6. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    Science.gov (United States)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g., to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our images they often appear in small quantities among the dominant species. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network from the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-color-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for
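
    The color indices underlying this analysis are simple channel fractions; a sketch of computing the green and red chromatic coordinates over a masked birch region is shown below (the frame and region of interest are stand-ins). Tracking these values over an image time series yields the season start and decline dates discussed above.

```python
import numpy as np

def chromatic_coordinates(img, mask=None):
    """Green and red chromatic coordinates (GCC, RCC) of an image region.

    `mask` selects the birch pixels (a region of interest); when omitted
    the whole frame is used.
    """
    rgb = img.reshape(-1, 3).astype(np.float64)
    if mask is not None:
        rgb = rgb[mask.reshape(-1)]
    r, g, b = rgb.sum(axis=0)              # per-channel digital-number sums
    total = r + g + b
    return g / total, r / total            # GCC, RCC

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in image
roi = np.zeros((480, 640), dtype=bool)
roi[100:200, 300:400] = True               # assumed birch image element
gcc, rcc = chromatic_coordinates(frame, roi)
print(f"GCC={gcc:.3f}  RCC={rcc:.3f}")
```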

  7. RoboSmith: Wireless Networked Architecture for Multiagent Robotic System

    OpenAIRE

    Florin Moldoveanu; Doru Ursutiu; Dan Floroian; Laura Floroian

    2010-01-01

    This paper presents an architecture for a flexible mini robot for a multiagent robotic system. In a multiagent system the value of an individual agent is negligible since the goal of the system is essential. Thus, the agents (robots) need to be small, low cost and cooperative. RoboSmith robots are designed based on these conditions. The proposed architecture divides a robot into functional modules such as locomotion, control, sensors, communication, and actuation. Any mobile robot can be const...

  8. Three-dimensional needle-tip localization by electric field potential and camera hybridization for needle electromyography exam robotic simulator

    Directory of Open Access Journals (Sweden)

    He SY

    2016-06-01

    Full Text Available Siyu He,1 Jose Gomez-Tames,1 Wenwei Yu1,2 1Medical System Engineering Department, Graduate School of Engineering, 2Center for Frontier Medical Engineering, Chiba University, Chiba, Japan Abstract: As one of the standard neurological tests, the needle electromyography exam (NEE) plays an important role in evaluating the conditions of nerves and muscles. Neurology interns and novice medical staff need repetitive training to improve their skills in performing the exam. However, no training systems are able to reproduce multiple pathological conditions to simulate a real needle electromyography exam. For the development of a robotic simulator, three components need to be realized: physical modeling of upper limb morphological features, position-dependent electromyogram generation, and needle localization; the latter is the focus of this study. Our idea is to couple two types of sensing mechanisms in order to acquire the needle-tip position with high accuracy. One is to segment the needle from camera images and calculate its insertion point on the skin surface by a top-hat transform algorithm. The other is voltage-based depth measurement, in which a conductive tissue-like phantom was used to realize both needle-tip localization and the physical sense of needle insertion. For that, a pair of electrodes was designed to generate a near-linear voltage distribution along the depth direction of the tissue-like phantom. The accuracy of the needle-tip position was investigated with the electric field potential and camera hybridization. The results showed that the needle tip could be detected with an accuracy of 1.05±0.57 mm. Keywords: needle-tip localization, needle EMG exam, top-hat transform, tissue-like phantom, voltage distribution simulation

  9. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Full Text Available Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods.

  10. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods.

  11. Report on NSF/ARO/ONR Workshop on Distributed Camera Networks: Research Challenges and Future Directions

    Science.gov (United States)

    Bhanu, Bir; Roy Chowdhury, Amit

    Large-scale video networks are becoming increasingly important for a wide range of critical applications. The development of automated techniques for aggregating and interpreting information from multiple video streams in large-scale networks in real-life scenarios is very challenging. Research in video sensor networks is highly interdisciplinary and requires expertise from a variety of fields. The goal of this effort was to organize a two-day nationally recognized workshop in the domain of camera networks that brings together leading researchers from academia, industry and the government. The workshop was held at the University of California at Riverside on May 11-12, 2009. The workshop was attended by 75 participants. The workshop was sponsored by the US National Science Foundation, US Army Research Office and US Office of Naval Research. The workshop addressed critical interdisciplinary challenges at the intersection of large-scale video camera networks and distributed sensing, processing, communication and control; distributed video understanding; embedded real-time systems; graphics and simulation; and education. The recommendations of the workshop are summarized in the following order of topics: Video Processing and Video Understanding

  12. Addressing the Movement of a Freescale Robotic Car Using Neural Network

    Science.gov (United States)

    Horváth, Dušan; Cuninka, Peter

    2016-12-01

    This article deals with the control of a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by neural networks, with the weights (memory) of the neurons calculated by Hebbian learning from truth tables, i.e., learning with a teacher. Reflective infrared sensors serve as inputs. Experiments are used to compare two methods of line-tracking mobile robot control.
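
    A minimal sketch of Hebbian learning from a truth table for a line follower: the weight matrix accumulates input-output correlations over the table rows, then maps IR readings to motor commands. The three-sensor truth table here is an illustrative example, not the article's actual table.

```python
import numpy as np

# Truth table for a 3-sensor line follower (illustrative):
# inputs are bipolar IR readings (+1 = line seen), outputs drive two motors.
X = np.array([[ 1, -1, -1],   # line on the left   -> turn left
              [-1,  1, -1],   # line in the middle -> go straight
              [-1, -1,  1]])  # line on the right  -> turn right
Y = np.array([[-1,  1],       # [left motor, right motor] commands
              [ 1,  1],
              [ 1, -1]])

# Hebbian learning with a teacher: accumulate input-output correlations.
W = np.zeros((3, 2))
for x, y in zip(X, Y):
    W += np.outer(x, y)

def drive(sensors):
    """Map raw IR readings to motor commands through the learned weights."""
    return np.sign(sensors @ W)

print(drive(np.array([-1, 1, -1])))   # line centred -> both motors forward
```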

  13. Addressing the Movement of a Freescale Robotic Car Using Neural Network

    Directory of Open Access Journals (Sweden)

    Horváth Dušan

    2016-12-01

    Full Text Available This article deals with the control of a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by neural networks, with the weights (memory) of the neurons calculated by Hebbian learning from truth tables, i.e., learning with a teacher. Reflective infrared sensors serve as inputs. Experiments are used to compare two methods of line-tracking mobile robot control.

  14. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.

    Science.gov (United States)

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-03-20

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated into this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in the field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme achieves an obvious improvement in balancing the energy consumption and tracking accuracy over existing methods.
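
    The CH selection can be illustrated as a simple scoring rule over candidate nodes; the linear energy/distance trade-off below follows the abstract's description, but the exact weighting is an assumed placeholder, not the paper's optimization.

```python
import numpy as np

def select_cluster_head(energies, distances, e_max, d_max, alpha=0.6):
    """Score each candidate camera node and return the cluster-head index.

    Higher remaining energy raises the score; greater distance-to-target
    lowers it. `alpha` balances the two terms (assumed weighting).
    """
    score = alpha * energies / e_max - (1 - alpha) * distances / d_max
    return int(np.argmax(score))

# Candidate task-cluster nodes whose FOV currently covers the target.
energies = np.array([0.9, 0.4, 0.7])     # normalized remaining battery
distances = np.array([6.0, 2.0, 4.0])    # metres to the tracked target
print("cluster head:", select_cluster_head(energies, distances, 1.0, 10.0))
```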

  15. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks

    Science.gov (United States)

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-01-01

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated into this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in the field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme achieves an obvious improvement in balancing the energy consumption and tracking accuracy over existing methods. PMID:28335537

  16. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and present lessons learned in the building and daily usage of the network.

  17. Neural Network Based Reactive Navigation for Mobile Robot in Dynamic Environment

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.; Ripel, T.

    2013-01-01

    Roč. 198, č. 2013 (2013), s. 108-113 ISSN 1012-0394 Institutional research plan: CEZ:AV0Z20760514 Institutional support: RVO:61388998 Keywords: mobile robot * reactive navigation * artificial neural networks Subject RIV: JD - Computer Applications, Robotics

  18. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color, camera was mobile (installed in a car) but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between cameras and the LLS. A RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.

  19. Robotic platform for traveling on vertical piping network

    Science.gov (United States)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D

    2015-02-03

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  20. Camera characterization using back-propagation artificial neural network based on Munsell system

    Science.gov (United States)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera's output RGB signals do not directly correspond to the tristimulus values based on the CIE standard colorimetric observer; i.e., camera RGB is a device-dependent color space. To achieve accurate color information, we need to perform color characterization, which can be used to derive a transformation between camera RGB values and CIE XYZ values. In this paper we set up a back-propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with a total of 1267 patches, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The images of the Munsell Book of Color were taken in a light booth with a dark surround. The viewing/illuminating geometry was 0/45 using a D65 illuminant. The lighting illuminating the reference target needs to be as uniform as possible. The BP network had five layers (3-10-10-10-3), a structure selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples served as testing samples. Experimental results show that the mean color difference between the reproduced colors and target colors is 0.5 CIELAB color-difference units, smaller than the maximum acceptable color difference of 2 CIELAB color-difference units. The results satisfy applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production, and color reproduction across different media.
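
    A sketch of the characterization pipeline using scikit-learn's MLP in place of the authors' BP implementation, with a synthetic nonlinear RGB-to-XYZ mapping standing in for the measured Munsell data; a real evaluation would convert predictions to CIELAB and report the color difference, as the abstract does.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Stand-in data: in the paper, inputs are camera RGB values of Munsell
# patches and targets are their measured CIE XYZ values. Here a synthetic
# gamma-like camera nonlinearity plays that role so the sketch is runnable.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
rgb = rng.random((1267, 3))
xyz = (rgb ** 2.2) @ M.T

train, test = rgb[:1000], rgb[1000:]         # 1000 training / 267 testing
t_train, t_test = xyz[:1000], xyz[1000:]

# Three hidden layers of 10 units, mirroring the 3-10-10-10-3 topology.
net = MLPRegressor(hidden_layer_sizes=(10, 10, 10), activation="tanh",
                   max_iter=3000, random_state=0)
net.fit(train, t_train)
print("mean abs XYZ error:", np.mean(np.abs(net.predict(test) - t_test)))
```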

  1. Fractal gene regulatory networks for robust locomotion control of modular robots

    DEFF Research Database (Denmark)

    Zahadat, Payam; Christensen, David Johan; Schultz, Ulrik Pagh

    2010-01-01

    Designing controllers for modular robots is difficult due to the distributed and dynamic nature of the robots. In this paper fractal gene regulatory networks are evolved to control modular robots in a distributed way. Experiments with different morphologies of modular robots are performed...... and the results show good performance compared to previous results achieved using learning methods. Furthermore, some experiments are performed to investigate the evolvability of the achieved solutions in the case of module failure, and it is shown that the system is capable of coming up with new effective solutions....

  2. Development of a High Speed Camera Network to Monitor and Study Lightning (Project RAMMER)

    Science.gov (United States)

    Saraiva, A. V.; Pinto, O.; Santos, H. H.; Saba, M. M.

    2010-12-01

    This work describes the development and applications of a network of high-speed cameras for the observation and study of lightning flashes. Four high-speed cameras are being acquired to be part of the RAMMER network. They are capable of recording high-resolution videos of up to 1632 x 1200 pixels at 1000 frames per second. A robust system is being assembled to ensure the safe operation of the cameras in adverse weather conditions and to enable the recording of a larger number of lightning flashes per storm than the values reported to date. Since the amount of physical memory needed to record only 1 second of data is on the order of 3-4 GBytes, long recordings of thunderstorms are not possible, so a triggering system was conceived to address this problem and automatically record 2 seconds of data for each lightning flash. The triggering system is an optical/electromagnetic system that has been under test since September 2010, and the whole system is still being tested. The lightning information from the video recordings will be correlated with data from the sensors of the Brazilian Lightning Detection Network (BrasilDAT), from a network of fast electric field antennas, slow electric field antennas and field mills, as well as with data from the LMA (Lightning Mapping Array) to be installed in 2011 in the cities of Sao Paulo and Sao Jose dos Campos. The following objectives are envisaged: a) make the first three-dimensional reconstructions of the lightning channel with high-speed cameras and verify their dependence on the physical conditions associated with each storm; b) observe almost all CG lightning flashes of a single storm cloud in order to compare the physical characteristics of CG lightning flashes for different storms and their dependence on the physical conditions associated with each storm; c) evaluate the performance of the new sensors of the BrasilDAT network in different localities simultaneously. The schematics of the sensors will be shown here, with

  3. Sulfates, Clouds and Radiation Brazil (SCAR-B) AERONET (AErosol RObotic NETwork) Data

    Data.gov (United States)

    National Aeronautics and Space Administration — SCAR_B_AERONET data are Smoke, Clouds and Radiation Brazil (SCARB) Aerosol Robotic Network (AERONET) data for aerosol characterization.Smoke/Sulfates, Clouds and...

  4. Hand-Eye Calibration and Inverse Kinematics of Robot Arm using Neural Network

    DEFF Research Database (Denmark)

    Wu, Haiyan; Tizzano, Walter; Andersen, Thomas Timm

    2013-01-01

    tasks. This paper describes the theory and implementation of neural networks for hand-eye calibration and inverse kinematics of a six degrees of freedom robot arm equipped with a stereo vision system. The feedforward neural network and the network training with error propagation algorithm are applied...

  5. Neural Network Observer-Based Finite-Time Formation Control of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Caihong Zhang

    2014-01-01

    Full Text Available This paper addresses the leader-following formation problem of nonholonomic mobile robots. In the formation, only the pose (i.e., the position and direction angle) of the leader robot can be obtained by the follower. First, the leader-following formation is transformed into special trajectory tracking. Then, a neural network (NN) finite-time observer of the follower robot is designed to estimate the dynamics of the leader robot. Finally, finite-time formation control laws are developed for the follower robot to track the leader robot at the desired separation and bearing in finite time. The effectiveness of the proposed NN finite-time observer and the formation control laws is illustrated by both qualitative analysis and simulation results.

  6. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    Science.gov (United States)

    Sanders, Adam M. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor); Strawser, Philip A. (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  7. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Mo, Se Hyun [Amotech, Seoul (Korea, Republic of); Jeon, Young Pil [Samsung Electronics Co., Ltd. Suwon (Korea, Republic of); Park, Jong Ho [Seonam Univ., Namwon (Korea, Republic of); Chong, Kil To [Chon-buk Nat'l Univ., Junju (Korea, Republic of)]

    2017-07-15

    With the development of ICT technology, the indoor use of robots is increasing, and research on transportation, cleaning, and guidance robots that can be used now or that will broaden the scope of future use is advancing. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research addresses the issues related to self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of the encoder information of the mobile robot and the absolute space coordinate transformation information obtained from external video sources, such as the many CCTV cameras installed in the room. Furthermore, the vector field histogram method was applied as the path-traversal algorithm of the mobile robot system, and the results of the research were confirmed through experiments.

  8. Decentralized Control of Unmanned Aerial Robots for Wireless Airborne Communication Networks

    Directory of Open Access Journals (Sweden)

    Deok-Jin Lee

    2010-09-01

    Full Text Available This paper presents a cooperative control strategy for a team of aerial robotic vehicles to establish wireless airborne communication networks between distributed heterogeneous vehicles. Each aerial robot serves as a flying mobile sensor acting as a reconfigurable communication relay node, which enables communication networks with static or slow-moving nodes on the ground or ocean. For distributed optimal deployment of the aerial vehicles in the communication networks, an adaptive hill-climbing type decentralized control algorithm is developed to seek out local extrema for optimal localization of the vehicles. The sensor networks established by the decentralized cooperative control approach can adapt their configuration in response to signal strength as a function of the relative distance between the autonomous aerial robots and the distributed sensor nodes in the sensed environment. Simulation studies are conducted to evaluate the effectiveness of the proposed decentralized cooperative control technique for robust communication networks.
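
    The hill-climbing deployment can be pictured as each vehicle probing nearby positions and keeping the one with the strongest measured signal. This is a toy stand-in for the paper's adaptive algorithm; `signal` is a hypothetical RSSI measurement callback:

        import random

        def hill_climb_step(pos, signal, step=1.0, trials=8, rng=random):
            # probe a few nearby positions; keep the one with the strongest signal
            best_pos, best_val = pos, signal(pos)
            for _ in range(trials):
                cand = (pos[0] + rng.uniform(-step, step),
                        pos[1] + rng.uniform(-step, step))
                val = signal(cand)
                if val > best_val:
                    best_pos, best_val = cand, val
            return best_pos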

  9. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm.
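
    One way to picture such network filtering is a greedy reduction: drop cameras one at a time as long as every tie point stays observed by a minimum number of images. This is a sketch under assumed visibility data, not the authors' two filtering methods:

        def filter_network(visibility, min_views=3):
            # visibility: dict camera_id -> set of tie-point ids observed by that image
            cams = dict(visibility)

            def view_counts(cams):
                counts = {}
                for pts in cams.values():
                    for p in pts:
                        counts[p] = counts.get(p, 0) + 1
                return counts

            removed = True
            while removed:
                removed = False
                for cid in sorted(cams, key=lambda c: len(cams[c])):  # weakest images first
                    counts = view_counts(cams)
                    if all(counts[p] > min_views for p in cams[cid]):
                        del cams[cid]      # every point it sees stays sufficiently covered
                        removed = True
                        break
            return cams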

  10. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    Science.gov (United States)

    Almusawi, Ahmed R. J.; Dülger, L. Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in its input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle outputs. The new controller design has advantages over existing techniques in minimizing the position error in unconventional tasks and increasing the accuracy of the ANN's estimation of the robot's joint angles. PMID:27610129

  11. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242).

    Science.gov (United States)

    Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in its input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle outputs. The new controller design has advantages over existing techniques in minimizing the position error in unconventional tasks and increasing the accuracy of the ANN's estimation of the robot's joint angles.
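
    The distinguishing input pattern can be sketched as follows, with a toy untrained network standing in for the trained ANN; the 3-element position, 3-element orientation, and layer sizes are illustrative assumptions. The key idea is simply that the current joint angles are concatenated with the desired pose before the forward pass:

        import numpy as np

        def ann_input(desired_pos, desired_orient, current_joints):
            # key idea: feed the current joint configuration back in with the target pose
            return np.concatenate([desired_pos, desired_orient, current_joints])

        class MLP:
            def __init__(self, sizes, seed=0):
                rng = np.random.default_rng(seed)
                self.W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
                self.b = [np.zeros(m) for m in sizes[1:]]

            def forward(self, x):
                for W, b in zip(self.W[:-1], self.b[:-1]):
                    x = np.tanh(W @ x + b)
                return self.W[-1] @ x + self.b[-1]   # predicted joint angles

        net = MLP([3 + 3 + 6, 64, 6])   # [xyz, rpy, 6 current angles] -> 6 joints
        q_pred = net.forward(ann_input(np.zeros(3), np.zeros(3), np.zeros(6)))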

  12. Simply Coded Evolutionary Artificial Neural Networks on a Mobile Robot Control Problem

    Science.gov (United States)

    Katada, Yoshiaki; Hidaka, Takuya

    One of the advantages of evolutionary robotics over other approaches in embodied cognitive science is its parallel population search. Due to the population search, it takes a long time to evaluate all robots in a real environment. Thus, techniques that shorten this time are required for real robots to evolve in a real environment. This paper proposes using simply coded evolutionary artificial neural networks for mobile robot control to make the genetic search space as small as possible, and investigates their performance using simulated and real robots. Two types of genetic algorithm (GA) are employed, the standard GA and an extended GA, to achieve higher final fitnesses. The results suggest the benefits of the proposed method.

  13. Solution for Ill-Posed Inverse Kinematics of Robot Arm by Network Inversion

    Directory of Open Access Journals (Sweden)

    Takehiko Ogawa

    2010-01-01

    Full Text Available In the context of controlling a robot arm with multiple joints, the method of estimating the joint angles from the given end-effector coordinates is called inverse kinematics, which is a type of inverse problem. Network inversion has been proposed as a method for solving inverse problems by using a multilayer neural network. In this paper, network inversion is introduced as a method to solve the inverse kinematics problem of a robot arm with multiple joints, where the joint angles are estimated from the given end-effector coordinates. In general, inverse problems are affected by ill-posedness, which implies that the existence, uniqueness, and stability of their solutions are not guaranteed. In this paper, we show the effectiveness of applying network inversion with regularization, by which ill-posedness can be reduced, to the ill-posed inverse kinematics of an actual robot arm with multiple joints.
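
    The inversion-with-regularization idea can be sketched as gradient descent on the network input with the weights frozen. Below, a toy two-link forward kinematics stands in for the trained network, and a Tikhonov term with weight lam tempers the ill-posedness; all names and values are illustrative assumptions:

        import numpy as np

        def invert(forward, x0, y_target, lam=1e-2, lr=0.05, iters=500, eps=1e-5):
            # estimate an input x with forward(x) close to y_target ("weights" frozen)
            x = np.asarray(x0, dtype=float)

            def loss(z):
                e = forward(z) - y_target
                return 0.5 * e @ e + 0.5 * lam * z @ z   # Tikhonov-regularized error

            for _ in range(iters):
                g = np.array([(loss(x + eps * np.eye(len(x))[i]) -
                               loss(x - eps * np.eye(len(x))[i])) / (2 * eps)
                              for i in range(len(x))])    # numerical input gradient
                x -= lr * g
            return x

        # toy planar 2-link forward kinematics standing in for the trained network
        fk = lambda q: np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                                 np.sin(q[0]) + np.sin(q[0] + q[1])])
        q_est = invert(fk, x0=[0.3, 0.3], y_target=np.array([1.2, 1.0]))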

  14. Under-Actuated Robot Manipulator Positioning Control Using Artificial Neural Network Inversion Technique

    Directory of Open Access Journals (Sweden)

    Ali T. Hasan

    2012-01-01

    Full Text Available This paper is devoted to solving the positioning control problem of an underactuated robot manipulator. The artificial neural network inversion technique was used: a network representing the forward dynamics of the system was trained to learn the position of the passive joint over the workspace of a 2R underactuated robot. The weights obtained from the learning process were fixed, and the network was inverted to represent the inverse dynamics of the system; it was then used in the estimation phase to estimate the position of the passive joint for a new set of data on which the network had not previously been trained. The data used in this research were recorded experimentally from sensors fixed on the robot's joints in order to account for whatever uncertainties are present in the real world, such as ill-defined linkage parameters, link flexibility, and backlash in gear trains. The results were verified experimentally to show the success of the proposed control strategy.

  15. FRAMEWORK FOR AD HOC NETWORK COMMUNICATION IN MULTI-ROBOT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Khilda Slyusar

    2016-11-01

    Full Text Available Assume a team of mobile robots operating in environments where no communication infrastructure, such as routers or access points, is available. The robots then have to create a mobile ad hoc network, which provides communication on a peer-to-peer basis. The paper gives an overview of existing solutions for routing messages in such ad hoc networks between robots that are not directly connected and introduces the design of a software framework for realizing such communication. The feasibility of the proposed framework is demonstrated with the example of distributed multi-robot exploration of an a priori unknown environment. Testing of the developed functionality in an exploration scenario is based on the results of several experiments with various input conditions of the exploration process and various team sizes, and is described herein.

  16. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' evolving needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
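
    The register component might look something like the following sketch; the field names are assumptions for illustration, not the paper's schema. The point is metadata per camera plus change signaling toward the administrator or VCA configurator:

        import time
        from dataclasses import dataclass, field

        @dataclass
        class CameraRecord:
            camera_id: str
            intrinsics: dict                 # e.g. focal length, distortion
            extrinsics: dict                 # pose in the site frame
            scene: dict = field(default_factory=dict)   # lighting, crowd level, ...
            updated: float = field(default_factory=time.time)

        class Register:
            def __init__(self):
                self._records = {}

            def update(self, rec, on_change=None):
                old = self._records.get(rec.camera_id)
                self._records[rec.camera_id] = rec
                if old is not None and old.scene != rec.scene and on_change is not None:
                    on_change(rec)           # signal the VSS administrator / VCA configurator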

  17. Bayesian fusion of ceiling mounted camera and laser range finder on a mobile robot for people detection and localization

    NARCIS (Netherlands)

    Hu, N.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    Robust people detection and localization is a prerequisite for many applications where service robots interact with humans. Future robots will not be stand-alone any more but will operate in smart environments that are equipped with sensor systems for context awareness and activity recognition.

  18. A study of Time-varying Cost Parameter Estimation Methods in Traffic Networks for Mobile Robots

    OpenAIRE

    Das, Pragna; Xirgo, Lluís Ribas

    2015-01-01

    Industrial robust control systems built using automated guided vehicles (AGVs) require planning that depends on cost parameters, such as the time and energy of the mobile robots functioning in the system. This work addresses the problem of on-line traversal time identification and estimation for proper mobility of mobile robots on the systems' traffic networks. Several filtering and estimation methods have been investigated with respect to proper identification of the traversal time of arcs of systems'...

  19. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Science.gov (United States)

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio

    2014-01-01

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  20. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Directory of Open Access Journals (Sweden)

    Claudia Casellato

    Full Text Available The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  1. Visual odometry from omnidirectional camera

    OpenAIRE

    Jiří DIVIŠ

    2012-01-01

    We present a system that estimates the motion of a robot relying solely on images from an onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing a high-resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high-precision estimates in scenes where objects are far away from the camera. This is achieved by utilizing an omnidirectional camera that is able ...

  2. Robotics

    Indian Academy of Sciences (India)

    explaining how the robot's functioning is controlled. A brief description of the measurements involved is also discussed. Introduction. Basically, the developments in two other related subjects, instrumentation and control engineering, played a major role in aiding the rapid development of the field of robotics.

  3. Training a Network of Electronic Neurons for Control of a Mobile Robot

    Science.gov (United States)

    Vromen, T. G. M.; Steur, E.; Nijmeijer, H.

    An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated, and results of an experiment with a mobile robot driving completely autonomously in an unknown environment with obstacles are presented.
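
    A toy version of the principle, using two nonidentical leaky units rather than clusters of electronic neurons, with all constants illustrative: the diffusive coupling strength is raised until both units respond near-identically to the same input:

        import numpy as np

        def simulate(k, a=(1.0, 1.6), u=1.0, steps=2000, dt=0.01):
            # two nonidentical leaky units driven by the same input u,
            # coupled diffusively with strength k
            x = np.zeros(2)
            for _ in range(steps):
                dx0 = -a[0] * x[0] + u + k * (x[1] - x[0])
                dx1 = -a[1] * x[1] + u + k * (x[0] - x[1])
                x += dt * np.array([dx0, dx1])
            return x

        k = 0.0
        for _ in range(30):              # crude training: raise coupling until agreement
            x = simulate(k)
            if abs(x[0] - x[1]) < 1e-2:
                break
            k += 2.0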

  4. Research on the image of sweeping robot based on the Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Song Chang

    2017-01-01

    Full Text Available Based on the theory of Artificial Neural Networks and Kansei Engineering, the images of sweeping robots are formed using the content analysis method, and four kinds of sweeping robots with a strong influence on the market are proposed as experimental samples. Image questionnaires were compiled using the semantic differential method. 200 office workers, half men and half women, were chosen as survey respondents, and SPSS statistical software was used for data analysis. Afterwards, a BP Artificial Neural Network model was established in Matlab based on the questionnaire results, and an optimized design scheme with an image feature combination for sweeping robot products was generated on the basis of the BP Artificial Neural Network model. This study constructs the emotional demands at the image level and carries out experiments and statistical analysis, laying a solid foundation for the study of product image in theory and approach.

  5. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242

    Directory of Open Access Journals (Sweden)

    Ahmed R. J. Almusawi

    2016-01-01

    Full Text Available This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in its input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle outputs. The new controller design has advantages over existing techniques in minimizing the position error in unconventional tasks and increasing the accuracy of the ANN's estimation of the robot's joint angles.

  6. Sensor-coupled fractal gene regulatory networks for locomotion control of a modular snake robot

    DEFF Research Database (Denmark)

    Zahadat, Payam; Christensen, David Johan; Katebi, Serajeddin

    2013-01-01

    In this paper we study fractal gene regulatory network (FGRN) controllers based on sensory information. The FGRN controllers are evolved to control a snake robot consisting of seven simulated ATRON modules. Each module contains three tilt sensors which represent the direction of gravity...

  7. Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network

    Directory of Open Access Journals (Sweden)

    Guan-Chun Luh

    2007-04-01

    Full Text Available This paper proposes a potential field immune network (PFIN) for dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collision of a robot moving in the time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.
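
    The potential-field backbone of such navigation can be sketched as follows; the immune-network and fuzzy arbitration layers are omitted, and the gains and influence radius d0 are illustrative assumptions:

        import numpy as np

        def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=2.0, step=0.1):
            pos, goal = np.asarray(pos, float), np.asarray(goal, float)
            force = k_att * (goal - pos)                  # attractive term
            for ob in obstacles:
                diff = pos - np.asarray(ob, float)
                d = np.linalg.norm(diff)
                if 1e-9 < d < d0:                         # repulsion inside influence radius d0
                    force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
            n = np.linalg.norm(force)
            return pos + step * force / n if n > 1e-9 else pos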

  8. Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution.

    Science.gov (United States)

    Boteanu, Adrian; St Clair, Aaron; Mohseni-Kabir, Anahita; Saldanha, Carl; Chernova, Sonia

    2016-12-01

    This work seeks to leverage semantic networks containing millions of entries encoding assertions of commonsense knowledge to enable improvements in robot task execution and learning. The specific application we explore in this project is object substitution in the context of task adaptation. Humans easily adapt their plans to compensate for missing items in day-to-day tasks, substituting a wrap for bread when making a sandwich, or stirring pasta with a fork when out of spoons. Robot plan execution, however, is far less robust, with missing objects typically leading to failure if the robot is not aware of alternatives. In this article, we contribute a context-aware algorithm that leverages the linguistic information embedded in the task description to identify candidate substitution objects without reliance on explicit object affordance information. Specifically, we show that the task context provided by the task labels within the action structure of a task plan can be leveraged to disambiguate information within a noisy large-scale semantic network containing hundreds of potential object candidates to identify successful object substitutions with high accuracy. We present two extensive evaluations of our work on both abstract and real-world robot tasks, showing that the substitutions made by our system are valid, accepted by users, and lead to a statistically significant reduction in robot learning time. In addition, we report the outcomes of testing our approach with a large number of crowd workers interacting with a robot in real time.

  9. Self-organization of spiking neural network that generates autonomous behavior in a real mobile robot.

    Science.gov (United States)

    Alnajjar, Fady; Murase, Kazuyuki

    2006-08-01

    In this paper, we propose a self-organization algorithm for a spiking neural network (SNN) applicable to autonomous robots for the generation of adaptive and goal-directed behavior. First, we formulated an SNN model whose inputs and outputs were analog and whose hidden units were interconnected with each other. Next, we implemented it in a miniature mobile robot, the Khepera. In order to see whether or not a solution for the given tasks exists with the SNN, the robot was evolved with a genetic algorithm in the environment. The robot acquired the obstacle avoidance and navigation tasks successfully, demonstrating the existence of a solution. After that, a self-organization algorithm based on use-dependent synaptic potentiation and depotentiation at the synapses from the input layer to the hidden layer and from the hidden layer to the output layer was formulated and implemented in the robot. In the environment, the robot incrementally organized the network, and the given tasks were successfully performed. The time needed to acquire the desired adaptive and goal-directed behavior using the proposed self-organization method was much less than that with genetic evolution, approximately one fifth.
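
    The use-dependent rule can be caricatured in a few lines; this is a simplification of the paper's algorithm, and the constants are illustrative. A synapse is potentiated when its use accompanies firing of the postsynaptic unit and depotentiated when it does not:

        def update_weight(w, pre_fired, post_fired, eta=0.05, decay=0.01, w_max=1.0):
            # use-dependent synaptic potentiation / depotentiation
            if pre_fired and post_fired:
                return min(w + eta, w_max)   # potentiate a synapse whose use drove firing
            if pre_fired:
                return max(w - decay, 0.0)   # depotentiate use that did not drive firing
            return w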

  10. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  11. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  12. Learning from adaptive neural network output feedback control of a unicycle-type mobile robot.

    Science.gov (United States)

    Zeng, Wei; Wang, Qinghui; Liu, Fenglin; Wang, Ying

    2016-03-01

    This paper studies learning from adaptive neural network (NN) output feedback control of nonholonomic unicycle-type mobile robots. The major difficulties are caused by the unknown robot system dynamics and the unmeasurable states. To overcome these difficulties, a new adaptive control scheme is proposed including designing a new adaptive NN output feedback controller and two high-gain observers. It is shown that the stability of the closed-loop robot system and the convergence of tracking errors are guaranteed. The unknown robot system dynamics can be approximated by radial basis function NNs. When repeating same or similar control tasks, the learned knowledge can be recalled and reused to achieve guaranteed stability and better control performance, thereby avoiding the tremendous repeated training process of NNs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    Directory of Open Access Journals (Sweden)

    Rene de Jesus Romero-Troncoso

    2011-04-01

    Full Text Available Flexible manipulator robots have a wide industrial application. Robot performance requires sensing its position and orientation adequately, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents the online measurement of the joint's angular position and the online forward kinematics estimation. The contribution of this work is to propose a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and to fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. With the aim of evaluating the smart sensor network performance, three real-operation-oriented paths are executed and monitored in a 6-degree-of-freedom robot.

  14. Real-time robot path planning based on a modified pulse-coupled neural network model.

    Science.gov (United States)

    Qu, Hong; Yang, Simon X; Willms, Allan R; Yi, Zhang

    2009-11-01

    This paper presents a modified pulse-coupled neural network (MPCNN) model for real-time collision-free path planning of mobile robots in nonstationary environments. The proposed neural network for robots is topologically organized with only local lateral connections among neurons. It works in dynamic environments and requires no prior knowledge of target or barrier movements. The target neuron fires first, and then the firing event spreads out, through the lateral connections among the neurons, like the propagation of a wave. Obstacles have no connections to their neighbors. Each neuron records its parent, that is, the neighbor that caused it to fire. The real-time optimal path is then the sequence of parents from the robot to the target. In a static case where the barriers and targets are stationary, this paper proves that the generated wave in the network spreads outward with travel times proportional to the linking strength among neurons. Thus, the generated path is always the global shortest path from the robot to the target. In addition, each neuron in the proposed model can propagate a firing event to its neighboring neuron without any comparing computations. The proposed model is applied to generate collision-free paths for a mobile robot to solve a maze-type problem, to circumvent concave U-shaped obstacles, and to track a moving target in an environment with varying obstacles. The effectiveness and efficiency of the proposed approach is demonstrated through simulation and comparison studies.
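
    The parent-following mechanism is essentially a wavefront expansion from the target. A minimal grid sketch follows, assuming uniform linking strengths so that plain breadth-first search stands in for the spiking dynamics; the path is recovered by following parents from the robot back to the target:

        from collections import deque

        def plan(grid, target, robot):
            # grid[r][c] == 1 marks an obstacle; the wave expands outward from the target
            rows, cols = len(grid), len(grid[0])
            parent = {target: None}
            q = deque([target])
            while q:
                cell = q.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (cell[0] + dr, cell[1] + dc)
                    if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                            and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                        parent[nxt] = cell   # record which neighbor's firing reached it first
                        q.append(nxt)
            if robot not in parent:
                return None                  # no collision-free path exists
            path, cell = [], robot
            while cell is not None:          # follow parents from the robot to the target
                path.append(cell)
                cell = parent[cell]
            return path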

  15. An Artificial Neural Network Based Robot Controller that Uses Rat’s Brain Signals

    Directory of Open Access Journals (Sweden)

    Marsel Mano

    2013-04-01

    Full Text Available Brain machine interface (BMI) has been proposed as a novel technique to control prosthetic devices aimed at restoring motor functions in paralyzed patients. In this paper, we propose a neural network based controller that maps a rat's brain signals and transforms them into robot movement. First, the rat is trained to move the robot by pressing the right and left levers in order to get food. Next, we collect brain signals with four implanted electrodes, two in the motor cortex and two in the somatosensory cortex area. The collected data are used to train and evaluate different artificial neural controllers. The trained neural controllers are employed online to map brain signals and transform them into robot motion. Offline and online classification results of the rat's brain signals show that the Radial Basis Function Neural Network (RBFNN) outperforms other neural networks. In addition, online robot control results show that even with a limited number of electrodes, the robot motion generated by the RBFNN matched the motion generated by the left and right lever positions.

  16. Fused smart sensor network for multi-axis forward kinematics estimation in industrial robots.

    Science.gov (United States)

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; Romero-Troncoso, Rene de Jesus

    2011-01-01

    Flexible manipulator robots have a wide industrial application. Robot performance requires sensing its position and orientation adequately, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents the online measurement of the joint's angular position and the online forward kinematics estimation. The contribution of this work is to propose a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and to fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. With the aim of evaluating the smart sensor network performance, three real-operation-oriented paths are executed and monitored in a 6-degree of freedom robot.
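
    The per-joint fusion can be sketched as a one-dimensional Kalman filter: the encoder increment drives the prediction and the accelerometer-derived absolute angle (from the gravity vector) drives the correction. The noise values here are illustrative assumptions, not the paper's:

        def kalman_joint(theta, P, d_enc, z_acc, q=1e-4, r=1e-2):
            # predict with the encoder increment, correct with the
            # accelerometer-derived absolute joint angle
            theta, P = theta + d_enc, P + q
            K = P / (P + r)                  # Kalman gain
            theta = theta + K * (z_acc - theta)
            P = (1 - K) * P
            return theta, P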

  17. Experimental Studies of Neural Network Control for One-Wheel Mobile Robot

    Directory of Open Access Journals (Sweden)

    P. K. Kim

    2012-01-01

    Full Text Available This paper presents the development and control of a disc-type one-wheel mobile robot, called GYROBO. Several models of the one-wheel mobile robot were designed, developed, and controlled. The current version of GYROBO is successfully balanced and controlled to follow a straight line. GYROBO has three actuators to balance and move: two actuators are used for balancing control by virtue of the gyro effect, and one actuator is used for driving movements. Since space is limited and weight balance is an important factor for successful balancing control, careful mechanical design is required. To compensate for uncertainties in the robot dynamics, a neural network is added to the non-model-based PD-controlled system. The reference compensation technique (RCT) is used for the neural network controller to help GYROBO improve its balancing and tracking performance. Experimental studies of a self-balancing task and a line-tracking task are conducted to demonstrate the control performance of GYROBO.

  18. Hybrid Control of Long-Endurance Aerial Robotic Vehicles for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Deok-Jin Lee

    2011-06-01

    Full Text Available This paper presents an effective hybrid control approach for building stable wireless sensor networks between heterogeneous unmanned vehicles using long-endurance aerial robotic vehicles. For optimal deployment of the aerial vehicles in communication networks, a gradient-climbing based self-estimating control algorithm is utilized to locate the aerial platforms so as to maintain maximum communication throughputs between distributed multiple nodes. The autonomous aerial robots, which function as communication relay nodes, extract and harvest thermal energy from the atmospheric environment to improve their flight endurance within specified communication coverage areas. The rapidly-deployable sensor networks with the high-endurance aerial vehicles can be used for various application areas including environment monitoring, surveillance, tracking, and decision-making support. Flight tests and simulation studies are conducted to evaluate the effectiveness of the proposed hybrid control technique for robust communication networks.

  19. The middleware architecture supports heterogeneous network systems for module-based personal robot system

    Science.gov (United States)

    Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun

    2005-12-01

    In the personal robot system presently being developed, the internal architecture consists of modules with separate functions that are connected through a heterogeneous network system. This module-based architecture supports specialization and division of labor in both design and implementation; as an effect of this architecture, it can reduce development time and costs for modules. Furthermore, because every module is connected to the other modules through network systems, integration is easy and a synergy effect can be obtained by applying advanced mutual functions through co-working modules. In this architecture, one of the most important technologies is the network middleware that takes charge of communications among the modules connected through heterogeneous network systems. The network middleware acts like the human nervous system inside the personal robot system; it relays, transmits, and translates information appropriately between modules, similar to human organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses some mechanisms of our network middleware for intercommunication and routing among modules, and methods for real-time data communication and fault-tolerant network service. We have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology on heterogeneous networks for these goals. The main theme is how routing information is created in our network middleware. Additionally, we appended some features based on this routing information table. We are now designing and building a new version of the network middleware (which we call 'OO M/W') that supports object-oriented operation, and are updating the program sources themselves for an object-oriented architecture. It is lighter and faster, and can support more operating systems and heterogeneous network systems.

  20. Developmental word grounding through a growing neural network with a humanoid robot.

    Science.gov (United States)

    He, Xiaoyuan; Kojima, Ryo; Hasegawa, Osamu

    2007-04-01

    This paper presents an unsupervised approach for integrating speech and visual information without using any prepared data. The approach enables a humanoid robot, Incremental Knowledge Robot 1 (IKR1), to learn word meanings. The approach is different from most existing approaches in that the robot learns online from audio-visual input, rather than from stationary data provided in advance. In addition, the robot is capable of learning incrementally, which is considered to be indispensable to lifelong learning. A noise-robust self-organized growing neural network is developed to represent the topological structure of unsupervised online data. We are also developing an active-learning mechanism, called "desire for knowledge," to let the robot select the object for which it possesses the least information for subsequent learning. Experimental results show that the approach raises the efficiency of the learning process. Based on audio and visual data, a mental model is constructed for the robot, which forms a basis for constructing IKR1's inner world and builds a bridge connecting the learned concepts with current and past scenes.

  1. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    Science.gov (United States)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, large soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy for MRI-guided procedures over the last decade, a unified method to develop high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness and modularity, that work inside a closed-bore MRI scanner still remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI-compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.

  2. Precise Localization and Formation Control of Swarm Robots via Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Han Wu

    2014-01-01

    Full Text Available Precise localization and formation control are among the key technologies for achieving coordination and control of swarm robots, and they are currently a bottleneck for practical applications of swarm robotic systems. Aiming at overcoming the limited individual perception and the difficulty of achieving precise localization and formation, a localization approach combining dead reckoning (DR) with wireless sensor network (WSN)-based methods is proposed in this paper. Two kinds of WSN localization technologies are adopted in this paper, that is, ZigBee-based RSSI (received signal strength indication) global localization and electronic tag floors for calibration of local positioning. First, the DR localization information is combined with the ZigBee-based RSSI position information using the Kalman filter method to achieve precise global localization and maintain the robot formation. Then the electronic tag floors provide the robots with their precise coordinates in some local areas and enable the robot swarm to calibrate its formation by reducing the accumulated position errors. Hence, the overall performance of localization and formation control of the swarm robotic system is improved. Both simulation results and experimental results on a real schematic system are given to demonstrate the success of the proposed approach.
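
    The WSN half of the approach can be sketched with a log-distance path-loss model plus linearized trilateration. This is a generic stand-in, not the paper's calibration; rssi0 and the path-loss exponent n are assumptions:

        import numpy as np

        def rssi_to_dist(rssi, rssi0=-40.0, n=2.0):
            # log-distance path loss: rssi = rssi0 - 10 * n * log10(d)
            return 10 ** ((rssi0 - rssi) / (10 * n))

        def trilaterate(anchors, dists):
            # linearized least squares against the last anchor as reference
            anchors = np.asarray(anchors, float)
            d = np.asarray(dists, float)
            ref, dref = anchors[-1], d[-1]
            A = 2 * (ref - anchors[:-1])
            b = d[:-1]**2 - dref**2 - np.sum(anchors[:-1]**2, axis=1) + ref @ ref
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x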

  3. Samba: a real-time motion capture system using wireless camera sensor networks.

    Science.gov (United States)

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  4. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hyeongseok Oh

    2014-03-01

    Full Text Available There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject’s body. The performance of the motion capture system is evaluated extensively in experiments.

  5. Polish and European SST Assets: the Solaris-Panoptes Global Network of Robotic Telescopes and the Borowiec Satellite Laser Ranging System

    Science.gov (United States)

    Konacki, M.; Lejba, P.; Sybilski, P.; Pawłaszek, R.; Kozłowski, S.; Suchodolski, T.; Litwicki, M.; Kolb, U.; Burwitz, V.; Baader, J.; Groot, P.; Bloemen, S.; Ratajczak, M.; Helminiak, K.; Borek, R.; Chodosiewicz, P.

    2016-09-01

    We present the assets of the Nicolaus Copernicus Astronomical Center, Space Research Center (both of the Polish Academy of Sciences), two Polish companies Sybilla Technologies, Cillium Engineering and a non-profit research foundation Baltic Institute of Technology. These assets are enhanced by telescopes belonging to The Open University (UK), the Max Planck Institute for Extraterrestrial Physics and in the future the Radboud University. They consist of the Solaris-Panoptes global network of optical robotic telescopes and the satellite laser ranging station in Borowiec, Poland. These assets will contribute to the Polish and European Space Surveillance and Tracking (SST) program. The Solaris component is composed of four autonomous observatories in the Southern Hemisphere. Solaris nodes are located at the South African Astronomical Observatory (Solaris-1 and Solaris-2), Siding Spring Observatory, Australia (Solaris-3) and Complejo Astronomico El Leoncito, Argentina (Solaris-4). They are equipped with 0.5-m telescopes on ASA DDM-160 direct drive mounts, Andor iKon-L cameras and housed in 3.5-m Baader Planetarium (BP) clamshell domes. The Panoptes component is a network of telescopes operated by software from Sybilla Technologies. It currently consists of 4 telescopes at three locations, all on GM4000 mounts. One 0.36-m (Panoptes-COAST, STL-1001E camera, 3.5-m BP clamshell dome) and one 0.43-m (Panoptes-PIRATE, FLI 16803 camera, 4.5-m BP clamshell dome, with planned exchange to 0.63-m) telescope are located at the Teide Observatory (Tenerife, Canary Islands), one 0.6-m (Panoptes-COG, SBIG STX 16803 camera, 4.5-m BP clamshell dome) telescope in Garching, Germany and one 0.5-m (Panoptes-MAM, FLI 16803 camera, 4.5-m BP slit dome) in Mammendorf, Germany. Panoptes-COAST and Panoptes-PIRATE are owned by The Open University (UK). Panoptes-COG is owned by the Max Planck Institute

  6. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today use mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have recently been developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range reading data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensor multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
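
    For the stereo part, recall the pinhole relation behind the baseline trade-off: depth Z = f·B/d, so a wider baseline B gives finer depth resolution while a narrower one eases correspondence matching. A one-line sketch with illustrative variable names:

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            # pinhole stereo: Z = f * B / d; wider baseline -> finer depth resolution,
            # narrower baseline -> easier correspondence matching
            return focal_px * baseline_m / disparity_px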

  7. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    Science.gov (United States)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise are investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.

  8. The use of time-of-flight camera for navigating robots in computer-aided surgery: monitoring the soft tissue envelope of minimally invasive hip approach in a cadaver study.

    Science.gov (United States)

    Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael

    2014-12-01

    Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.

  9. Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network

    Science.gov (United States)

    Miguel L. Villarreal; Leila Gass; Laura Norman; Joel B. Sankey; Cynthia S. A. Wallace; Dennis McMacken; Jack L. Childs; Roy Petrakis

    2013-01-01

    Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus...

  10. The dynamic wave expansion neural network model for robot motion planning in time-varying environments.

    Science.gov (United States)

    Lebedev, Dmitry V; Steil, Jochen J; Ritter, Helge J

    2005-04-01

    We introduce a new type of neural network--the dynamic wave expansion neural network (DWENN)--for path generation in a dynamic environment for both mobile robots and robotic manipulators. Our model is parameter-free, computationally efficient, and its complexity does not explicitly depend on the dimensionality of the configuration space. We give a review of existing neural networks for trajectory generation in a time-varying domain, which are compared to the presented model. We demonstrate several representative simulation comparisons as well as the results of long-run comparisons in a number of randomly generated scenes, which reveal that the proposed model yields predominantly shorter paths, especially in highly dynamic environments.
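
    To make the wave-expansion idea concrete, the sketch below runs a static breadth-first wavefront from the goal over a 4-connected grid and then descends the cost map. This is a conventional wavefront planner under simplifying assumptions, not the DWENN itself, whose waves are propagated dynamically as obstacles move.

      from collections import deque
      import numpy as np

      def wavefront(grid, goal):
          """Breadth-first wave expansion from the goal (0 = free, 1 = obstacle)."""
          cost = np.full(grid.shape, np.inf)
          cost[goal] = 0
          queue = deque([goal])
          while queue:
              r, c = queue.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                          and grid[nr, nc] == 0 and cost[nr, nc] == np.inf):
                      cost[nr, nc] = cost[r, c] + 1
                      queue.append((nr, nc))
          return cost

      def extract_path(cost, start):
          """Follow strictly decreasing cost values from start down to the goal."""
          path, cur = [start], start
          while cost[cur] > 0:
              r, c = cur
              nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < cost.shape[0] and 0 <= c + dc < cost.shape[1]]
              cur = min(nbrs, key=lambda p: cost[p])
              path.append(cur)
          return path

      grid = np.zeros((5, 5), dtype=int)
      grid[2, 1:4] = 1                                    # a wall with gaps at the sides
      print(extract_path(wavefront(grid, (4, 4)), (0, 0)))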

  11. Road Networks Winter Risk Estimation Using On-Board Uncooled Infrared Camera for Surface Temperature Measurements over Two Lanes

    Directory of Open Access Journals (Sweden)

    M. Marchetti

    2011-01-01

    Full Text Available Thermal mapping has been implemented since the late eighties to establish the susceptibility of road networks to ice occurrence, using measurements from a radiometer and some atmospheric parameters. Measurements are usually made before dawn during wintertime, when the road energy has dissipated. The objective of this study was to establish whether an infrared camera could improve the determination of road ice susceptibility, to build a new winter risk index, to improve the measurement rate, and to analyze its consistency across seasons and infrastructure environments. Analysis of data obtained from the conventional, approved radiometer sensing technique and from the infrared camera showed great similarities. The comparison yielded promising perspectives. The measurement rate for analysing a given road network could be increased by a factor of two.

  12. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... From the Federal Register Online via the Government Publishing Office SECURITIES AND EXCHANGE COMMISSION Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision... concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.) because it has not...

  13. MSFC Robotic Lunar Lander Testbed and Current Status of the International Lunar Network (ILN) Anchor Nodes Mission

    Science.gov (United States)

    Cohen, Barbara; Bassler, Julie; Harris, Danny; Morse, Brian; Reed, Cheryl; Kirby, Karen; Eng, Douglas

    2009-01-01

    The lunar lander robotic exploration testbed at Marshall Space Flight Center provides a test environment for robotic lander test articles, components and algorithms to reduce the risk of airless-body designs during lunar landing. Also included is a chart comparing the two different types of anchor nodes for the International Lunar Network (ILN): solar/battery and the Advanced Stirling Radioisotope Generator (ASRG).

  14. Neural Network Control for the Linear Motion of a Spherical Mobile Robot

    Directory of Open Access Journals (Sweden)

    Yao Cai

    2011-09-01

    Full Text Available This paper discusses the stabilization and position tracking control of the linear motion of an underactuated spherical robot. By considering the actuator dynamics, a complete dynamic model of the robot is deduced: a complex third-order nonlinear differential system in two variables, which are strongly coupled due to the mechanical structure of the robot. Unlike traditional treatments, no linearization is applied to this system; instead, a single-input multiple-output PID (SIMO_PID) controller is designed, adopting a six-input single-output CMAC_GBF (Cerebellar Model Articulation Controller with General Basis Function) neural network to compensate for the actuator nonlinearity and the credit assignment (CA) learning method to obtain faster convergence of the CMAC_GBF. The proposed controller is generalizable to other single-input multiple-output systems with good real-time capability. Simulations in Matlab are used to validate the control effects.

  15. Networked Control System for the Guidance of a Four-Wheel Steering Agricultural Robotic Platform

    Directory of Open Access Journals (Sweden)

    Eduardo Paciência Godoy

    2012-01-01

    Full Text Available A current trend in the agricultural area is the development of mobile robots and autonomous vehicles for precision agriculture (PA). One of the major challenges in the design of these robots is the development of the electronic architecture for the control of the devices. In a joint project among research institutions and a private company in Brazil, a multifunctional robotic platform for information acquisition in PA is being designed. This platform's main characteristics are four-wheel propulsion and independent steering, adjustable width, a span of 1.80 m in height, a diesel engine, a hydraulic system, and a CAN-based networked control system (NCS). This paper presents an NCS solution for platform guidance by distributed control of the four-wheel hydraulic steering. The control strategy, centered on robot manipulator control theory, is based on the difference between the desired and actual position while considering the angular speed of the wheels. The results demonstrate that the NCS was simple and efficient, providing suitable steering performance for platform guidance. Despite its simplicity, the NCS solution also overcame several control challenges encountered in the robot guidance system design, such as the hydraulic system delay, nonlinearities in the steering actuators, and inertia in the steering system due to the friction of different terrains.

  16. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs to the cloud. Robots are limited in terms of computational capacity, memory and storage, whereas the cloud provides practically unlimited computation power, memory and storage, and especially opportunities for collaboration. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotic platforms and standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM) and monitoring.

  17. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network actually formed the attractor structure representing both language–behavior relationships and the task’s temporal pattern in its internal dynamics. In the dynamics, language–behavior mapping was achieved by the branching structure. Repetition of the human’s instruction and the robot’s behavioral response was represented as a cyclic structure, while waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases. PMID:27471463

  18. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as the biometric cue and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble construction. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  19. Region-wide search and pursuit system using networked intelligent cameras

    Science.gov (United States)

    Komiya, Kazumi; Irisawa, Kouji

    2001-11-01

    This paper reports a study on a new region-wide search and pursuit system for missing objects such as stolen cars, wandering people, etc. Using image matching based on object properties such as color and shape, each intelligent camera can search for the object and then transmit the properties to the next camera so that the object is pursued successively. The experimental results show that the system can identify 2 target cars among 40 cars under changing environmental conditions. Based on these data, the proposed system accomplishes the fundamental search-and-pursuit step. Finally, topics for further work have been identified, such as accurate shape extraction, camera architectures for high-speed processing, and multimedia attributes such as sound.

  20. Occlusions in Camera Networks and Vision: The Bridge between Topological Recovery and Metric Reconstruction

    Science.gov (United States)

    2009-05-18

    sequence of a hand in front of a moving Macbeth board are shown. The detection of occlusions using local topological invariants is made with respect...where no inter-relationship between the cameras is known. It is natural to ask what the spatial relationship between cameras is. For surveillance...Overlap between the resulting regions is found by concurrent detections of markers. Hence, it is possible to build a simplicial complex as before. A natural

  1. Design of Optimal Hybrid Position/Force Controller for a Robot Manipulator Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Vikas Panwar

    2007-01-01

    Full Text Available The application of quadratic optimization and a sliding-mode approach is considered for hybrid position and force control of a robot manipulator. The dynamic model of the manipulator is transformed into a state-space model containing two sets of state variables, one describing the constrained motion and the other the unconstrained motion. The optimal feedback control law is derived by solving the matrix differential Riccati equation, which is obtained using Hamilton-Jacobi-Bellman optimization. The optimal feedback control law is shown to be globally exponentially stable using a Lyapunov function approach. The dynamic model uncertainties are compensated with a feedforward neural network. The neural network requires no preliminary offline training and is trained with online weight tuning algorithms that guarantee small errors and bounded control signals. The application of the derived control law is demonstrated through simulation with a 4-DOF robot manipulator tracking an elliptical planar constrained surface while applying the desired force on the surface.
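
    For the steady-state special case of the optimal feedback law described above, the Riccati equation can be solved directly with SciPy; the sketch below computes an LQR gain for a double-integrator joint model. The matrices are illustrative, and the paper solves the finite-horizon differential Riccati equation rather than this algebraic one.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Double-integrator joint model: x = [position, velocity], u = torque.
      A = np.array([[0.0, 1.0], [0.0, 0.0]])
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])   # state penalty (illustrative)
      R = np.array([[0.1]])      # control penalty (illustrative)

      P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
      K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
      print("LQR gain:", K)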

  2. Road network modeling in open source GIS to manage the navigation of autonomous robots

    Science.gov (United States)

    Mangiameli, Michele; Muscato, Giovanni; Mussumeci, Giuseppe

    2013-10-01

    The autonomous navigation of a robot can be accomplished through the assignment of a sequence of waypoints previously identified in the territory to be explored. In general, the starting point is a vector graph of the network of possible paths. The vector graph can be directly available in the case of actual road networks, or it can be modeled, e.g., on the basis of cartographic sources or, even better, of a digital terrain model (DTM). In this paper we present software procedures developed in GRASS GIS, PostGIS and QGIS environments to identify, model, and visualize a road graph and to extract and normalize sequences of waypoints that can be transferred to a robot for its autonomous navigation.
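
    A hedged sketch of the waypoint-extraction step: given a road polyline taken from the GIS graph, evenly spaced waypoints can be sampled along it, for instance with Shapely. The coordinates and the 25 m spacing are made-up values, not those used by the authors.

      from shapely.geometry import LineString

      # A road segment from the vector graph (coordinates are illustrative).
      road = LineString([(0, 0), (50, 10), (120, 10), (160, 40)])

      spacing = 25.0                       # metres between consecutive waypoints
      n = int(road.length // spacing)
      waypoints = [road.interpolate(i * spacing) for i in range(n + 1)]
      print([(round(p.x, 1), round(p.y, 1)) for p in waypoints])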

  3. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements...... correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...... robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables...

  4. An artificial neural network controller based on MPSO-BFGS hybrid optimization for spherical flying robot

    Science.gov (United States)

    Liu, Xiaolin; Li, Lanfei; Sun, Hanxu

    2017-12-01

    Spherical flying robots can perform various tasks in complex and varied environments, reducing labor costs. However, it is difficult to guarantee the stability of a spherical flying robot under strong coupling and time-varying disturbance. In this paper, an artificial neural network controller (ANNC) based on the MPSO-BFGS hybrid optimization algorithm is proposed. The MPSO algorithm is used to optimize the initial weights of the controller to avoid local optimal solutions, and the BFGS algorithm is introduced to improve the convergence of the network. We use the Lyapunov method to analyze the stability of the ANNC. The controller is simulated under nonlinear coupling disturbance. The experimental results show that the proposed controller reaches the expected value in a shorter time than the other considered methods.
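
    The hybrid scheme above can be pictured as a global swarm search followed by a quasi-Newton refinement. The sketch below couples a bare-bones PSO with SciPy's BFGS on a toy loss; the paper's MPSO modifications and the actual controller training loss are not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      def pso(f, dim, n=20, iters=50, seed=0):
          """Bare-bones particle swarm; returns the best particle found."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1, 1, (n, dim))
          v = np.zeros((n, dim))
          pbest, pval = x.copy(), np.array([f(p) for p in x])
          for _ in range(iters):
              g = pbest[np.argmin(pval)]                     # swarm-best position
              v = (0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x)
                           + 1.5 * rng.random((n, dim)) * (g - x))
              x = x + v
              fx = np.array([f(p) for p in x])
              better = fx < pval
              pbest[better], pval[better] = x[better], fx[better]
          return pbest[np.argmin(pval)]

      loss = lambda w: np.sum((w - np.array([0.3, -0.7])) ** 2)   # toy training loss
      w0 = pso(loss, dim=2)                        # global search for initial weights
      w = minimize(loss, w0, method="BFGS").x      # quasi-Newton refinement
      print(w)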

  5. Video-based convolutional neural networks for activity recognition from robot-centric videos

    Science.gov (United States)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos, including CNNs with 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  6. A mobile robots experimental environment with event-based wireless communication.

    Science.gov (United States)

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-07-22

    An experimental platform for communication among a set of mobile robots through a wireless network has been developed. The mobile robots obtain their positions through a camera that acts as the sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.
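
    The event-triggering rule can be illustrated in a few lines: a robot's camera-measured position is broadcast only when it has drifted sufficiently from the last transmitted value. The threshold and positions below are made-up, and the paper's actual trigger condition may differ.

      import numpy as np

      def should_transmit(x, x_last_sent, threshold=0.05):
          """Trigger an event when the state error since the last message is large."""
          return np.linalg.norm(x - x_last_sent) > threshold

      x_last = np.array([0.0, 0.0])
      for x in (np.array([0.01, 0.02]), np.array([0.04, 0.05]), np.array([0.10, 0.00])):
          if should_transmit(x, x_last):
              x_last = x        # the ZigBee packet would be sent here
              print("transmit", x)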

  7. Cyber-physical approach to the network-centric robotics control task

    Science.gov (United States)

    Muliukha, Vladimir; Ilyashenko, Alexander; Zaborovsky, Vladimir; Lukashin, Alexey

    2016-10-01

    Complex engineering tasks concerning the control of groups of mobile robots remain poorly developed. In our work we formalize them using a cyber-physical approach, which extends the range of engineering and physical methods for the design of complex technical objects by studying the informational aspects of communication and interaction between objects and with an external environment [1]. The paper analyzes network-centric methods for the control of cyber-physical objects. Robots, or cyber-physical objects, interact with each other by transmitting information via computer networks using a preemptive queueing system and a randomized push-out mechanism [2],[3]. The main field of application for the results of our work is space robotics. The selection of cyber-physical systems as a special class of designed objects is due to the necessity of integrating various components responsible for computing, communication and control processes. Network-centric solutions allow using universal means of organizing information exchange to integrate different technologies into the control system.

  8. Learning Efficiency of Consciousness System for Robot Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Osama Shoubaky

    2014-12-01

    Full Text Available This paper presents the learning efficiency of a consciousness system for a robot using an artificial neural network. The proposed consciousness system consists of a reason system, a feeling system and an association system, each modeled using the Module of Nerves for Advanced Dynamics (ModNAD). A supervised artificial neural network with back-propagation is used to train the ModNAD. The reason system imitates behaviour and represents the condition of self and others. The feeling system represents sensation and emotion. The association system represents the behaviour of self and determines whether self is comfortable or not. A robot is asked to perform cognition tasks using the consciousness system. The learning error converges to about 0.01 within about 900 iterations for the imitation, pain, solitude and association modules, and within about 400 iterations for the comfort and discomfort modules. It can be concluded that learning in the ModNAD completes after a relatively small number of iterations because the learning efficiency of the ModNAD artificial neural network is good. The results also show that each ModNAD has a function to imitate and to recognize emotion. The consciousness system presented in this paper may be considered a fundamental step toward developing a robot having consciousness and feelings similar to humans.

  9. Using Single-Camera 3-D Imaging to Guide Material Handling Robots in a Nuclear Waste Package Closure System

    Energy Technology Data Exchange (ETDEWEB)

    Rodney M. Shurtliff

    2005-09-01

    Nuclear reactors for generating energy and conducting research have been in operation for more than 50 years, and spent nuclear fuel and associated high-level waste have accumulated in temporary storage. Preparing this spent fuel and nuclear waste for safe and permanent storage in a geological repository involves developing a robotic packaging system—a system that can accommodate waste packages of various sizes and high levels of nuclear radiation. During repository operation, commercial and government-owned spent nuclear fuel and high-level waste will be loaded into casks and shipped to the repository, where these materials will be transferred from the casks into a waste package, sealed, and placed into an underground facility. The waste packages range from 12 to 20 feet in height and four and a half to seven feet in diameter. Closure operations include sealing the waste package and all its associated functions, such as welding lids onto the container, filling the inner container with an inert gas, performing nondestructive examinations on welds, and conducting stress mitigation. The Idaho National Laboratory is designing and constructing a prototype Waste Package Closure System (WPCS). Control of the automated material handling is an important part of the overall design. Waste package lids, welding equipment, and other tools must be moved in and around the closure cell during the closure process. These objects are typically moved from tool racks to a specific position on the waste package to perform a specific function. Periodically, these objects are moved from a tool rack or the waste package to the adjacent glovebox for repair or maintenance. Locating and attaching to these objects with the remote handling system, a gantry robot, in a loosely fixtured environment is necessary for the operation of the closure cell. Reliably directing the remote handling system to pick and place the closure cell equipment within the cell is the major challenge.

  10. Combining Observations of a Digital Camera Network, Satellite Remote Sensing, and Micrometeorology for Improved Understanding of Forest Phenology

    Science.gov (United States)

    Braswell, B. H.; Richardson, A. D.; Ollinger, S. V.; Friedl, M. A.; Hollinger, D. Y.

    2009-04-01

    The observed phenological behavior of terrestrial ecosystems is a result of the seasonality of climatic forcing superposed with physical and biological responses of the plant-soil system. Biogeochemical models that represent rapid time scale phenomena well tend to simulate interannual variability and trends in productivity more accurately when phenology is prescribed, suggesting a gap in our understanding of the underlying processes or a generic means to represent their emergent behavior. Specifically, questions surround environmental triggers of leaf turnover, the relative importance of internal nutrient cycling, and the potential for generalization across broadly defined biome types. Satellite observations provide a spatially comprehensive record of the seasonality of land vegetation characteristics, but are most valuable when combined with direct measurements of ecosystem state. Time series of meteorology and fluxes (e.g. from eddy covariance tower sites) are one such data source, providing a valuable means to estimate productivity, but not a view of the state of the vegetation canopy. We have begun to assemble a network of digital cameras ('webcams') by deploying camera systems at existing research sites, and by harvesting imagery from collaborating sites and institutions. There are currently 80 cameras in the network, 17 of which are 'core' locations that are located at flux towers or field stations. We process and analyze the camera imagery as remote sensing data, utilizing the red, green, and blue channels as a means to stratify the scenes and quantify relative vegetation 'greenness'. Our initial analyses have shown that these images do yield hourly-to-daily information about the seasonal cycle of vegetation state as compared both to fluxes and satellite indices. This presentation will summarize the current findings of the project, specifically focusing on (a) insights into controls on interannual variability at sites with long records (2000-present), and
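
    A common way to quantify such "relative greenness" is the green chromatic coordinate, GCC = G/(R+G+B), averaged over a region of interest; the sketch below computes it for a stand-in frame. GCC is a standard phenology-camera index assumed here for illustration, not necessarily the exact formulation used by this project.

      import numpy as np

      def green_chromatic_coordinate(rgb):
          """Mean GCC of an image region; rgb is an (H, W, 3) array of digital numbers."""
          r, g, b = (rgb[..., i].astype(float) for i in range(3))
          return np.mean(g / (r + g + b + 1e-9))    # epsilon avoids division by zero

      frame = np.random.randint(0, 256, (480, 640, 3))   # stand-in for a webcam frame
      print(round(green_chromatic_coordinate(frame), 3))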

  11. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  12. Optimized Node Deployment Algorithm and Parameter Investigation in a Mobile Sensor Network for Robotic Systems

    Directory of Open Access Journals (Sweden)

    Rongxin Tang

    2015-10-01

    Full Text Available Mobile sensor networks are an important part of modern robotics systems and are widely used in robotics applications. Therefore, sensor deployment is a key issue in current robotics systems research. As one of the most popular deployment methods, the virtual force algorithm has been studied in detail by many scientists in recent years. In this paper, we focus on the virtual force algorithm and present a corresponding parameter investigation for mobile sensor deployment. We introduce an optimized virtual force algorithm based on the exchange force, in which a new shielding rule grounded in Delaunay triangulation is adopted. The algorithm employs a new performance metric called 'pair-correlation diversion', designed to evaluate the uniformity and topology of the sensor distribution. We also discuss the implementation of the algorithm's computation and analyse the influence of experimental parameters on the algorithm. Our results indicate that the area ratio, φs, and the exchange force constant, G, influence the final performance of the sensor deployment in terms of the coverage rate, the convergence time and topology uniformity. Using simulations, we were able to verify the effectiveness of our algorithm, and we obtained an optimal region of the (φs, G) parameter space which, in the future, could be utilized as an aid for experiments in robotic sensor deployment.

  13. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  14. Robot-Embodied Neuronal Networks as an Interactive Model of Learning.

    Science.gov (United States)

    Shultz, Abraham M; Lee, Sangmook; Guaraldi, Mary; Shea, Thomas B; Yanco, Holly C

    2017-01-01

    The reductionist approach of neuronal cell culture has been useful for analyses of synaptic signaling. Murine cortical neurons in culture spontaneously form an ex vivo network capable of transmitting complex signals, and have been useful for analyses of several fundamental aspects of neuronal development hitherto difficult to clarify in situ. However, these networks lack the ability to receive and respond to sensory input from the environment as do neurons in vivo. Establishment of these networks in culture chambers containing multi-electrode arrays allows recording of synaptic activity as well as stimulation. This article describes the embodiment of ex vivo neuronal networks in a closed-loop cybernetic system, consisting of digitized video signals as sensory input and a robot arm as motor output. In this system, the neuronal network essentially functions as a simple central nervous system. This embodied network displays the ability to track a target in a naturalistic environment. These findings underscore that ex vivo neuronal networks can respond to sensory input and direct motor output. These analyses may contribute to optimization of neuronal-computer interfaces for perceptive and locomotive prosthetic applications. Ex vivo networks display critical alterations in signal patterns following treatment with subcytotoxic concentrations of amyloid-beta. Future studies including comparison of tracking accuracy of embodied networks prepared from mice harboring key mutations with those from normal mice, accompanied by exposure to Abeta and/or other neurotoxins, may provide a useful model system for monitoring subtle impairment of neuronal function as well as normal and abnormal development.

  15. Communication assisted Localization and Navigation for Networked Robots

    Science.gov (United States)

    2005-09-01

  16. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    Science.gov (United States)

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by its multiple-timescale dynamics was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature perception level by using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated to ongoing task sequences without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Distributed Multiagent for NAO Robot Joint Position Control Based on Echo State Network

    Directory of Open Access Journals (Sweden)

    Ling Qin

    2015-01-01

    Full Text Available Based on echo state networks (ESNs), the joint position control of the NAO robot is studied in this paper. The process of controlling the robot position can be divided into two phases. The brake is released during the first phase; exploiting the dynamic coupling between the angular acceleration of the passive joint and the torque of the active joint, the passive joint can be controlled indirectly to the desired position along the desired trajectory. The ESN control rules for the first phase are described, and an ESN controller is designed to control the motion of the passive joint. The brake is locked during the second phase; the active joint is then controlled to the desired position. An experimental control system based on a PMAC controller is designed and developed, and the joint position control of the NAO robot is achieved successfully in experiments. Echo state networks use incremental updates driven by new sensor readings and a large short-term memory of input history; this tolerance to varying communication rates helps the system imitate human upper-limb motion from joint angles obtained with wearable sensors.
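
    For reference, the core ESN update is compact enough to sketch: a fixed random reservoir with spectral radius below one keeps a fading memory of the input history, and only a linear readout is trained. The sizes and constants below are illustrative, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N, n_in = 100, 3                            # reservoir size, input dimension
      W_in = rng.uniform(-0.5, 0.5, (N, n_in))
      W = rng.normal(size=(N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

      def esn_step(x, u, leak=0.3):
          """Leaky-integrator echo state network state update."""
          return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

      x = np.zeros(N)
      for u in rng.normal(size=(10, n_in)):       # stand-in for joint sensor readings
          x = esn_step(x, u)
      # Only a linear readout y = W_out @ x is trained, e.g. by ridge regression.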

  18. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lianfu Han

    2016-09-01

    Full Text Available A sound-target-searching robot system is described that includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for heading (declination) measurement, and a wireless sensor network (WSN) for exchanging information. The system embeds sound signal enhancement, recognition and localization methods and a sound searching strategy based on a digital signal processor (DSP). Three robots and a personal computer (PC) form the WSN nodes and search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the audio features and matched against trained feature templates with the K-nearest neighbor classifier to recognize the sound signal type. This paper utilizes an improved generalized cross-correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation to locate the sound from the TDOA and the geometry of the microphone array. A new mapping is proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has final authority over the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well.
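
    The TDOA step lends itself to a short sketch: the generalized cross-correlation with phase transform (GCC-PHAT) weighting between two microphone channels. This is the textbook form of the method under synthetic signals; the paper's improved variant is not reproduced.

      import numpy as np

      def gcc_phat(sig, ref, fs):
          """TDOA (seconds) of sig relative to ref via GCC with PHAT weighting."""
          n = len(sig) + len(ref)
          S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
          cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)   # keep phase information only
          cc = np.concatenate((cc[-(n // 2):], cc[:n // 2]))
          return (np.argmax(cc) - n // 2) / fs

      fs = 16000
      ref = np.random.default_rng(3).standard_normal(1024)
      sig = np.roll(ref, 8)                        # simulate an 8-sample delay
      print(gcc_phat(sig, ref, fs) * fs)           # approximately 8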

  19. Robotics

    Science.gov (United States)

    Ambrose, Robert O.

    2007-01-01

    Lunar robotic functions include: 1. Transport of crew and payloads on the surface of the moon; 2. Offloading payloads from a lunar lander; 3. Handling the deployment of surface systems; and 4. Human commanding of these functions from inside a lunar vehicle, habitat, or extravehicular activity (space walk), with Earth-based supervision. The systems that will perform these functions may not look like robots from science fiction. In fact, robotic functions may be performed by automated trucks, cranes and winches. Use of this equipment prior to the crew's arrival, or in the potentially long periods without crews on the surface, will require that these systems be computer-controlled machines. The public release of NASA's Exploration plans at the 2nd Space Exploration Conference (Houston, December 2006) included a lunar outpost with as many as four unique mobility chassis designs. The sequence of lander offloading tasks involved as many as ten payloads, each with a unique set of geometry, mass and interface requirements. This plan was refined during a second-phase study concluded in August 2007. Among the many improvements to the exploration plan were a reduction in the number of unique mobility chassis designs and a reduction in unique payload specifications. As the lunar surface system payloads have matured, so have the mobility and offloading functional requirements. While the architecture work continues, the community can expect to see functional requirements in the areas of surface mobility, surface handling, and human-systems interaction as follows: Surface Mobility 1. Transport crew on the lunar surface, accelerating construction tasks, expanding the crew's sphere of influence for scientific exploration, and providing a rapid return to an ascent module in an emergency. The crew transport can be with an un-pressurized rover, a small pressurized rover, or a larger mobile habitat. 2. Transport Extra-Vehicular Activity (EVA) equipment and construction payloads. 3. Transport habitats and

  1. Mentor's brain functional connectivity network during robotic assisted surgery mentorship.

    Science.gov (United States)

    Shafiei, Somayeh B; Doyle, Scott T; Guru, Khurshid A

    2016-08-01

    In many complicated cognitive-motor tasks, mentoring is inevitable during the learning process. Although mentors are expert in performing the task, the trainee's operation might be new to a mentor. This makes mentoring a very difficult task that demands not only the knowledge and experience of a mentor, but also his/her ability to follow the trainee's movements and patiently advise him/her during the operation. We hypothesize that information binding across the mentor's brain areas contributing to the task changes as the expertise level of the trainee improves from novice to intermediate and expert, which can result in a change in the mentor's level of satisfaction. The brain functional connectivity network is extracted from the brain activity of a mentor while mentoring novice and intermediate surgeons, watching an expert surgeon operate, and performing the Urethrovesical Anastomosis (UVA) procedure himself. Using the extracted network, we investigate the role of modularity and neural activity efficiency in mentoring. Brain activity is measured using a 24-channel ABM Neuro-headset at a sampling frequency of 256 Hz. One mentor performs 26 UVA procedures, and three trainees with expertise levels of novice, intermediate, and expert perform 26 UVA procedures under the supervision of the mentor. Our results indicate that the modularity of the functional connectivity network is higher when the mentor performs the task or watches the expert operation than when mentoring the novice and intermediate surgeons. At the end of each operation, the mentor subjectively assesses the quality of the operation by scoring NASA-TLX indexes. The performance score is used to discuss our results. The extracted significant positive correlation between performance level and modularity (r = 0.38, p-value < 0.005) shows an increase in automaticity and a decrease in neural activity cost as performance improves.

  2. A Deployment Method Based on Spring Force in Wireless Robot Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiangyu Yu

    2014-05-01

    Full Text Available Robotic sensor deployment is fundamental to the effectiveness of wireless robot sensor networks: a good deployment algorithm leads to good coverage and connectivity with low energy consumption for the whole network. Virtual force-based algorithms (VFAs) are among the most popular approaches to this problem. In a VFA, sensors are treated as points subject to repulsive and attractive forces exerted among them, and sensors move according to the imaginary forces generated by the algorithm. In this paper, a virtual spring-force-based algorithm with proper damping is proposed for the deployment of sensor nodes in a wireless sensor network (WSN). A new metric called Pair Correlation Diversion (PCD) is introduced to evaluate the uniformity of the sensor distribution. Numerical simulations showed that damping affects the network coverage, energy consumption, convergence time and general topology of the deployment, and that the damping effect (imaginary friction force) has a significant influence on algorithm outcomes. In addition, when working under an approximately critical-damping condition, the proposed approach has the advantages of a higher coverage rate, better configurational uniformity and lower energy consumption.
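
    A minimal sketch of one deployment iteration under a virtual spring force with damping: every pair of nodes is joined by a spring of natural length d0 (repulsive when compressed, attractive when stretched), and a viscous term plays the role of the imaginary friction. All constants are illustrative, not the paper's.

      import numpy as np

      def deploy_step(pos, vel, d0=1.0, k=0.5, c=0.4, dt=0.1):
          """One integration step of spring-force node deployment with damping."""
          force = np.zeros_like(pos)
          n = len(pos)
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  d = pos[j] - pos[i]
                  dist = np.linalg.norm(d) + 1e-9
                  force[i] += k * (dist - d0) * d / dist   # spring along the pair axis
          force -= c * vel                                 # imaginary friction (damping)
          vel = vel + dt * force
          return pos + dt * vel, vel

      rng = np.random.default_rng(2)
      pos, vel = rng.uniform(0, 2, (8, 2)), np.zeros((8, 2))
      for _ in range(200):                                 # relax towards a uniform layout
          pos, vel = deploy_step(pos, vel)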

  3. Aperiodic linear networked control considering variable channel delays: application to robots coordination.

    Science.gov (United States)

    Santos, Carlos; Espinosa, Felipe; Santiso, Enrique; Mazo, Manuel

    2015-05-27

    One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives.
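
    The adaptive flavor of the self-triggered mechanism can be caricatured as a mapping from the measured channel delay to the next inter-transmission time: a congested channel (large delay) earns a longer quiet period, relieving the network. The mapping and constants below are a plausible sketch only, not the rule derived in the paper.

      def next_trigger_interval(delay, t_min=0.02, t_max=0.5, alpha=2.0):
          """Schedule the next transmission from the current measured delay (s).
          Larger measured delays -> sparser traffic, clamped to [t_min, t_max]."""
          return min(t_max, max(t_min, alpha * delay))

      for d in (0.005, 0.05, 0.4):   # measured IEEE 802.11g delays in seconds
          print(d, "->", next_trigger_interval(d))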

  4. Aperiodic Linear Networked Control Considering Variable Channel Delays: Application to Robots Coordination

    Directory of Open Access Journals (Sweden)

    Carlos Santos

    2015-05-01

    Full Text Available One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives.

  5. FPGA Implementation of Self-Organized Spiking Neural Network Controller for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Fangzheng Xue

    2014-06-01

    Full Text Available A spiking neural network, a computational model that uses spikes to process information, is a good candidate for a mobile robot controller. In this paper, we present a novel mechanism for controlling mobile robots based on a self-organized spiking neural network (SOSNN) and introduce a method for FPGA implementation of this SOSNN. The spiking neuron used is the Izhikevich model. A key feature of this controller is that it can simulate the processes of unconditioned reflexes (avoiding obstacles using infrared sensor signals) and conditioned reflexes (making the right choices in a multiple T-maze) through spike timing-dependent plasticity (STDP) learning and dopamine-receptor modulation. Experimental results show that the proposed controller is effective and easy to implement. The FPGA implementation method builds up a specific network using generic blocks designed in the MATLAB Simulink environment. The main characteristics of this solution are on-chip learning algorithm implementation, high reconfiguration capability, and operation under real-time constraints. An extended analysis has been carried out on the hardware resources used to implement the whole SOSNN, as well as each individual component block.
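
    The Izhikevich neuron named above is defined by two coupled equations, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d after each spike. A plain Euler simulation with standard regular-spiking parameters is sketched below; the paper's FPGA fixed-point details are not reproduced.

      def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25, steps=800):
          """Euler simulation of an Izhikevich neuron; returns spike times in ms."""
          v, u, spikes = c, b * c, []
          for t in range(steps):
              v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
              u += dt * a * (b * v - u)
              if v >= 30.0:              # spike threshold
                  spikes.append(t * dt)
                  v, u = c, u + d        # membrane reset
          return spikes

      print(izhikevich(I=10.0)[:5])      # first spike times for a constant input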

  6. Aperiodic Linear Networked Control Considering Variable Channel Delays: Application to Robots Coordination

    Science.gov (United States)

    Santos, Carlos; Espinosa, Felipe; Santiso, Enrique; Mazo, Manuel

    2015-01-01

    One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives. PMID:26024415

  7. An Artificial Neural Network Modeling for Force Control System of a Robotic Pruning Machine

    Directory of Open Access Journals (Sweden)

    Ali Hashemi

    2014-06-01

    Full Text Available Nowadays, pruning robots are increasingly applied in planted forests due to growing concern over efficiency and safety. The power consumption and working time of agricultural machines have become important issues due to the high cost of energy in the modern world. In this study, different multi-layer back-propagation networks were used to map the complex, highly interactive pruning-process parameters and to predict the power consumption and cutting time of a force-control-equipped robotic pruning machine from input parameters such as rotation speed, stalk diameter, and sensitivity coefficient. Results showed significant effects of all input parameters on the output parameters, except for rotational speed on cutting time. Therefore, to reduce wear of the cutting system, a lower rotational speed should be selected for every sensitivity coefficient.

  8. Parametric motion control of robotic arms: A biologically based approach using neural networks

    Science.gov (United States)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
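
    The parametric idea reads naturally as code: a stored prototypical torque shape is evaluated after scaling in magnitude and time, so the network only has to output a few parameters per joint. The bell-shaped prototype and the parameter values below are made up for illustration.

      import numpy as np

      def scaled_torque(prototype, amplitude, duration, t):
          """Evaluate a normalized torque prototype scaled in magnitude and time."""
          return amplitude * prototype(np.clip(t / duration, 0.0, 1.0))

      bell = lambda s: np.sin(np.pi * s) ** 2        # made-up prototype on [0, 1]
      t = np.linspace(0.0, 0.8, 5)
      print(scaled_torque(bell, amplitude=2.5, duration=0.8, t=t))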

  9. Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm

    Science.gov (United States)

    Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.

    2015-01-01

    Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of
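    As an illustration of the proprioceptive encoding mentioned above, the following sketch shows one standard form of population coding with Gaussian tuning curves; the cell count and tuning parameters are assumptions, not those of the published model.

      import numpy as np

      def population_code(length, n_cells=16, lo=0.8, hi=1.2, sigma=0.05):
          # Encode a normalized muscle length as firing rates of a population
          # of proprioceptive cells with Gaussian tuning curves spanning the
          # physiological range [lo, hi].
          centers = np.linspace(lo, hi, n_cells)
          rates = np.exp(-0.5 * ((length - centers) / sigma) ** 2)
          return rates / rates.sum()  # normalized population activity

      def decode(rates, n_cells=16, lo=0.8, hi=1.2):
          # Population-vector readout: the weighted mean of the cell centers.
          centers = np.linspace(lo, hi, n_cells)
          return float(np.dot(rates, centers))

      rates = population_code(1.05)
      print(decode(rates))  # approximately recovers 1.05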

  10. Cortical spiking network interfaced with virtual musculoskeletal arm and robotic arm

    Directory of Open Access Journals (Sweden)

    Salvador eDura-Bernal

    2015-11-01

    Full Text Available Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuro-prosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility

  11. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

    For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications and networking capabilities and technologies for future human and robotic missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration in the vicinity of Earth, the Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into the interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  13. Robotics

    Science.gov (United States)

    Rothschild, Lynn J.

    2012-01-01

    Earth's upper atmosphere is an extreme environment: dry, cold, and irradiated. It is unknown whether our aerobiosphere is limited to the transport of life, or whether there exist organisms that grow and reproduce while airborne (aerophiles); the microenvironments of suspended particles may harbor life at otherwise uninhabited altitudes[2]. The existence of aerophiles would significantly expand the range of planets considered candidates for life by, for example, including the cooler clouds of a hot Venus-like planet. The X project is an effort to engineer a robotic exploration and biosampling payload for a comprehensive survey of Earth's aerobiology. While many one-shot samples have been retrieved from above 15 km, their results are primarily qualitative; variations in method confound comparisons, leaving such major gaps in our knowledge of aerobiology as quantification of populations at different strata and relative species counts[1]. These challenges and X's preliminary solutions are explicated below. X's primary balloon payload is undergoing a series of calibrations before beginning flights in Spring 2012. A suborbital launch is currently planned for Summer 2012. A series of ground samples taken in Winter 2011 is being used to establish baseline counts and identify likely background contaminants.

  14. Community structure and diversity of tropical forest mammals: data from a global camera trap network.

    Science.gov (United States)

    Ahumada, Jorge A; Silva, Carlos E F; Gajapersad, Krisna; Hallam, Chris; Hurtado, Johanna; Martin, Emanuel; McWilliam, Alex; Mugerwa, Badru; O'Brien, Tim; Rovero, Francesco; Sheil, Douglas; Spironello, Wilson R; Winarni, Nurul; Andelman, Sandy J

    2011-09-27

    Terrestrial mammals are a key component of tropical forest communities, as indicators of ecosystem health and providers of important ecosystem services. However, there is little quantitative information about how they change with local, regional and global threats. In this paper, the first standardized pantropical forest terrestrial mammal community study, we examine several aspects of terrestrial mammal species and community diversity (species richness, species diversity, evenness, dominance, functional diversity and community structure) at seven sites around the globe using a single standardized camera-trapping methodology. The sites, located in Uganda, Tanzania, Indonesia, Lao PDR, Suriname, Brazil and Costa Rica, are surrounded by different landscape configurations, from continuous forests to highly fragmented forests. We obtained more than 51,000 images and detected 105 species of mammals with a total sampling effort of 12,687 camera-trap days. We find that mammal communities from highly fragmented sites have lower species richness, species diversity and functional diversity, and higher dominance, when compared with sites in partially fragmented and continuous forest. We emphasize the importance of standardized camera-trapping approaches for obtaining baselines for monitoring forest mammal communities, so as to adequately understand the effects of global, regional and local threats and appropriately inform conservation actions.
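    For readers unfamiliar with the community metrics used, the short Python sketch below computes species richness, Shannon diversity, Pielou's evenness and Berger-Parker dominance from per-species detection counts; the counts shown are invented for illustration.

      import numpy as np

      def diversity_metrics(counts):
          # counts: detections per species from camera-trap images at one site.
          counts = np.asarray(counts, dtype=float)
          p = counts / counts.sum()
          richness = int((counts > 0).sum())                     # species richness
          shannon = float(-(p[p > 0] * np.log(p[p > 0])).sum())  # Shannon diversity
          evenness = shannon / np.log(richness)                  # Pielou's evenness
          dominance = float(p.max())                             # Berger-Parker dominance
          return richness, shannon, evenness, dominance

      # Hypothetical counts for a fragmented vs. a continuous-forest site.
      print(diversity_metrics([120, 3, 1]))           # few species, high dominance
      print(diversity_metrics([40, 35, 30, 25, 20]))  # richer, more even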

  15. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    ...dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...
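    The hysteresis effect exploited here can be reproduced with a toy two-neuron recurrent network. The sketch below uses self-connections greater than one so that a transient input leaves a persistent activity trace; the weights and inputs are illustrative assumptions, not the paper's trained values.

      import numpy as np

      def step(state, inputs, w_self=1.5, w_cross=-0.4):
          # Two fully connected neurons with strong self-connections (> 1)
          # produce hysteresis: the output depends on the input history,
          # giving the short-term memory used to hold a turning direction.
          w = np.array([[w_self, w_cross], [w_cross, w_self]])
          return np.tanh(w @ state + inputs)

      state = np.zeros(2)
      for u in [0.8, 0.0, 0.0, -0.8, 0.0, 0.0]:   # transient sensory signals
          state = step(state, np.array([u, -u]))
          print(state)  # activity persists after the input is removed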

  16. Experiments in Neural-Network Control of a Free-Flying Space Robot

    Science.gov (United States)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.
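    One widely used workaround for propagating gradients through such discrete valued functions is a straight-through-style surrogate derivative, sketched below; this is a generic technique shown for illustration, not necessarily the specific extension developed in the thesis.

      import numpy as np

      def thruster(u):
          # Discrete valued function: an on-off thruster fires (1) or not (0).
          return (u > 0.0).astype(float)

      def thruster_grad_straight_through(u):
          # The true derivative is zero almost everywhere, which stalls
          # gradient-based learning; the straight-through trick replaces it
          # with the derivative of a smooth surrogate near the threshold.
          return np.where(np.abs(u) < 1.0, 1.0, 0.0)

      # Backpropagation then treats the thruster as identity-like in the
      # backward pass while keeping the true on-off behavior forward.
      u = np.array([-0.3, 0.2, 2.0])
      print(thruster(u), thruster_grad_straight_through(u))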

  17. Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.

    Science.gov (United States)

    Herrero, David; Martínez, Humberto

    2011-01-01

    This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. Besides, the proposed approach is compared with a probabilistic technique showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
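    To make the setting concrete, the sketch below converts a noisy RSSI reading into a range estimate with a log-distance path-loss model and fuzzifies it into linguistic terms; the model constants and membership functions are illustrative assumptions, not the paper's design.

      import numpy as np

      def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
          # Log-distance path-loss model: a common (and noisy) way to turn
          # received signal strength into an inter-node range estimate.
          return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exp))

      def triangular(x, a, b, c):
          # Triangular fuzzy membership function.
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      # Fuzzify a range estimate into linguistic terms; a rule base would
      # then combine terms from several anchor nodes into a position estimate.
      d = rssi_to_distance(-62.0)
      memberships = {
          "near":   triangular(d, 0.0, 1.0, 4.0),
          "medium": triangular(d, 2.0, 6.0, 10.0),
          "far":    triangular(d, 8.0, 15.0, 22.0),
      }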

  18. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    Science.gov (United States)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  19. An FPGA hardware/software co-design towards evolvable spiking neural networks for robotics application.

    Science.gov (United States)

    Johnston, S P; Prasad, G; Maguire, L; McGinnity, T M

    2010-12-01

    This paper presents an approach that permits the effective hardware realization of a novel Evolvable Spiking Neural Network (ESNN) paradigm on Field Programmable Gate Arrays (FPGAs). The ESNN possesses a hybrid learning algorithm that consists of a Spike Timing Dependent Plasticity (STDP) mechanism fused with a Genetic Algorithm (GA). The design and implementation direction utilizes the latest advancements in FPGA technology to provide a partitioned hardware/software co-design solution. The approach achieves the maximum FPGA flexibility obtainable for the ESNN paradigm. The algorithm was applied as an embedded intelligent system robotic controller to solve an autonomous navigation and obstacle avoidance problem.
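    The STDP component of such a hybrid learning algorithm typically follows an exponential timing window, sketched below; the time constants and learning rates are generic illustrative values, not the paper's parameters.

      import numpy as np

      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          # Spike Timing Dependent Plasticity: potentiate when the presynaptic
          # spike precedes the postsynaptic one, depress otherwise.
          dt = t_post - t_pre
          if dt > 0:
              return a_plus * np.exp(-dt / tau)
          return -a_minus * np.exp(dt / tau)

      # In the hybrid ESNN scheme, local STDP updates refine weights while a
      # genetic algorithm searches over network-level parameters.
      print(stdp_dw(10.0, 15.0))   # pre before post -> potentiation
      print(stdp_dw(15.0, 10.0))   # post before pre -> depression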

  20. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    Science.gov (United States)

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

  1. Multiobjective Evolution of Biped Robot Gaits Using Advanced Continuous Ant-Colony Optimized Recurrent Neural Networks.

    Science.gov (United States)

    Juang, Chia-Feng; Yeh, Yen-Ting

    2017-06-30

    This paper proposes the optimization of a fully connected recurrent neural network (FCRNN) using advanced multiobjective continuous ant colony optimization (AMO-CACO) for the multiobjective gait generation of a biped robot (the NAO). The FCRNN functions as a central pattern generator and is optimized to generate angles of the hip roll and pitch, the knee pitch, and the ankle pitch and roll. The performance of the FCRNN-generated gait is evaluated according to the walking speed, trajectory straightness, oscillations of the body in the pitch and yaw directions, and walking posture, subject to the basic constraints that the robot cannot fall down and must walk forward. This paper formulates this gait generation task as a constrained multiobjective optimization problem and solves this problem through an AMO-CACO-based evolutionary learning approach. The AMO-CACO finds Pareto optimal solutions through ant-path selection and sampling operations by introducing an accumulated rank for the solutions in each single-objective function into solution sorting to improve learning performance. Simulations are conducted to verify the AMO-CACO-based FCRNN gait generation performance through comparisons with different multiobjective optimization algorithms. Selected software-designed Pareto optimal FCRNNs are then applied to control the gait of a real NAO robot.

  2. The Presentation of a New Method for Image Distinction with Robot by Using Rough Fuzzy Sets and Rough Fuzzy Neural Network Classifier

    OpenAIRE

    Maryam Shahabi Lotfabadi

    2011-01-01

    Distinguishing different images by robots and classifying them into distinct groups is an important issue in robot vision. In this paper we propose a new method for distinguishing images by robot using a rough fuzzy set reduction method and a rough fuzzy neural network classifier. In this method, image features such as color, texture and shape are extracted, and the redundant features are reduced by the rough fuzzy set method. Then the rough fuzzy neural network classifier is educated b...

  3. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  4. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.

  5. Adaptive Robust Control for Space Robot with Uncertainty Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhang Wenhui

    2013-11-01

    Full Text Available The trajectory tracking problem for a class of space robot manipulators with parametric and non-parametric uncertainties is considered, and an adaptive robust control algorithm based on a neural network is proposed. The neural network adaptively learns and compensates for the parametric uncertainties of the unknown system, and the weight adaptation laws are designed in the paper. System stability is analyzed based on Lyapunov theory to ensure the convergence of the algorithm. Non-parametric uncertainties are estimated and compensated by a robust controller. It is proven that the designed controller can guarantee the asymptotic convergence of the tracking error, as well as good robustness and the stability of the closed-loop system. The simulation results show that the presented method is effective.
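    A minimal sketch of this kind of controller, assuming radial-basis features and a gradient-type weight adaptation law, is given below; the structure, names and gains are illustrative, not the paper's exact design.

      import numpy as np

      def rbf_features(x, centers, width=1.0):
          # Radial-basis features of the robot state used by the network.
          return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

      def control_step(x, x_des, W, centers, kp=5.0, gamma=0.1, k_robust=0.5):
          e = x_des - x                       # tracking error
          phi = rbf_features(x, centers)
          u_nn = W @ phi                      # NN compensation of uncertainty
          u_robust = k_robust * np.sign(e)    # robust term for unmodeled effects
          u = kp * e + u_nn + u_robust
          # Lyapunov-motivated weight adaptation: dW ~ gamma * e * phi^T,
          # which drives the tracking error toward zero.
          W += gamma * np.outer(e, phi)
          return u, W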

  6. Cellular Nonlinear Networks for the emergence of perceptual states: application to robot navigation control.

    Science.gov (United States)

    Arena, Paolo; De Fiore, Sebastiano; Patané, Luca

    2009-01-01

    In this paper a new general purpose perceptual control architecture, based on nonlinear neural lattices, is presented and applied to solve robot navigation tasks. Insects show the ability to react to certain stimuli with simple reflexes, using direct sensory-motor pathways, which can be considered as basic behaviors, inherited and pre-wired. Relevant brain centres, known as Mushroom Bodies (MB) and Central Complex (CX) were recently identified in insects: though their functional details are not yet fully understood, it is known that they provide secondary pathways allowing the emergence of cognitive behaviors. These are gained through the coordination of the basic abilities to satisfy the insect's needs. Taking inspiration from this evidence, our architecture modulates, through a reinforcement learning, a set of competitive and concurrent basic behaviors in order to accomplish the task assigned through a reward function. The core of the architecture is constituted by the so-called Representation layer, used to create a concise picture of the current environment situation, fusing together different stimuli for the emergence of perceptual states. These perceptual states are steady state solutions of lattices of Reaction-Diffusion Cellular Nonlinear Networks (RD-CNN), designed to show Turing patterns. The exploitation of the dynamics of the multiple equilibria of the network is emphasized through the adaptive shaping of the basins of attraction for each emerged pattern. New experimental campaigns on standard robotic platforms are reported to demonstrate the potentiality and the effectiveness of the approach.

  7. A Scalable Neuro-inspired Robot Controller Integrating a Machine Learning Algorithm and a Spiking Cerebellar-like Network

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop

    2017-01-01

    Combining the Fable robot, a modular robot, with a neuro-inspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. The cerebellar-like sub-circuit integrates LWPR input-driven contributions to deliver accurate corrective commands to the global IM. This article extends the earlier work by including the Deep Cerebellar Nuclei (DCN) and by reproducing the Purkinje and the DCN layers using a spiking neural network (SNN) implemented on the neuromorphic SpiNNaker platform. The performance...

  8. Hand Motion and Posture Recognition in a Network of Calibrated Cameras

    Directory of Open Access Journals (Sweden)

    Jingya Wang

    2017-01-01

    Full Text Available This paper presents a vision-based approach for hand gesture recognition which combines both trajectory and hand posture recognition. The hand area is segmented by fixed-range CbCr from cluttered and moving backgrounds and tracked by Kalman Filter. With the tracking results of two calibrated cameras, the 3D hand motion trajectory can be reconstructed. It is then modeled by dynamic movement primitives and a support vector machine is trained for trajectory recognition. Scale-invariant feature transform is employed to extract features on segmented hand postures, and a novel strategy for hand posture recognition is proposed. A gesture vector is introduced to recognize hand gesture as an entirety which combines the recognition results of motion trajectory and hand postures where a support vector machine is trained for gesture recognition based on gesture vectors.

  9. Modeling and Error Compensation of Robotic Articulated Arm Coordinate Measuring Machines Using BP Neural Network

    Directory of Open Access Journals (Sweden)

    Guanbin Gao

    2017-01-01

    Full Text Available The articulated arm coordinate measuring machine (AACMM) is a specific robotic structural instrument that uses the D-H method for kinematic modeling and error compensation. However, it is difficult for existing error compensation models to describe the various factors that affect the accuracy of the AACMM. In this paper, a modeling and error compensation method for the AACMM is proposed based on BP neural networks. According to the available measurements, the poses of the AACMM are used as the input, and the coordinates of the probe as the output, of the neural network. To avoid tedious training and improve the training efficiency and prediction accuracy, a data acquisition strategy is developed according to the actual measurement behavior in the joint space. A neural network model is proposed and analyzed using data generated via the Monte Carlo method in simulations. The structure and parameter settings of the neural network are optimized to improve the prediction accuracy and training speed. Experimental studies have been conducted to verify the proposed algorithm with neural network compensation, which show that 97% of the AACMM's error can be eliminated after compensation. These experimental results reveal the effectiveness of the proposed modeling and compensation method for the AACMM.
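    The compensation pipeline can be sketched as follows, with randomly generated stand-in data in place of the Monte Carlo simulations; the network size and the synthetic error model are assumptions for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Hypothetical stand-in for the Monte-Carlo data: joint poses of the
      # arm as input, the resulting probe-coordinate error as output.
      poses = rng.uniform(-np.pi, np.pi, size=(2000, 6))
      errors = 0.05 * np.sin(poses[:, :3]) + 0.02 * poses[:, 3:] ** 2

      net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
      net.fit(poses, errors)

      # Compensation: subtract the predicted error from the nominal
      # (D-H model) probe coordinates.
      nominal = np.zeros(3)
      compensated = nominal - net.predict(poses[:1])[0]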

  10. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  11. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    Science.gov (United States)

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986
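    A minimal version of such a classifier, using a plain multilayer perceptron on stand-in features, might look like the sketch below; the features, labels and network size are invented for illustration, and a NARX model would additionally feed back past outputs.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      # Hypothetical features per sample: orientation of each lower-limb
      # segment (3 values) and joint angular velocities (3 values); labels
      # are gait phases, e.g. 0 = stance, 1 = swing.
      X = rng.normal(size=(500, 6))
      y = (X[:, 3] > 0).astype(int)   # toy labeling rule for illustration

      clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=0)
      clf.fit(X, y)
      print(clf.predict(X[:5]))       # current gait phase per sample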

  12. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    Directory of Open Access Journals (Sweden)

    Jun-Young Jung

    2015-10-01

    Full Text Available An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.

  14. Interpreting canopy development and physiology using a European phenology camera network at flux sites

    DEFF Research Database (Denmark)

    Wingate, L.; Ogeé, J.; Cremonese, E.

    2015-01-01

    ...in canopy phenology could be detected automatically across different land use types in the network. The piecewise regression approach could capture the start and end of the growing season, in addition to identifying striking changes in colour signals caused by flowering and management practices...

  15. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
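    The adaptive selection step can be illustrated with a toy Mamdani-style rule base over two normalized image-quality scores; the membership functions and rules below are assumptions for illustration, not the paper's actual FIS.

      import numpy as np

      def membership_low(x):   return np.clip(1.0 - x, 0.0, 1.0)
      def membership_high(x):  return np.clip(x, 0.0, 1.0)

      def select_camera(visible_score, fir_score):
          # Minimal fuzzy inference on two normalized image-quality scores
          # (e.g., contrast of the pedestrian candidate region).
          # Rule 1: if visible is high and FIR is low -> choose visible.
          r_vis = min(membership_high(visible_score), membership_low(fir_score))
          # Rule 2: if FIR is high and visible is low -> choose FIR.
          r_fir = min(membership_high(fir_score), membership_low(visible_score))
          return "visible" if r_vis >= r_fir else "fir"

      # Nighttime: visible image is weak, thermal candidate is strong; the
      # selected candidate would then be verified with the CNN.
      print(select_camera(visible_score=0.2, fir_score=0.9))   # -> "fir"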

  16. A study on the sensitivity of photogrammetric camera calibration and stitching

    CSIR Research Space (South Africa)

    De

    2014-11-01

    Full Text Available This paper presents a detailed simulation study of an automated robotic photogrammetric camera calibration system. The system performance was tested for sensitivity with regard to noise in the robot movement, camera mounting and image processing...

  17. Intelligent Control of Welding Gun Pose for Pipeline Welding Robot Based on Improved Radial Basis Function Network and Expert System

    OpenAIRE

    Jingwen Tian; Meijuan Gao; Yonggang He

    2013-01-01

    Since the control system of the welding gun pose in whole‐position welding is complicated and nonlinear, an intelligent control system of welding gun pose for a pipeline welding robot based on an improved radial basis function neural network (IRBFNN) and expert system (ES) is presented in this paper. The structure of the IRBFNN is constructed and the improved genetic algorithm is adopted to optimize the network structure. This control system makes full use of the characteristics of the IRBFNN...

  18. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    Science.gov (United States)

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
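    The reservoir-computing principle behind echo state networks is compact enough to sketch directly: only the linear readout is trained while the recurrent reservoir stays fixed. The snippet below uses random stand-in data; the sizes and scalings are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_res, n_in, n_out = 100, 5, 3

      # Fixed random reservoir, rescaled below unit spectral radius so
      # that input history echoes through the state and slowly fades.
      W = rng.normal(size=(n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
      W_in = rng.normal(scale=0.5, size=(n_res, n_in))

      def run_reservoir(inputs):
          x, states = np.zeros(n_res), []
          for u in inputs:
              x = np.tanh(W @ x + W_in @ u)
              states.append(x.copy())
          return np.array(states)

      # Only the linear readout is trained (ridge regression): the part
      # that maps reservoir states to, e.g., predicate-argument roles.
      U = rng.normal(size=(200, n_in))    # stand-in input sequence
      Y = rng.normal(size=(200, n_out))   # stand-in target codes
      S = run_reservoir(U)
      W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ Y)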

  19. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
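    The neurodynamic solution can be imitated with a simple projected-gradient recurrence, sketched below for a box-constrained QP; this is a generic stand-in for illustration, not the specific RNN model analyzed in the paper.

      import numpy as np

      def rnn_qp(Q, c, lo, hi, steps=20000, dt=1e-3):
          # Discrete-time recurrence for min 0.5 x'Qx + c'x subject to box
          # constraints: the state flows along the projected negative
          # gradient until it settles at the optimum.
          x = np.zeros_like(c)
          for _ in range(steps):
              x = np.clip(x - dt * (Q @ x + c), lo, hi)  # projection = clip
          return x

      # Toy unified QP for two 3-joint arms (6 variables): minimum velocity
      # norm subject to joint-velocity limits.
      Q = np.eye(6)
      c = np.array([0.3, -0.2, 0.1, -0.4, 0.0, 0.2])
      print(rnn_qp(Q, c, lo=-1.0, hi=1.0))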

  20. Cameras, Radios, and Butterflies: the Influence and Importance of Fan Networks for Game Studies

    Directory of Open Access Journals (Sweden)

    Laurie N. Taylor

    2006-01-01

    Full Text Available As academic game studies emerges as a growing, interdisciplinary, and varied field, researchers require additional resources in order to study games in a larger context. Fan networks produce many such resources often otherwise unavailable - including walkthroughs, hint guides, and other forms of fan research - which are significant for the academic study of games. While professionally produced walkthroughs, game guides, and other research materials are available for the majority of new, popular games, many games never have walkthroughs, and older walkthroughs are often largely unavailable.

  1. Surgical-tools detection based on Convolutional Neural Network in laparoscopic robot-assisted surgery.

    Science.gov (United States)

    Bareum Choi; Kyungmin Jo; Songe Choi; Jaesoon Choi

    2017-07-01

    Laparoscopic surgery, a type of minimally invasive surgery, is used in a variety of clinical surgeries because it has a faster recovery rate and causes less pain. However, in general, the robotic system used in laparoscopic surgery can cause damage to the surgical instruments, organs, or tissues during surgery due to a narrow field of view, a narrow operating space, and insufficient tactile feedback. This study proposes real-time models for the detection of surgical instruments during laparoscopic surgery by using a CNN (Convolutional Neural Network). A dataset covering 7 surgical tools is used to train the CNN. To track surgical instruments in real time, the unified YOLO architecture is applied to the models. To evaluate the performance of the suggested models, recall and precision are calculated and compared. Finally, we achieve 72.26% mean average precision over our dataset.

  2. Robotic assisted laparoscopic colectomy.

    LENUS (Irish Health Repository)

    Pandalai, S

    2010-06-01

    Robotic surgery has evolved over the last decade to compensate for limitations in human dexterity. It avoids the need for a trained assistant while decreasing error rates such as perforations. The nature of the robotic assistance varies from voice activated camera control to more elaborate telerobotic systems such as the Zeus and the Da Vinci where the surgeon controls the robotic arms using a console. Herein, we report the first series of robotic assisted colectomies in Ireland using a voice activated camera control system.

  3. The development of advanced robotics technology in high radiation environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo

    1997-07-01

    In tele-operation technology using tele-presence in high-radiation environments, the following were developed: stereo-vision target tracking by the centroid method, vergence control of a stereo camera by the moving-vector method, a stereo observing system based on the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images, together with gesture-image acquisition by computer vision and the construction of a virtual environment for remote work in nuclear power plants. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, and robot end-effector tracking algorithms based on the centroid method and on neural networks were developed for observation and survey in hazardous environments; a 3-dimensional information acquisition algorithm using structured light was also developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of the electric components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. These advanced critical robot technologies and telepresence techniques can be applied to nozzle-dam installation/removal robot systems to realize unmanned nozzle-dam installation/removal in the steam generators of nuclear power plants, which eliminates the radiation exposure of workers in extremely hazardous, highly radioactive areas, enhances their safety, and raises their working efficiency. (author). 75 refs., 21 tabs., 15 figs.

  4. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery.

    Science.gov (United States)

    Garas, George; Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

    Existing surgical innovation frameworks suffer from a unifying limitation, their qualitative nature. A rigorous approach to measuring surgical innovation is needed that extends beyond simple publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described. This illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel and quantitative network-based framework that will permit modeling and visualizing innovation diffusion cascades in surgery and measuring the virality and value of innovations. Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974-2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real-world big data (National Inpatient Sample-NIS®). Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) were found to correlate closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), and 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real-world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Network analysis offers unique new opportunities for understanding, modeling and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties. The novel surgical innovation metrics developed may

  5. Synergistic Sensory Platform: Robotic Nurse

    Directory of Open Access Journals (Sweden)

    Dale Wick

    2013-05-01

    Full Text Available This paper presents the concept, structural design and implementation of components of a multifunctional sensory network, consisting of a Mobile Robotic Platform (MRP) and stationary multifunctional sensors that communicate wirelessly with the MRP. Each section reviews the principles of operation and the practical implementation of the network components. The analysis focuses on the structure of the robotic platform, the sensory network and electronics, and on the methods of environment monitoring and the data processing algorithms that provide maximal reliability, flexibility and stable operability of the system. The main aim of this project is the development of the Robotic Nurse (RN), a 24/7 robotic helper for hospital nursing personnel. To support long-lasting autonomous operation of the platform, all mechanical, electronic and photonic components were designed for minimal weight, size and power consumption, while still providing high operational efficiency, accuracy of measurements and adequateness of the sensor response. The stationary sensors serve as the remote “eyes, ears and noses” of the main MRP. After data acquisition, processing and analysis, the robot activates the mobile platform or specific sensors and cameras. The cross-use of data received from sensors of different types provides high reliability of the system. The key RN capabilities are simultaneous monitoring of the physical condition of a large number of patients and raising an alarm in case of an emergency. The robotic platform Nav-2 exploits innovative principles of any-direction motion with omni-wheels, navigation and environment analysis. It includes an innovative mini-laser absorption spectrum analyser and a portable, extremely high signal-to-noise ratio spectrometer with a two-dimensional detector array.

  6. Cloud detection and movement estimation based on sky camera images using neural networks and the Lucas-Kanade method

    Science.gov (United States)

    Tuominen, Pekko; Tuononen, Minttu

    2017-06-01

    One of the key elements in short-term solar forecasting is the detection of clouds and their movement. This paper discusses a new method for extracting cloud cover and cloud movement information from ground based camera images using neural networks and the Lucas-Kanade method. Two novel features of the algorithm are that it performs well both inside and outside of the circumsolar region, i.e. the vicinity of the sun, and is capable of deciding a threefold sun state. More precisely, the sun state can be detected to be either clear, partly covered by clouds or overcast. This is possible due to the absence of a shadow band in the imaging system. Visual validation showed that the new algorithm performed well in detecting clouds of varying color and contrast in situations referred to as difficult for commonly used thresholding methods. Cloud motion field results were computed from two consecutive sky images by solving the optical flow problem with the fast to compute Lucas-Kanade method. A local filtering scheme developed in this study was used to remove noisy motion vectors and it is shown that this filtering technique results in a motion field with locally nearly uniform directions and smooth global changes in direction trends. Thin, transparent clouds still pose a challenge for detection and leave room for future improvements in the algorithm.
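    A minimal sparse Lucas-Kanade pipeline on two consecutive sky frames, using OpenCV, could look like the sketch below; the file names, parameters and the crude direction filter are illustrative assumptions rather than the authors' exact scheme.

      import cv2
      import numpy as np

      # Two consecutive sky images (grayscale); the paths are placeholders.
      prev = cv2.imread("sky_t0.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("sky_t1.png", cv2.IMREAD_GRAYSCALE)

      # Track well-textured points (cloud edges) between the two frames.
      pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                    minDistance=10)
      nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

      # Keep successfully tracked points and filter out noisy vectors whose
      # direction deviates strongly from the bulk motion (a crude stand-in
      # for the local filtering scheme described in the paper).
      good_old = pts[status.flatten() == 1].reshape(-1, 2)
      good_new = nxt[status.flatten() == 1].reshape(-1, 2)
      motion = good_new - good_old
      directions = np.arctan2(motion[:, 1], motion[:, 0])
      keep = np.abs(directions - np.median(directions)) < 0.5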

  7. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network.

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-12-16

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to insure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.

  8. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Husan Vokhidov

    2016-12-01

    Full Text Available Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to insure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.

  9. Monitoring landscape-level distribution and migration phenology of raptors using a volunteer camera-trap network

    Science.gov (United States)

    Jachowski, David S.; Katzner, Todd; Rodrigue, Jane L.; Ford, W. Mark

    2015-01-01

    Conservation of animal migratory movements is among the most important issues in wildlife management. To address this need for landscape-scale monitoring of raptor populations, we developed a novel, baited photographic observation network termed the “Appalachian Eagle Monitoring Program” (AEMP). During winter months of 2008–2012, we partnered with professional and citizen scientists in 11 states in the United States to collect approximately 2.5 million images. To our knowledge, this represents the largest such camera-trap effort to date. Analyses of data collected in 2011 and 2012 revealed complex, often species-specific, spatial and temporal patterns in winter raptor movement behavior as well as spatial and temporal resource partitioning between raptor species. Although programmatic advances in data analysis and involvement are needed, the continued growth of the program has the potential to provide a long-term, cost-effective, range-wide monitoring tool for avian and terrestrial scavengers during the winter season. Perhaps most importantly, by relying heavily on citizen scientists, AEMP has the potential to improve long-term interest and support for raptor conservation and serve as a model for raptor conservation programs in other portions of the world.

  10. Exploring the Acquisition and Production of Grammatical Constructions Through Human-Robot Interaction with Echo State Networks

    Directory of Open Access Journals (Sweden)

    Xavier eHinaut

    2014-05-01

    Full Text Available One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot’s execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e. in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
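
    The model family named here, the echo state network, keeps a fixed random recurrent reservoir and trains only a linear readout. A minimal sketch of that idea, with invented sizes and toy data standing in for the word-sequence inputs and predicate-argument targets:

```python
# Sketch of the echo-state-network idea behind the model: a fixed random
# reservoir driven by an input sequence, with only a linear readout trained
# (here by ridge regression). Sizes and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 5, 100, 3             # assumed dimensions

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for a (T, n_in) input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)       # leaky integration omitted
        states.append(x.copy())
    return np.array(states)

# Train the readout on toy data (stand-ins for the paper's sentence inputs
# and predicate-argument meaning targets).
U = rng.normal(size=(200, n_in))
Y = rng.normal(size=(200, n_out))
X = run_reservoir(U)
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
print("readout shape:", W_out.shape)        # (n_res, n_out)
```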

  11. Learning for intelligent mobile robots

    Science.gov (United States)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain, and consequently the load on the robot's motors, are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process, which goes "beyond the adaptive critic," is then used.
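
    As a concrete anchor for the PID-learning discussion above, here is a toy loop one could hand to a learning algorithm: the cost returned below is what a neural network or adaptive critic would drive down by adjusting the gains (the plant model and gain values are placeholders, not the paper's):

```python
# Toy PID loop on a first-order plant, the kind of controller whose gains
# the paper proposes to learn; plant and gains here are placeholder values.
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        y += dt * (-y + u)           # first-order plant: dy/dt = -y + u
        prev_err = err
    return abs(setpoint - y)         # steady-state error as a crude cost

# A learning scheme (neural network, adaptive critic, ...) would adjust
# (kp, ki, kd) to reduce this cost; here we just evaluate one setting.
print(simulate_pid(kp=2.0, ki=1.0, kd=0.1))
```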

  12. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  13. Robotized transcranial magnetic stimulation

    CERN Document Server

    Richter, Lars

    2014-01-01

    Presents new, cutting-edge algorithms for robot/camera calibration, sensor fusion and sensor calibration. Explores the main challenges for accurate coil positioning, such as head motion, and outlines how active robotic motion compensation can outperform hand-held solutions. Analyzes how a robotized system in medicine can alleviate concerns with a patient's safety, and presents a novel fault-tolerant algorithm (FTA) sensor for system safety.

  14. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
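
    The quoted pixel scales and the 1024-pixel detector allow a back-of-envelope FOV cross-check; the product is only approximate, since the angular scale of a real lens varies across the field:

```python
# Rough FOV cross-check: detector width (pixels) times angular pixel scale.
import math

PIXELS = 1024                       # imaging region width from the abstract
for name, mrad_per_px, quoted_fov in [("Navcam", 0.82, 45.0),
                                      ("Hazcam", 2.1, 124.0)]:
    fov_deg = math.degrees(PIXELS * mrad_per_px * 1e-3)
    print(f"{name}: {fov_deg:.0f} deg across (quoted {quoted_fov} deg)")
# Navcam: ~48 deg, Hazcam: ~123 deg -- roughly consistent with the quoted
# FOVs; the residual reflects lens distortion and rounding of the scales.
```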

  15. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    OpenAIRE

    Rene de Jesus Romero-Troncoso; Jesus Rooney Rivera-Guillen; Roque Alfredo Osornio-Rios; Carlos Rodriguez-Donate

    2011-01-01

    Flexible manipulator robots have wide industrial application. Robot performance requires sensing its position and orientation adequately, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of h...

  16. [Robotic surgery].

    Science.gov (United States)

    Moreno-Portillo, Mucio; Valenzuela-Salazar, Carlos; Quiroz-Guadarrama, César David; Pachecho-Gahbler, Carlos; Rojano-Rodríguez, Martín

    2014-12-01

    Medicine has experienced greater scientific and technological advances in the last 50 years than in the rest of human history. The article describes relevant events, reviews concepts, advantages and clinical applications, summarizes published clinical results, and presents some personal reflections without giving dogmatic conclusions about robotic surgery. The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) defines robotic surgery as a surgical procedure using technology to aid the interaction between surgeon and patient. The objective of the surgical robot is to correct human deficiencies and improve surgical skills. The capacity to repeat tasks with precision and reproducibility has been the basis of the robot's success. Robotic technology offers objective and measurable advantages:
    - Improving maneuverability and physical capacity during surgery.
    - Correcting bad postural habits and tremor.
    - Allowing depth perception (3D images).
    - Magnifying strength and movement limits.
    - Offering a platform for sensors, cameras, and instruments.
    Endoscopic surgery transformed conceptually the way of practicing surgery. Nevertheless, in the last decade, robotic-assisted surgery has become the next paradigm of our era.

  17. Optimized intelligent control of a 2-degree of freedom robot for rehabilitation of lower limbs using neural network and genetic algorithm.

    Science.gov (United States)

    Aminiazar, Wahab; Najafi, Farid; Nekoui, Mohammad Ali

    2013-08-14

    There is an increasing trend toward using robots for medical purposes. One specific area is rehabilitation. Rehabilitation is one of the non-drug treatments in community health, meaning the restoration of abilities to maximize independence. It is prolonged and costly work. On the other hand, by using flexible and efficient robots in the rehabilitation area, this process becomes more useful for handicapped patients. In this study, a rule-based intelligent control methodology is proposed to mimic the behavior of a healthy limb in a satisfactory way with a 2-DOF planar robot. The inverse kinematics of the planar robot is solved by neural networks, and control parameters are optimized by a genetic algorithm as rehabilitation progresses. The results of simulations are presented by defining a simple physiotherapy mode on a desired trajectory. MATLAB/Simulink is used for the simulations. The system is capable of learning the actions of the physiotherapist for each patient and imitating this behaviour in the absence of a physiotherapist, which can be called robotherapy. In this study, a therapeutic-exercise planar 2-DOF robot is designed and controlled for lower-limb rehabilitation. The robot manipulator is controlled by a combination of hybrid and adaptive controls. Some safety factors and stability constraints are defined and obtained. The robot is stopped when the safety factors are not satisfied. The kinematics of the robot is estimated by an MLP neural network, and proper control parameters are achieved using GA optimization.

  18. Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery

    Science.gov (United States)

    Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia

    2016-03-01

    The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgery. According to the results, we demonstrate the positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.

  19. Mobile robot nonlinear feedback control based on Elman neural network observer

    Directory of Open Access Journals (Sweden)

    Khaled Al-Mutib

    2015-12-01

    Full Text Available This article presents a new approach to control a wheeled mobile robot without velocity measurement. The controller developed is based on kinematic model as well as dynamics model to take into account parameters of dynamics. These parameters related to dynamic equations are identified using a proposed methodology. Input–output feedback linearization is considered with a slight modification in the mathematical expressions to implement the dynamic controller and analyze the nonlinear internal behavior. The developed controllers require sensors to obtain the states needed for the closed-loop system. However, some states may not be available due to the absence of the sensors because of the cost, the weight limitation, reliability, induction of errors, failure, and so on. Particularly, for the velocity measurements, the required accuracy may not be achieved in practical applications due to the existence of significant errors induced by stochastic or cyclical noise. In this article, Elman neural network is proposed to work as an observer to estimate the velocity needed to complete the full state required for the closed-loop control and account for all the disturbances and model parameter uncertainties. Different simulations are carried out to demonstrate the feasibility of the approach in tracking different reference trajectories in comparison with other paradigms.

  20. Intercontinental network control platform and robotic observation for Chinese Antarctic telescopes

    Science.gov (United States)

    Xu, Lingzhe

    2012-09-01

    Chinese astronomical exploration of the Antarctic region has been initiated and is stepping forward; the R&D roadmap in this regard identifies each progressive step. Over the past several years China has set up the Kunlun station at Antarctic Dome-A, and the Chinese Small Telescope ARray (CSTAR) has already been up and running regularly. In addition, the Antarctic Schmidt Telescopes (AST3_1) unit was transported to the area in 2011 and has recently been placed in service, with larger telescopes predictably to come. The Antarctic region is one of the few best sites left on Earth for astronomical telescope observation, yet it has the worst fundamental living conditions for human survival and activity. To meet such a tough challenge, it is essential to establish an efficient and reliable means of remote access for routine telescope observation. This paper outlines the remote communication for CSTAR and AST3_1, and further proposes an intercontinental network control platform for a Chinese Antarctic telescope array with fully automatic remote control and robotic observation and management. A number of technical issues for telescope access, such as unattended operation, bandwidth over Iridium satellite transmission, and the means of reliable and secure communication, among other things, are reviewed and further analyzed.

  1. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    Science.gov (United States)

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  2. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions

  3. Connectivity-Preserving Approach for Distributed Adaptive Synchronized Tracking of Networked Uncertain Nonholonomic Mobile Robots.

    Science.gov (United States)

    Yoo, Sung Jin; Park, Bong Seok

    2017-09-06

    This paper addresses a distributed connectivity-preserving synchronized tracking problem of multiple uncertain nonholonomic mobile robots with limited communication ranges. The information of the time-varying leader robot is assumed to be accessible to only a small fraction of follower robots. The main contribution of this paper is to introduce a new distributed nonlinear error surface for dealing with both the synchronized tracking and the preservation of the initial connectivity patterns among nonholonomic robots. Based on this nonlinear error surface, the recursive design methodology is presented to construct the approximation-based local adaptive tracking scheme at the robot dynamic level. Furthermore, a technical lemma is established to analyze the stability and the connectivity preservation of the total closed-loop control system in the Lyapunov sense. An example is provided to illustrate the effectiveness of the proposed methodology.
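
    For orientation, the generic first-order consensus tracking update that such schemes build on can be sketched in a few lines; the paper's approximation-based, connectivity-preserving design for nonholonomic dynamics is substantially richer than this (the graph, gains, and states below are invented):

```python
# Generic first-order consensus/tracking update on a fixed graph -- a much
# simpler relative of the paper's adaptive, connectivity-preserving scheme.
import numpy as np

A = np.array([[0, 1, 0],            # follower adjacency (assumed topology)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])       # only follower 0 hears the leader
x = np.array([2.0, -1.0, 0.5])      # follower states
leader = 0.0
eps = 0.1                           # step size

for _ in range(200):
    neighbor_pull = A @ x - A.sum(axis=1) * x     # sum_j a_ij (x_j - x_i)
    leader_pull = b * (leader - x)
    x = x + eps * (neighbor_pull + leader_pull)

print(x)                            # all states converge near the leader (0)
```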

  4. Developmental word acquisition and grammar learning by humanoid robots through a self-organizing incremental neural network.

    Science.gov (United States)

    He, Xiaoyuan; Ogura, Tomotaka; Satou, Akihiro; Hasegawa, Osamu

    2007-10-01

    We present a new approach for online incremental word acquisition and grammar learning by humanoid robots. Using no data set provided in advance, the proposed system grounds language in a physical context, as mediated by its perceptual capacities. Learning is carried out using show-and-tell procedures, interacting with a human partner, and the procedure is open-ended for new words and multiword utterances. These facilities are supported by a self-organizing incremental neural network, which can execute online unsupervised classification and topology learning. Embodied with mental imagery, the system also learns, through both top-down and bottom-up processes, the syntactic structures contained in utterances, thereby performing simple grammar learning. Under such a multimodal scheme, the robot is able to describe online a given physical context (both static and dynamic) through natural language expressions. It can also perform actions through verbal interactions with its human partner.

  5. Orbit Refinement of Asteroids and Comets Using a Robotic Telescope Network

    Science.gov (United States)

    Lantz Caughey, Austin; Brown, Johnny; Puckett, Andrew W.; Hoette, Vivian L.; Johnson, Michael; McCarty, Cameron B.; Whitmore, Kevin; UNC-Chapel Hill SKYNET Team

    2016-01-01

    We report on a multi-semester project to refine the orbits of asteroids and comets in our Solar System. One of the newest fields of research for undergraduate Astrophysics students at Columbus State University is that of asteroid astrometry. By measuring the positions of an asteroid in a set of images, we can reduce the overall uncertainty in the accepted orbital parameters of that object. These measurements, using our WestRock Observatory (WRO) and several other telescopes around the world, are being published through the Minor Planet Center (MPC) and benefit the global community. Three different methods are used to obtain these observations. First, we use our own 24-inch telescope at WRO, located at CSU's Coca-Cola Space Science Center in downtown Columbus, Georgia. Second, we have access to data from the 20-inch telescope at Stone Edge Observatory in El Verano, California. Finally, we may request images remotely using Skynet, an online worldwide network of robotic telescopes. Our primary and long-time collaborator on Skynet has been the "41-inch" reflecting telescope at Yerkes Observatory in Williams Bay, Wisconsin. Thus far, we have used these various telescopes to refine the orbits of more than 15 asteroids and comets. We have also confirmed the resulting reduction in orbit-model uncertainties using Monte Carlo simulations and orbit visualizations, using Find_Orb and OrbitMaster software, respectively. Before any observatory site can be used for official orbit refinement projects, it must first become a trusted source of astrometry data for the MPC. We have therefore obtained Observatory Codes not only for our own WestRock Observatory (W22), but also for 3 Skynet telescopes that we may use in the future: Dark Sky Observatory in Boone, North Carolina (W38); Hume Observatory in Santa Rosa, California (U54); and Athabasca University Geophysical Observatory in Athabasca, Alberta, Canada (U96).

  6. Contextual Student Learning through Authentic Asteroid Research Projects using a Robotic Telescope Network

    Science.gov (United States)

    Hoette, Vivian L.; Puckett, Andrew W.; Linder, Tyler R.; Heatherly, Sue Ann; Rector, Travis A.; Haislip, Joshua B.; Meredith, Kate; Caughey, Austin L.; Brown, Johnny E.; McCarty, Cameron B.; Whitmore, Kevin T.

    2015-11-01

    Skynet is a worldwide robotic telescope network operated by the University of North Carolina at Chapel Hill with active observing sites on 3 continents. The queue-based observation request system is simple enough to be used by middle school students, but powerful enough to supply data for research scientists. The Skynet Junior Scholars program, funded by the NSF, has teamed up with professional astronomers to engage students from middle school to undergraduates in authentic research projects, from target selection through image analysis and publication of results. Asteroid research is a particularly fruitful area for youth collaboration that reinforces STEM education standards and can allow students to make real contributions to scientific knowledge, e.g., orbit refinement through astrometric submissions to the Minor Planet Center. We have created a set of projects for youth to: 1. Image an asteroid, make a movie, and post it to a gallery; 2. Measure the asteroid's apparent motion using the Afterglow online image processor; and 3. Image asteroids from two or more telescopes simultaneously to demonstrate parallax. The apparent motion and parallax projects allow students to estimate the distance to their asteroid, as if they were the discoverer of a brand new object in the solar system. Older students may take on advanced projects, such as analyzing uncertainties in asteroid orbital parameters; studying impact probabilities of known objects; observing time-sensitive targets such as Near Earth Asteroids; and even discovering brand new objects in the solar system. Images are acquired from among seven Skynet telescopes in North Carolina, California, Wisconsin, Canada, Australia, and Chile, as well as collaborating observatories such as WestRock in Columbus, Georgia; Stone Edge in El Verano, California; and Astronomical Research Institute in Westfield, Illinois.
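
    The parallax project reduces to the small-angle relation distance ≈ baseline / parallax angle; a worked example with made-up numbers:

```python
# Distance from simultaneous two-telescope parallax (small-angle relation);
# the baseline and measured shift below are made up for illustration.
import math

baseline_km = 8000.0                # assumed separation of two telescopes
parallax_arcsec = 15.0              # assumed measured angular shift
parallax_rad = math.radians(parallax_arcsec / 3600.0)
distance_km = baseline_km / parallax_rad    # small-angle approximation
print(f"{distance_km:.3e} km")      # ~1.1e8 km for these made-up numbers
```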

  7. A system for simulating shared memory in heterogeneous distributed-memory networks with specialization for robotics applications

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.P.; Bangs, A.L.; Butler, P.L.

    1991-01-01

    Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.
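
    The generated translation code Hetero Helix describes amounts to converting each shared-memory record between a machine's native number format and a common wire format; Python's struct module can illustrate the idea (the record layout below is hypothetical):

```python
# Translating a record between a machine-native layout and a common
# big-endian wire format, as Hetero Helix's generated code would do.
# The record layout (an int32 id plus a float64 reading) is hypothetical.
import struct

WIRE = ">id"                        # common format: big-endian int32 + float64
NATIVE = "<id"                      # e.g. a little-endian machine's layout

def to_wire(record_bytes: bytes) -> bytes:
    return struct.pack(WIRE, *struct.unpack(NATIVE, record_bytes))

def from_wire(wire_bytes: bytes) -> bytes:
    return struct.pack(NATIVE, *struct.unpack(WIRE, wire_bytes))

native = struct.pack(NATIVE, 42, 3.14)
assert from_wire(to_wire(native)) == native   # lossless round trip
print(struct.unpack(NATIVE, from_wire(to_wire(native))))  # (42, 3.14)
```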

  8. Autonomous military robotics

    CERN Document Server

    Nath, Vishnu

    2014-01-01

    This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the objec

  9. Consensus Formation Control for a Class of Networked Multiple Mobile Robot Systems

    Directory of Open Access Journals (Sweden)

    Long Sheng

    2012-01-01

    for investigating the sufficient conditions for linear control gain design for the system with constant time delays. Simulation results as well as experimental studies on Pioneer 3 series mobile robots are shown to verify the effectiveness of the proposed approach.

  10. Wireless robot teleoperation via internet using IPv6 over a bluetooth personal area network

    Directory of Open Access Journals (Sweden)

    Carlos Araque Rodríguez

    2010-01-01

    Full Text Available This article presents the design, construction, and testing of a system that allows manipulation and visualization of the Microbot Teachmover robot over a Bluetooth wireless connection with an IPv6 address, offering the possibility of operating the robot from different scenarios: from a mobile device in the same piconet as the robot; from a computer in the same piconet as the robot; and from a computer connected to the Internet with an IPv6 address.

  11. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    Science.gov (United States)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    Robot navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum/art-gallery environments. In this paper, we present a human-like navigation method, in which neural networks control the wheelchair robot to reach the goal location safely, by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller with the Conjugate Gradient Backpropagation training algorithm gives a robust response to guide the mobile robot accurately to the goal position.

  12. Fusion of Multi-Vision of Industrial Robot in MAS-Based Smart Space

    Directory of Open Access Journals (Sweden)

    Li Hexi

    2015-01-01

    Full Text Available The paper presents a fusion method for the multi-vision of an industrial robot in a smart space based on a multi-agent system (MAS). The robotic multi-vision consists of top-view, side-view, front-view and hand-eye cameras; the moving hand-eye camera provides vision guidance and gives an estimate of the robot position, while the other three cameras are used for target recognition and positioning. Each camera is connected to an agent based on an image-processing computer that aims to analyze images rapidly and satisfy the real-time requirements of data processing. As a learning strategy for robotic vision, a back-propagation neural network (BPNN) with a 3-layer architecture is first constructed for each agent and is independently trained as a classifier for target recognition, using the batch gradient descent method, based on the region features extracted from images of target samples (typical mechanical parts). The outputs of the trained BPNNs in the MAS-based smart space are then fused with Dempster-Shafer evidence theory to form a final recognition decision. The experimental results on typical mechanical parts show that fusion of multi-vision can improve robotic vision accuracy, and the MAS-based smart space contributes to the parallel processing of immense image data in a robotic multi-vision system.
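
    The fusion step named in the abstract is Dempster's rule of combination. Restricted to singleton hypotheses (a simplification of the general theory, which also allows compound hypotheses and ignorance mass), the rule reduces to a normalized product:

```python
# Dempster's rule of combination for two classifiers' singleton mass
# functions (a simplification: no compound hypotheses or ignorance mass).
def dempster_combine(m1: dict, m2: dict) -> dict:
    classes = set(m1) | set(m2)
    joint = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    conflict = 1.0 - sum(joint.values())        # mass on disagreeing pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {c: v / (1.0 - conflict) for c, v in joint.items()}

# Hypothetical outputs of two per-camera BPNN classifiers for one part:
cam_top  = {"bolt": 0.7, "nut": 0.2, "washer": 0.1}
cam_side = {"bolt": 0.6, "nut": 0.3, "washer": 0.1}
print(dempster_combine(cam_top, cam_side))      # bolt mass sharpens to ~0.86
```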

  13. A cellular mechanism for multi-robot construction via evolutionary multi-objective optimization of a gene regulatory network.

    Science.gov (United States)

    Guo, Hongliang; Meng, Yan; Jin, Yaochu

    2009-12-01

    A major research challenge of multi-robot systems is to predict the emerging behaviors from the local interactions of the individual agents. Biological systems can generate robust and complex behaviors through relatively simple local interactions in a world characterized by rapid changes, high uncertainty, infinite richness, and limited availability of information. Gene Regulatory Networks (GRNs) play a central role in understanding natural evolution and development of biological organisms from cells. In this paper, inspired by biological organisms, we propose a distributed GRN-based algorithm for a multi-robot construction task. Through this algorithm, multiple robots can self-organize autonomously into different predefined shapes, and self-reorganize adaptively under dynamic environments. This developmental process is evolved using a multi-objective optimization algorithm to achieve a shorter travel distance and less convergence time. Furthermore, a theoretical proof of the system's convergence is also provided. Various case studies have been conducted in the simulation, and the results show the efficiency and convergence of the proposed method.

  14. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model under a no-frame-independency assumption. The experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  15. Evaluation of a wearable tele-echography robot system: FASTele in a vehicle using a mobile network.

    Science.gov (United States)

    Ito, Keiichiro; Tsuruta, Koichi; Sugano, Shigeki; Iwata, Hiroyasu

    2011-01-01

    This paper shows the focused assessment with sonography for trauma (FAST) performance of a wearable tele-echography robot system we have developed, called "FASTele". FAST is a first-step way of assessing the injury severity of patients suffering from internal bleeding who may be some time away from hospital treatment. So far, we had only verified our system's effectiveness under wired network conditions. To determine its FAST performance within an emergency vehicle, we extended it to a WiMAX mobile network and performed experiments on it. Experimental results showed that paramedics could attach the system to FAST areas on a patient's body on the basis of the attachment position and procedure. We also assessed echo images to confirm that the system is able to extract the echo images required for FAST under maximum vehicle acceleration.

  16. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes which act as proportional, integral and derivative nodes. The gains of the MLRNN-based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than assumed randomly. A sequential learning based least squares algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of the two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA-optimized NNPID (OPTNNPID) controller and a CSA-optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Distributed, collaborative human-robotic networks for outdoor experiments in search, identify and track

    Science.gov (United States)

    Lee, Daniel; McClelland, Mark; Schneider, Joseph; Yang, Tsung-Lin; Gallagher, Dan; Wang, John; Shah, Danelle; Ahmed, Nisar; Moran, Pete; Jones, Brandon; Leung, Tung-Sing; Nathan, Aaron; Kress-Gazit, Hadas; Campbell, Mark

    2010-10-01

    This paper presents an overview of a human-robotic system under development at Cornell which is capable of mapping an unknown environment, as well as discovering, tracking, and neutralizing several static and dynamic objects of interest. In addition, the robots can coordinate their individual tasks with one another without overly burdening a human operator. The testbed utilizes the Segway RMP platform, with lidar, vision, IMU and GPS sensors. The software draws from autonomous systems research, specifically in the areas of pose estimation, target detection and tracking, motion and behavioral planning, and human robot interaction. This paper also details experimental scenarios of mapping, tracking, and neutralization presented by way of pictures, data, and movies.

  18. Improving social odometry robot networks with distributed reputation systems for collaborative purposes.

    Science.gov (United States)

    Fraga, David; Gutiérrez, Alvaro; Vallejo, Juan Carlos; Campo, Alexandre; Bankovic, Zorana

    2011-01-01

    The improvement of odometry systems in collaborative robotics remains an important challenge for several applications. Social odometry is a social technique which confers the robots the possibility to learn from the others. This paper analyzes social odometry and proposes and follows a methodology to improve its behavior based on cooperative reputation systems. We also provide a reference implementation that allows us to compare the performance of the proposed solution in highly dynamic environments with the performance of standard social odometry techniques. Simulation results quantitatively show the benefits of this collaborative approach that allows us to achieve better performances than social odometry.
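
    The essence of a reputation-weighted social odometry update fits in a few lines; the weighting rule below is a plausible stand-in, not the paper's mechanism:

```python
# Reputation-weighted fusion of a robot's own pose estimate with a peer's --
# a plausible stand-in for the paper's cooperative reputation mechanism.
import numpy as np

def social_update(own_pose, own_conf, peer_pose, peer_conf, peer_reputation):
    """Blend estimates; a peer's influence scales with confidence*reputation."""
    w_peer = peer_conf * peer_reputation
    w = w_peer / (own_conf + w_peer)
    return (1.0 - w) * np.asarray(own_pose) + w * np.asarray(peer_pose)

pose = social_update(own_pose=[1.0, 2.0], own_conf=0.4,
                     peer_pose=[1.4, 2.2], peer_conf=0.8,
                     peer_reputation=0.9)    # trusted, confident peer
print(pose)                                  # pulled toward the peer estimate
```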

  19. Improving Social Odometry Robot Networks with Distributed Reputation Systems for Collaborative Purposes

    Directory of Open Access Journals (Sweden)

    Zorana Bankovic

    2011-11-01

    Full Text Available The improvement of odometry systems in collaborative robotics remains an important challenge for several applications. Social odometry is a social technique which confers the robots the possibility to learn from the others. This paper analyzes social odometry and proposes and follows a methodology to improve its behavior based on cooperative reputation systems. We also provide a reference implementation that allows us to compare the performance of the proposed solution in highly dynamic environments with the performance of standard social odometry techniques. Simulation results quantitatively show the benefits of this collaborative approach that allows us to achieve better performances than social odometry.

  20. Robot-assisted general surgery.

    Science.gov (United States)

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  1. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    Science.gov (United States)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming system) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with robot performance, due to lack of consideration of variations in individual robot repeatability and thermal expansion of robot linkages. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology, which is limited to a web camera for live image transfer, poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, remote users have to assume the robot performance envelope provided by the manufacturer, which may cause a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state-of-the-art in Internet-based control and monitoring technology, with a specific aim at the accuracy calibration of a micro precision robotic system, by developing a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  2. Infrared Camera

    Science.gov (United States)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  3. Designing the optimal robotic milking barn by applying a queuing network approach

    NARCIS (Netherlands)

    Halachmi, I.; Adan, I.J.B.F.; Wald, van der J.; Beek, van P.; Heesterbeek, J.A.P.

    2003-01-01

    The design of various conventional dairy barns is based on centuries of experience, but there is hardly any experience with robotic milking barns (RMB). Furthermore, as each farmer has his own management practices, the optimal layout is 'site dependent'. A new universally applicable design

  4. Designing the optimal robotic milking barn: applying a queuing network approach

    NARCIS (Netherlands)

    Halachmi, I.; Adan, I.J.B.F.; Wal, J. van der; Beek, P. van; Heesterbeek, J.A.P.

    2003-01-01

    The design of various conventional dairy barns is based on centuries of experience, but there is hardly any experience with robotic milking barns (RMB). Furthermore, as each farmer has his own management practices, the optimal layout is 'site dependent'. A new universally applicable design

  5. Lunar Reconnaissance Orbiter Camera Observations Relating to Science and Landing Site Selection in South Pole-Aitken Basin for a Robotic Sample Return Mission

    Science.gov (United States)

    Jolliff, B. L.; Clegg-Watkins, R. N.; Petro, N. E.; Lawrence, S. J.

    2016-12-01

    The Moon's South Pole-Aitken basin (SPA) is a high priority target for Solar System exploration, and sample return from SPA is a specific objective in NASA's New Frontiers program. Samples returned from SPA will improve our understanding of early lunar and Solar System events, mainly by placing firm timing constraints on SPA formation and post-SPA late-heavy bombardment (LHB). Lunar Reconnaissance Orbiter Camera (LROC) images and topographic data, especially Narrow Angle Camera (NAC) scale (1-3 mpp) morphology and digital terrain model (DTM) data are critical for selecting landing sites and assessing landing hazards. Rock components in regolith at a given landing site should include (1) original SPA impact-melt rocks and breccia (to determine the age of the impact event and what materials were incorporated into the melt); (2) impact-melt rocks and breccia from large craters and basins (other than SPA) that represent the post-SPA LHB interval; (3) volcanic basalts derived from the sub-SPA mantle; and (4) older, "cryptomare" (ancient buried volcanics excavated by impact craters, to determine the volcanic history of SPA basin). All of these rock types are sought for sample return. The ancient SPA-derived impact-melt rocks and later-formed melt rocks are needed to determine chronology, and thus address questions of early Solar System dynamics, lunar history, and effects of giant impacts. Surface compositions from remote sensing are consistent with mixtures of SPA impactite and volcanic materials, and near infrared spectral data distinguish areas with variable volcanic contents vs. excavated SPA substrate. Estimating proportions of these rock types in the regolith requires knowledge of the surface deposits, evaluated via morphology, slopes, and terrain ruggedness. These data allow determination of mare-cryptomare-nonmare deposit interfaces in combination with compositional and mineralogical remote sensing to establish the types and relative proportions of materials

  6. Lunar Reconnaissance Orbiter Camera Observations Relating to Science and Landing Site Selection in South Pole-Aitken Basin for a Robotic Sample Return Mission

    Science.gov (United States)

    Jolliff, B. L.; Clegg-Watkins, R. N.; Petro, N. E.; Lawrence, S. L.

    2016-01-01

    The Moon's South Pole-Aitken basin (SPA) is a high priority target for Solar System exploration, and sample return from SPA is a specific objective in NASA's New Frontiers program. Samples returned from SPA will improve our understanding of early lunar and Solar System events, mainly by placing firm timing constraints on SPA formation and the post-SPA late-heavy bombardment (LHB). Lunar Reconnaissance Orbiter Camera (LROC) images and topographic data, especially Narrow Angle Camera (NAC) scale (1-3 mpp) morphology and digital terrain model (DTM) data are critical for selecting landing sites and assessing landing hazards. Rock components in regolith at a given landing site should include (1) original SPA impact-melt rocks and breccia (to determine the age of the impact event and what materials were incorporated into the melt); (2) impact-melt rocks and breccia from large craters and basins (other than SPA) that represent the post-SPA LHB interval; (3) volcanic basalts derived from the sub-SPA mantle; and (4) older, "cryptomare" (ancient buried volcanics excavated by impact craters, to determine the volcanic history of SPA basin). All of these rock types are sought for sample return. The ancient SPA-derived impact-melt rocks and later-formed melt rocks are needed to determine chronology, and thus address questions of early Solar System dynamics, lunar history, and effects of giant impacts. Surface compositions from remote sensing are consistent with mixtures of SPA impactite and volcanic materials, and near infrared spectral data distinguish areas with variable volcanic contents vs. excavated SPA substrate. Estimating proportions of these rock types in the regolith requires knowledge of the surface deposits, evaluated via morphology, slopes, and terrain ruggedness. These data allow determination of mare-cryptomare-nonmare deposit interfaces in combination with compositional and mineralogical remote sensing to establish the types and relative proportions of materials

  7. Autonomous Robotic Inspection in Tunnels

    Science.gov (United States)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

    In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructure, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created utilizing photogrammetric methods. Finally, a laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing for the deduction of potential deformations. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Then, real-time 3D information is accurately calculated and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., in the Egnatia Highway and London Underground infrastructure.

  8. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
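
    The time-of-flight principle these cameras build on is a one-line relation: depth = c·Δt/2 for pulsed sensors, or c·φ/(4π·f_mod) for continuous-wave ones; illustrative values below:

```python
# Time-of-flight depth relations: pulsed (round-trip time) and
# continuous-wave (phase shift of a modulated signal). Values illustrative.
import math

C = 299_792_458.0                        # speed of light, m/s

def depth_from_pulse(delta_t_s: float) -> float:
    return C * delta_t_s / 2.0           # round trip halved

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)   # continuous-wave ToF

print(depth_from_pulse(6.67e-9))             # ~1.0 m round trip
print(depth_from_phase(math.pi / 2, 20e6))   # ~1.87 m at 20 MHz modulation
```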

  9. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning ... in using robots for image data set generation.

  10. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  11. Robotic surgery

    Science.gov (United States)

    Robot-assisted surgery; Robotic-assisted laparoscopic surgery; Laparoscopic surgery with robotic assistance ... computer station and directs the movements of a robot. Small surgical tools are attached to the robot's ...

  12. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    Science.gov (United States)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process for the RICOH-THETA, a full-view fisheye camera, is proposed; the camera has numerous applications as a low-cost sensor in different disciplines such as photogrammetry, robotics, machine vision and so on. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately, and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network using the central and side images captured by the two lenses. The designed calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After applying the corresponding corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from both sets of EOPs. Our experiments show that by applying a 3x3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
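
    The correction step described here, bilinear interpolation of a coarse free-distortion grid at each measured image point, looks roughly like this (the grid values and image size are invented):

```python
# Bilinear interpolation of a coarse distortion-correction grid at an image
# point, as in the abstract's free-distortion-grid step (values invented).
import numpy as np

# 3x3 grid of correction terms (e.g., degrees) spanning a w x h image.
grid = np.array([[0.0, 0.1, 0.0],
                 [0.2, 0.3, 0.2],
                 [0.0, 0.1, 0.0]])
w, h = 1000.0, 1000.0

def correction_at(u, v):
    """Bilinearly interpolate the grid at pixel (u, v)."""
    gx = u / w * (grid.shape[1] - 1)    # image -> grid coordinates
    gy = v / h * (grid.shape[0] - 1)
    x0, y0 = int(gx), int(gy)
    x1, y1 = min(x0 + 1, grid.shape[1] - 1), min(y0 + 1, grid.shape[0] - 1)
    fx, fy = gx - x0, gy - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bot = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bot

print(correction_at(500.0, 500.0))      # image center -> 0.3
```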

  13. Hardware platform for multiple mobile robots

    Science.gov (United States)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles can be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform consisting of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access-layer protocols allows straightforward integration with the TCP/IP Internet protocols. For inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable obstacle detection, distance measurement, and creation of a map of the room.

  14. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments.

    Science.gov (United States)

    Duque Domingo, Jaime; Cerrada, Carlos; Valero, Enrique; Cerrada, Jose A

    2017-10-20

    This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smartphones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.
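
    The combination described here, a coarse WiFi fix refined by anonymous RGB-D detections, can be illustrated with a toy fusion step. This is not the paper's algorithm: the sketch assumes a single tracked user, made-up standard deviations for both sensors, nearest-detection data association, and inverse-variance weighting.

        import numpy as np

        def fuse_wifi_depth(wifi_xy, wifi_sigma, detections, cam_sigma):
            """Associate a WiFi position fix with the nearest depth-camera
            detection, then fuse the two by inverse-variance weighting.
            All positions are 2-D floor-plane coordinates in metres."""
            wifi_xy = np.asarray(wifi_xy, dtype=float)
            detections = np.asarray(detections, dtype=float)
            if detections.size == 0:
                return wifi_xy  # fall back to WiFi alone when no one is seen
            d = np.linalg.norm(detections - wifi_xy, axis=1)
            cand = detections[np.argmin(d)]          # data association step
            w_wifi, w_cam = 1.0 / wifi_sigma**2, 1.0 / cam_sigma**2
            return (w_wifi * wifi_xy + w_cam * cand) / (w_wifi + w_cam)

        # A coarse WiFi fix (sigma ~ 3 m) refined by two camera detections.
        print(fuse_wifi_depth([4.0, 2.5], 3.0, [[4.6, 2.2], [9.1, 7.4]], 0.3))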

  15. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments

    Directory of Open Access Journals (Sweden)

    Jaime Duque Domingo

    2017-10-01

    Full Text Available This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smartphones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.

  16. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  17. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
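
    As a rough illustration of the pipeline these records describe, per-modality CNN feature extraction followed by fusion of the visible-light and thermal streams, here is a minimal two-stream sketch in PyTorch. The layer sizes, input shapes, and the score-level fusion rule are assumptions made for illustration; they are not the authors' network.

        import torch
        import torch.nn as nn

        class SmallCNN(nn.Module):
            """Tiny CNN feature extractor and classifier for one modality."""
            def __init__(self, in_ch):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1))
                self.classifier = nn.Linear(32, 2)  # male / female logits

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        def fused_gender_scores(visible, thermal, net_vis, net_thr, alpha=0.5):
            """Score-level fusion: weighted sum of per-stream softmax
            probabilities (alpha weights the visible-light stream)."""
            p_vis = torch.softmax(net_vis(visible), dim=1)
            p_thr = torch.softmax(net_thr(thermal), dim=1)
            return alpha * p_vis + (1 - alpha) * p_thr

        # Random tensors standing in for body-region crops from both cameras.
        net_v, net_t = SmallCNN(3), SmallCNN(1)
        vis = torch.randn(4, 3, 64, 32)   # RGB crops
        thr = torch.randn(4, 1, 64, 32)   # thermal crops
        print(fused_gender_scores(vis, thr, net_v, net_t).shape)  # (4, 2)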

  19. NASA's International Lunar Network Anchor Nodes and Robotic Lunar Lander Project Update

    Science.gov (United States)

    Cohen, Barbara A.; Bassler, Julie A.; Ballard, Benjamin; Chavers, Greg; Eng, Doug S.; Hammond, Monica S.; Hill, Larry A.; Harris, Danny W.; Hollaway, Todd A.; Kubota, Sanae; hide

    2010-01-01

    NASA Marshall Space Flight Center and The Johns Hopkins University Applied Physics Laboratory have been conducting mission studies and performing risk reduction activities for NASA's robotic lunar lander flight projects. Additional mission studies have been conducted to support other objectives of the lunar science and exploration community and extensive risk reduction design and testing has been performed to advance the design of the lander system and reduce development risk for flight projects.

  20. A new technique in mobile robot simultaneous localization and mapping

    National Research Council Canada - National Science Library

    Vivek Anand Sujan; Marco Antonio Meggiolaro; Felipe Augusto Weilemann Belo

    2006-01-01

    .... In this algorithm, the information content present in sub-regions of a 2-D panoramic image of the environment is determined from the robot's current location using a single camera fixed on the mobile robot...

  1. A new approach to investigate an eruptive paroxysmal sequence using camera and strainmeter networks: Lessons from the 3-5 December 2015 activity at Etna volcano

    Science.gov (United States)

    Bonaccorso, A.; Calvari, S.

    2017-10-01

    Explosive sequences are quite common at basaltic and andesitic volcanoes worldwide. Studies aimed at short-term forecasting are usually based on seismic and ground deformation measurements, which can be used to constrain the source region and quantify the magma volume involved in the eruptive process. However, during single episodes of explosive sequences, integration of camera remote sensing and geophysical data is scant in the literature, and the total volume of pyroclastic products is not determined. In this study, we calculate eruption parameters for four powerful lava fountains occurring at the main and oldest Mt. Etna summit crater, Voragine, between 3 and 5 December 2015. These episodes produced impressive eruptive columns and plume clouds, causing lapilli and ash fallout to more than 100 km away. We analyse these paroxysmal events by integrating the images recorded by a network of monitoring cameras and the signals from three high-precision borehole strainmeters. From the camera images we calculated the total erupted volume of fluids (gas plus pyroclastics), inferring amounts from 1.9 × 10⁹ m³ (first event) to 0.86 × 10⁹ m³ (third event). Strain changes recorded during the first and most powerful event were used to constrain the depth of the source. The ratios of strain changes recorded at two stations during the four lava fountains were used to constrain the pyroclastic fraction for each eruptive event. The results revealed that the explosive sequence was characterized by a decreasing trend of erupted pyroclastics with time, going from 41% (first event) to 13% (fourth event) of the total erupted pyroclastic volume. Moreover, the volume ratio fluid/pyroclastic decreased markedly in the fourth and last event. To the best of our knowledge, this is the first time ever that erupted volumes of both fluid and pyroclastics have been estimated for an explosive sequence from a monitoring system using permanent cameras and high precision strainmeters. During future

  2. MVACS Robotic Arm

    Science.gov (United States)

    Bonitz, R.; Slostad, J.; Bon, B.; Braun, D.; Brill, R.; Buck, C.; Fleischner, R.; Haldeman, A.; Herman, J.; Hertzel, M.; hide

    2000-01-01

    The primary purpose of the Mars Volatiles and Climate Surveyor (MVACS) Robotic Arm is to support the other MVACS science instruments by digging trenches in the Martian soil; acquiring and dumping soil samples into the thermal evolved gas analyzer (TEGA); positioning the Soil Temperature Probe (STP) in the soil; positioning the Robotic Arm Air Temperature Sensor (RAATS) at various heights above the surface; and positioning the Robotic Arm Camera (RAC) for taking images of the surface, trench, soil samples, magnetic targets, and other objects of scientific interest within its workspace.

  3. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of a robotic manipulator are controlled with the help of visual feedback by a new method that reduces position errors by an order of magnitude. The robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of the workspace and two mounted on the wrist of an auxiliary robot arm. The stereoscopic cameras on the arm give close-up views of the object and end effector. The cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  4. Adaptive Task-Space Cooperative Tracking Control of Networked Robotic Manipulators Without Task-Space Velocity Measurements.

    Science.gov (United States)

    Liang, Xinwu; Wang, Hesheng; Liu, Yun-Hui; Chen, Weidong; Hu, Guoqiang; Zhao, Jie

    2016-10-01

    In this paper, the task-space cooperative tracking control problem of networked robotic manipulators without task-space velocity measurements is addressed. To overcome the problem without task-space velocity measurements, a novel task-space position observer is designed to update the estimated task-space position and to simultaneously provide the estimated task-space velocity, based on which an adaptive cooperative tracking controller without task-space velocity measurements is presented by introducing new estimated task-space reference velocity and acceleration. Furthermore, adaptive laws are provided to cope with uncertain kinematics and dynamics and rigorous stability analysis is given to show asymptotical convergence of the task-space tracking and synchronization errors in the presence of communication delays under strongly connected directed graphs. Simulation results are given to demonstrate the performance of the proposed approach.
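
    The central idea of this record, estimating the unmeasured task-space velocity with an observer and closing the tracking loop on the estimate, can be demonstrated on a toy plant. The sketch below is a single-robot, linear, non-adaptive stand-in (a Luenberger-style observer on a unit-mass double integrator), not the networked adaptive controller of the paper; all gains and the reference trajectory are arbitrary.

        import numpy as np

        def observer_based_tracking(T=5.0, dt=0.001, k1=20.0, k2=100.0):
            """Track x_d(t) = sin(t) using only position measurements; the
            velocity used by the controller comes from the observer."""
            x, v = 0.0, 0.0      # true plant state: x'' = u
            xh, vh = 0.5, 0.0    # observer state (deliberately wrong start)
            for i in range(int(T / dt)):
                t = i * dt
                xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)
                u = ad - 10.0 * (x - xd) - 5.0 * (vh - vd)  # PD on estimate
                e = x - xh                       # measurable estimation error
                xh += dt * (vh + k1 * e)         # observer: position update
                vh += dt * (u + k2 * e)          # observer: velocity update
                v += dt * u                      # plant integration (Euler)
                x += dt * v
            return x - np.sin(T), vh - v         # tracking / estimation errors

        print(observer_based_tracking())  # both errors end up small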

  5. A Proposal for the Development of a Robot-Based Physical Distribution and Transportation Network for Urban Environments

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Brooks, Andrew G.

    2010-01-01

    The personal automobile ushered in a renaissance in individual freedom of movement. This general-purpose vehicle is capable of fulfilling almost all of the local and regional transportation needs of the average citizen. It can commute the owner to work and leisure, ferry passengers, deliver packages and groceries, and in many cases can even haul other modes of transportation like bicycles, snowmobiles, trailers and boats. However, like any general-purpose system, serious inefficiencies must be tolerated in individual tasks in order to provide such a wide breadth of capabilities, leading to the consideration of specialized systems for realizing quantum leaps in efficiency. We therefore propose a novel variant of an automated transportation system: a dedicated network of transportation robots for delivering small-to-medium scale physical objects within the range of commuter automobiles for use...

  6. Beating the cost curve and redefining the scientific telescope utility using 0.4-meter robotic cluster network

    Science.gov (United States)

    Dubberley, Matthew A.; Walker, Zachary A.; Haldeman, Benjamin J.

    2008-07-01

    Las Cumbres Observatory Global Telescope (LCOGT) is redefining the function of robotic telescopes by deploying 0.4-meter telescopes that act as a highly networked intelligent instrument. The 0.4-meter telescopes (P4) are optimized for quick and accurate object acquisition and tracking. This minimizes response time and enables the leveraging of the instrument. A single P4 can independently execute multiple science programs concurrently or team up with other P4s for deeper or multi-color observations of a single target. The intelligent control software will optimize the observation schedule for each individual telescope and the entire network. LCOGT is deploying 6 networked clusters consisting of four P4s around the world, providing capacity and versatility beyond the classical observatory. Each P4 has zero-slippage, no-backlash friction drive systems and is currently achieving 20 deg/s slewing. Blind pointing is currently 8 arcsec RMS. Using the AG acquisition routine, the drive will have repeatable pointing to within 0.6 arcsec within 12 seconds from anywhere on the sky. Other features include wind buffet correction, rapid thermalization, dual autoguiders, a novel scanning flat-fielding device, a large 20 kg instrument capacity, a high-speed instrument changer, and a stiff split-ring mount.

  7. Deep learning with convolutional neural networks: a resource for the control of robotic prosthetic hands via electromyography

    Directory of Open Access Journals (Sweden)

    Manfredo Atzori

    2016-09-01

    Full Text Available Motivation: Natural control methods based on surface electromyography and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are in the best case capable of offering natural control for only a few movements. Objective: In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its capabilities for the natural control of robotic hands via surface electromyography by providing a baseline on a large number of intact and amputated subjects. Methods: We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 hand-amputated subjects. The simple architecture of the neural network allowed us to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. Results: The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods but lower than the results obtained with the best reference methods in our tests. Significance: The results show that convolutional neural networks with a very simple architecture can produce accuracy comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of surface electromyography data. Finally, the results suggest that deeper and more complex networks may increase dexterous control robustness, thus contributing to bridging the gap between the market and scientific research.

  8. Development Of A Mobile Robot As A Test Bed For Tele-Presentation

    Directory of Open Access Journals (Sweden)

    Diogenes Armando D. Pascua

    2016-01-01

    Full Text Available In this paper a human-sized tracked-wheel robot with a large payload capacity for tele-presentation is presented. The robot is equipped with different sensors for obstacle avoidance and localization. A high-definition web camera installed atop a pan-and-tilt assembly provides remote environment feedback to users. An LCD monitor provides the visual display of the operator in the remote environment using the standard Skype teleconferencing software. Remote control is done via the internet through the free TeamViewer VNC remote desktop software. Moreover, this paper presents the design details, fabrication and evaluation of the individual components. Core mobile robot movement and navigational controls were developed and tested. The effectiveness of the mobile robot as a test bed for tele-presentation was evaluated and analyzed in terms of its real-time response and the time-delay effects of the network.

  9. Development of a Mobile Robot as a Test Bed for Tele-Presentation

    Directory of Open Access Journals (Sweden)

    Diogenes Armando D. Pascua

    2016-05-01

    Full Text Available In this paper a human-sized tracked-wheel robot with a large payload capacity for tele-presentation is presented. The robot is equipped with different sensors for obstacle avoidance and localization. A high-definition web camera installed atop a pan-and-tilt assembly provides remote environment feedback to users. An LCD monitor provides the visual display of the operator in the remote environment using the standard Skype teleconferencing software. Remote control is done via the internet through the free TeamViewer VNC remote desktop software. Moreover, this paper presents the design details, fabrication and evaluation of the individual components. Core mobile robot movement and navigational controls were developed and tested. The effectiveness of the mobile robot as a test bed for tele-presentation was evaluated and analyzed in terms of its real-time response and the time-delay effects of the network.

  10. Supervised Autonomy for Exploration and Mobile Manipulation in Rough Terrain with a Centaur-like Robot

    Directory of Open Access Journals (Sweden)

    Max Schwarz

    2016-10-01

    Full Text Available Planetary exploration scenarios illustrate the need for autonomous robots that are capable of operating in unknown environments without direct human interaction. At the DARPA Robotics Challenge, we demonstrated that our Centaur-like mobile manipulation robot Momaro can solve complex tasks when teleoperated. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for Momaro. Our robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. Based on the generated height map, we assess drivability, plan navigation paths, and execute them using the omnidirectional drive. Using its four legs, the robot adapts to the slope of the terrain. Momaro perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, on-the-fly reconfiguration, and teleoperation, we developed a ground station with suitable operator interfaces. To handle network communication interruptions and latencies between robot and ground station, we implemented a robust network layer for the ROS middleware. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015. We also discuss the lessons learned from this demonstration.
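
    The terrain-assessment step mentioned here, deriving drivability from a laser-built height map, lends itself to a compact illustration. The sketch below is a generic slope-and-step test over a grid height map with invented thresholds; it is not Momaro's actual assessment pipeline.

        import numpy as np

        def drivability(height_map, cell_size, max_slope_deg=20.0, max_step=0.15):
            """Mark height-map cells drivable if local slope and step height
            stay below thresholds (units: metres, degrees)."""
            gy, gx = np.gradient(height_map, cell_size)
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))
            pad = np.pad(height_map, 1, mode='edge')
            step = np.max(np.stack([                 # 4-neighbour step height
                np.abs(pad[1:-1, 1:-1] - pad[:-2, 1:-1]),
                np.abs(pad[1:-1, 1:-1] - pad[2:, 1:-1]),
                np.abs(pad[1:-1, 1:-1] - pad[1:-1, :-2]),
                np.abs(pad[1:-1, 1:-1] - pad[1:-1, 2:])]), axis=0)
            return (slope <= max_slope_deg) & (step <= max_step)

        # A flat floor with a sharp 0.3 m ledge, on a 0.1 m grid.
        z = np.zeros((20, 20)); z[:, 10:] = 0.3
        mask = drivability(z, cell_size=0.1)
        print(mask[:, 9:12].all(axis=0))  # cells at the ledge are not drivable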

  11. Teleautonomous Control on Rescue Robot Prototype

    Directory of Open Access Journals (Sweden)

    Son Kuswadi

    2012-12-01

    Full Text Available Robot applications in disaster areas can help responder teams save victims. In order to finish its task, a robot must have a flexible movement mechanism so it can pass through cluttered areas. Passive linkage can be used on the robot chassis to give the robot flexibility. In physical experiments, the robot succeeded in moving through gravel and over a 5 cm obstacle. A rescue robot also has specialized control needs: it must be controllable remotely, and it must also have the ability to move autonomously. The teleautonomous control method is a combination of those two methods. It can be concluded from the experiments that in teleoperation mode, the operator must get used to seeing the environment through the robot's camera, while in autonomous mode the robot succeeded in avoiding obstacles and searching for the target based on sensor readings and the controller program. In teleautonomous mode, the robot can change control mode by using Bluetooth communication for data transfer, so robot control will be more flexible.

  12. Multi-camera 3D Object Reconstruction for Industrial Automation

    OpenAIRE

    Bitzidou, Malamati; Chrysostomou, Dimitrios; Gasteratos, Antonios

    2012-01-01

    Part 2: Design, Manufacturing and Production Management; International audience; In this paper, a method to automate industrial manufacturing processes using an intelligent multi-camera system to assist a robotic arm on a production line is presented. The examined assembly procedure employs a volumetric method for the initial estimation of object’s properties and an octree decomposition process to generate the path plans for the robotic arm. Initially, the object is captured by four cameras a...

  13. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  14. Lunar optical wireless communication and navigation network for robotic and human exploration

    Science.gov (United States)

    Arnon, Shlomi

    2011-09-01

    Exploration of the moon is a stepping stone for further research of our solar system, the galaxy and, ultimately, the universe. Many intriguing questions arise regarding the moon: what is the moon's composition and structure, what is the potential for settlement or colonization, and how did our solar system evolve, to name a few. New technologies are required in order to answer these questions. The main goal of our project is to develop technologies for optical wireless communication and navigation systems for use in robotic and human exploration of the moon. These technologies facilitate the exploration of the moon's surface by enabling the placement of scientific equipment at precise locations and subsequently transferring the acquired information at high data rates. The main advantages of optical technology in comparison with RF technology are: a) high data rate transmission, b) small size and weight of equipment, c) low power consumption, d) very high accuracy in measuring range and orientation, and e) no contamination of the quiet electromagnetic (EM) environment on the dark side of the moon. In this paper we present a mathematical model and an engineering implementation of a system that simultaneously communicates and measures the location and orientation of a remote robot on the moon.

  15. Robotic vehicle uses acoustic array for detection and localization in urban environments

    Science.gov (United States)

    Young, Stuart H.; Scanlon, Michael V.

    2001-09-01

    Sophisticated robotic platforms with diverse sensor suites are quickly replacing the eyes and ears of soldiers on the complex battlefield. The Army Research Laboratory (ARL) in Adelphi, Maryland has developed a robot-based acoustic detection system that will detect an impulsive noise event, such as a sniper's weapon firing or a door slam, and activate a pan-tilt to orient a visible and infrared camera toward the detected sound. Once the cameras are cued to the target, onboard image processing can then track the target and/or transmit the imagery to a remote operator for navigation, situational awareness, and target detection. Such a vehicle can provide reconnaissance, surveillance, and target acquisition for soldiers, law enforcement, and rescue personnel, and remove these people from hazardous environments. ARL's primary robotic platforms contain 16-in. diameter, eight-element acoustic arrays. Additionally, a 9-in. array is being developed in support of DARPA's Tactical Mobile Robot program. The robots have been tested in both urban and open terrain. The current acoustic processing algorithm has been optimized to detect the muzzle blast from a sniper's weapon and reject many interfering noise sources such as wind gusts, generators, and self-noise. However, other detection algorithms for speech and vehicle detection/tracking are being developed for implementation on this and smaller robotic platforms. The collaboration between two robots, both with known positions and orientations, can provide useful triangulation information for more precise localization of the acoustic events. These robots can be mobile sensor nodes in a larger, more expansive sensor network that may include stationary ground sensors, UAVs, and other command and control assets. This report will document the performance of the robot's acoustic localization, describe the algorithm, and outline future work.
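
    The report notes that two robots with known poses can triangulate an acoustic event from their bearing estimates. A minimal version of that geometry is sketched below; the poses, bearings, and the ray-intersection formulation are illustrative only.

        import numpy as np

        def triangulate(p1, b1, p2, b2):
            """Locate a source from two world-frame bearings (radians) taken at
            robot positions p1 and p2 by intersecting the two bearing rays,
            i.e. solving p1 + t1*d1 = p2 + t2*d2 for t1, t2."""
            d1 = np.array([np.cos(b1), np.sin(b1)])
            d2 = np.array([np.cos(b2), np.sin(b2)])
            A = np.column_stack([d1, -d2])
            rhs = np.asarray(p2, float) - np.asarray(p1, float)
            t = np.linalg.solve(A, rhs)   # singular if the bearings are parallel
            return np.asarray(p1, float) + t[0] * d1

        # A source at (5, 5) heard by two robots at known positions.
        src = triangulate([0, 0], np.arctan2(5, 5), [10, 0], np.arctan2(5, -5))
        print(np.round(src, 3))  # -> [5. 5.]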

  16. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  17. Robot maps, robot moves, robot avoids

    OpenAIRE

    Farrugia, Claire

    2014-01-01

    Robotics is a cornerstone for this century’s innovations. From robot nurses to your own personal assistant, most robots need to know: ‘where is it?’ ‘Where should it go?’ And ‘how to get there?’ Without answers to these questions a robot cannot do much. http://www.um.edu.mt/think/robot-maps-robot-moves-robot-avoids/

  18. Intelligent Control of Welding Gun Pose for Pipeline Welding Robot Based on Improved Radial Basis Function Network and Expert System

    Directory of Open Access Journals (Sweden)

    Jingwen Tian

    2013-02-01

    Full Text Available Since the control system of the welding gun pose in whole-position welding is complicated and nonlinear, an intelligent control system of welding gun pose for a pipeline welding robot based on an improved radial basis function neural network (IRBFNN) and expert system (ES) is presented in this paper. The structure of the IRBFNN is constructed and the improved genetic algorithm is adopted to optimize the network structure. This control system makes full use of the characteristics of the IRBFNN and the ES. The ADXRS300 micro-mechanical gyro is used as the welding gun position sensor in this system. When the welding gun position is obtained, an appropriate pitch angle can be obtained through expert knowledge and the numeric reasoning capacity of the IRBFNN. ARM is used as the controller to drive the welding gun pitch angle step motor in order to adjust the pitch angle of the welding gun in real-time. The experiment results show that the intelligent control system of the welding gun pose using the IRBFNN and expert system is feasible and it enhances the welding quality. This system has wide prospects for application.

  19. Using strategic movement to calibrate a neural compass: a spiking network for tracking head direction in rats and robots.

    Directory of Open Access Journals (Sweden)

    Peter Stratton

    Full Text Available The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements are able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that 'grounding' of modelled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.

  20. An omnidirectional camera simulation for the USARSim world

    NARCIS (Netherlands)

    Schmits, T.; Visser, A.

    2009-01-01

    Omnidirectional vision is currently an important sensor in robotic research. The catadioptric omnidirectional camera with a hyperbolic convex mirror is a common omnidirectional vision system in the robotics research field as it has many advantages over other vision systems. This paper describes the

  1. Spherical Camera

    Science.gov (United States)

    1997-01-01

    Developed largely through a Small Business Innovation Research contract through Langley Research Center, Interactive Picture Corporation's IPIX technology provides spherical photography with a panoramic 360-degree view. NASA found the technology appropriate for use in guiding space robots, in the space shuttle and space station programs, as well as research in cryogenic wind tunnels and for remote docking of spacecraft. Images of any location are captured in their entirety in a 360-degree immersive digital representation. The viewer can navigate to any desired direction within the image. Several car manufacturers already use IPIX to give viewers a look at their latest line-up of automobiles. Another application is for non-invasive surgeries. By using OmniScope, surgeons can look more closely at various parts of an organ with medical viewing instruments now in use. Potential applications of IPIX technology include viewing of homes for sale, hotel accommodations, museum sites, news events, and sports stadiums.

  2. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  3. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by using intelligent systems. Intelligent systems are to cell units as intelligent robots are to body components, and the two technologies have progressed in synchrony. Leveraging robotics and intelligent systems, applications span boundlessly from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners some methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  4. An industrial robot singular trajectories planning based on graphs and neural networks

    Science.gov (United States)

    Łęgowski, Adrian; Niezabitowski, Michał

    2016-06-01

    Singular trajectories are rarely used because of issues during realization. A method of planning trajectories for a given set of points in task space with the use of graphs and neural networks is presented. At every desired point, the inverse kinematics problem is solved in order to derive all possible solutions. A graph of solutions is then constructed, and the shortest path through it is determined to define the required nodes in joint space. Neural networks are used to define the path between these nodes.
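
    The graph step in this record, one layer of inverse-kinematics solutions per task-space point with a shortest path selecting a consistent configuration, can be sketched directly. Below is a generic layered-graph search (Dijkstra) with an invented joint-displacement cost; the paper's cost function and the neural-network stage are not reproduced.

        import heapq

        def plan_through_ik_graph(ik_solutions):
            """Pick one IK solution per waypoint minimizing total joint travel.
            ik_solutions[i] lists candidate joint vectors for waypoint i; the
            edge cost is the largest single-joint displacement between layers."""
            def cost(a, b):
                return max(abs(x - y) for x, y in zip(a, b))
            heap = [(0.0, 0, j, [j]) for j in range(len(ik_solutions[0]))]
            heapq.heapify(heap)
            done = set()
            while heap:
                c, layer, j, path = heapq.heappop(heap)
                if (layer, j) in done:
                    continue
                done.add((layer, j))
                if layer == len(ik_solutions) - 1:
                    return c, path       # first goal popped is optimal
                for k, nxt in enumerate(ik_solutions[layer + 1]):
                    heapq.heappush(heap, (c + cost(ik_solutions[layer][j], nxt),
                                          layer + 1, k, path + [k]))

        # Two IK branches (e.g., elbow-up / elbow-down) at three waypoints.
        sols = [[(0.0, 1.0), (3.1, -1.0)],
                [(0.2, 1.1), (3.0, -0.9)],
                [(0.3, 1.2), (2.9, -0.8)]]
        print(plan_through_ik_graph(sols))  # keeps one consistent branch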

  5. Robot Actors, Robot Dramaturgies

    DEFF Research Database (Denmark)

    Jochum, Elizabeth

    This paper considers the use of tele-operated robots in live performance. Robots and performance have long been linked, from the working androids and automata staged in popular exhibitions during the nineteenth century to the robots featured at Cybernetic Serendipity (1968) and the World Expo... discourse shapes how we perceive and use technology and also points to the ways in which emerging technologies "refashion our experience of space, time and human being filter through our art works, dreams and fantasies." This paper considers a survey of robot dramaturgies to demonstrate how performance both shapes and reinforces popular awareness and misconceptions of robots. Flyvende Grise's The Future (2013), Amit Drori's Savanna (2010), Global Creatures' King Kong (2013) and Louis Philip Demers' Blind Robot (2013) each utilize tele-operated robots across a wide range of human and animal morphologies...

  6. Face feature processor on mobile service robot

    Science.gov (United States)

    Ahn, Ho Seok; Park, Myoung Soo; Na, Jin Hee; Choi, Jin Young

    2005-12-01

    In recent years, many mobile service robots have been developed. These robots are different from industrial robots: service robots are confronted with unexpected changes in the human environment, so many capabilities are needed in a mobile service robot, for example the capability to recognize people's faces and voices, the capability to understand people's conversation, and the capability to express the robot's thinking, etc. This research considered face detection, face tracking and face recognition from continuous camera images. The face detection module used the CBCH algorithm from Intel's OpenCV library. The face tracking module used a fuzzy controller to move the pan-tilt camera smoothly according to the face detection result. PCA-FX, which adds class information to PCA, was used for the face recognition module. These three procedures, called the face feature processor, were implemented on the mobile service robot OMR for verification.

  7. Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

    OpenAIRE

    Alberto Chávez-Aragón; Rizwan Macknojia; Pierre Payeur; Robert Laganière

    2013-01-01

    This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method to integrate raw streams from depth sensors in the task of 3D profiling and reconstruction and a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended for rapidly providing a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is position...

  8. Low Noise Camera for Suborbital Science Applications

    Science.gov (United States)

    Hyde, David; Robertson, Bryan; Holloway, Todd

    2015-01-01

    Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment as they are difficult to ruggedize and repackage into instruments. Also, COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost-prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operation temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.

  9. Performances of Observability Indices for Industrial Robot Calibration

    Science.gov (United States)

    2016-12-01

    ... the most appropriate index for all the studied robots. Moreover, real-world experiments were conducted using a 6-DOF serial robot (Fanuc LR Mate 200ic) ...

  10. One solution to recognition of artistic pictures for guide robots by using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Lukić Luka

    2009-01-01

    Full Text Available This paper presents a solution for the efficient, robust and cheap recognition of artistic pictures on the walls of museums and exhibition halls; the solution shows a satisfactory measure of universality, so it can also be applied in areas such as trade, process industry and quality control. It can be used in a wide range of applications where objects must be classified on the basis of their visual properties into a large number of existing classes. We propose a method of selectively grouping pattern vectors as training sets for classifiers (artificial neural networks in this case), providing a smaller number of hidden layers in the networks, achieving more precise performance and significantly expanding the number of classes to be classified. The selection approach is used in the classification itself as well: neural networks are fed with input pattern vectors chosen from subsets determined by additional coefficients.

  11. A Vision-Based Emergency Response System with a Paramedic Mobile Robot

    Science.gov (United States)

    Jeong, Il-Woong; Choi, Jin; Cho, Kyusung; Seo, Yong-Ho; Yang, Hyun Seung

    Detecting emergency situations is very important to a surveillance system for people who live alone, such as the elderly. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and detecting their actions from image sequences acquired by a single surveillance camera. In order to recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. Then an MHI (Motion History Image) for a tracked person is constructed from the silhouette information of the region blobs and the model actions. An emergency situation is finally detected by applying this information to a neural network. When an emergency is detected, the mobile robot can help to diagnose the status of the person in the situation. To send the mobile robot to the proper position, we implement a mobile robot navigation algorithm based on the distance between the person and the mobile robot. We validate our system by reporting the emergency detection rate and demonstrating the emergency response using the mobile robot.
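
    The Motion History Image at the heart of this detector has a simple update rule: silhouette pixels are stamped with the current time, and stamps older than a history window are cleared. The sketch below implements that standard recurrence in NumPy on synthetic silhouettes; it is only the MHI building block, not the paper's full tracking and neural-network pipeline.

        import numpy as np

        def update_mhi(mhi, silhouette, timestamp, duration):
            """One MHI update: stamp current silhouette pixels, clear pixels
            older than (timestamp - duration)."""
            mhi[silhouette > 0] = timestamp
            mhi[mhi < timestamp - duration] = 0.0
            return mhi

        # A blob moving right over 5 frames leaves a fading trail in the MHI.
        mhi = np.zeros((6, 12), dtype=float)
        for t in range(1, 6):
            sil = np.zeros((6, 12), dtype=np.uint8)
            sil[2:4, 2 * t:2 * t + 2] = 1      # synthetic silhouette at time t
            mhi = update_mhi(mhi, sil, float(t), duration=3.0)
        print(mhi[2].astype(int))  # newer positions carry larger timestamps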

  12. The Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, J.; Thiessen, D.; Pourangi, A.; Kobzeff, P.; Litwin, T.; Scherr, L.; Elliott, S.; Dingizian, A.; Maimone, M.

    2012-09-01

    NASA's Mars Science Laboratory (MSL) Rover is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover cameras described in Maki et al. (J. Geophys. Res. 108(E12): 8071, 2003). Images returned from the engineering cameras will be used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The Navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The Hazard Avoidance Cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a 1024×1024 pixel detector and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer "A" and the other set is connected to rover computer "B". The Navcams and Front Hazcams each provide similar views from either computer. The Rear Hazcams provide different views from the two computers due to the different mounting locations of the "A" and "B" Rear Hazcams. This paper provides a brief description of the engineering camera properties, the locations of the cameras on the vehicle, and camera usage for surface operations.

  13. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network-addressable scientific-grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with a read noise of less than 14 electrons at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.

  14. Robot and robot system

    Science.gov (United States)

    Behar, Alberto E. (Inventor); Marzwell, Neville I. (Inventor); Wall, Jonathan N. (Inventor); Poole, Michael D. (Inventor)

    2011-01-01

    A robot and robot system that are capable of functioning in a zero-gravity environment are provided. The robot can include a body having a longitudinal axis and having a control unit and a power source. The robot can include a first leg pair including a first leg and a second leg. Each leg of the first leg pair can be pivotally attached to the body and constrained to pivot in a first leg pair plane that is substantially perpendicular to the longitudinal axis of the body.

  15. A seismic-network mission proposal as an example for modular robotic lunar exploration missions

    Science.gov (United States)

    Lange, C.; Witte, L.; Rosta, R.; Sohl, F.; Heffels, A.; Knapmeyer, M.

    2017-05-01

    In this paper it is intended to discuss an approach to reduce design costs for subsequent missions by introducing modularity, commonality and multi-mission capability and thereby reuse of mission individual investments into the design of lunar exploration infrastructural systems. The presented approach has been developed within the German Helmholtz-Alliance on Robotic Exploration of Extreme Environments (ROBEX), a research alliance bringing together deep-sea and space research to jointly develop technologies and investigate problems for the exploration of highly inaccessible terrain - be it in the deep sea and polar regions or on the Moon and other planets. Although overall costs are much smaller for deep sea missions as compared to lunar missions, a lot can be learned from modularity approaches in deep sea research infrastructure design, which allows a high operational flexibility in the planning phase of a mission as well as during its implementation. The research presented here is based on a review of existing modular solutions in Earth orbiting satellites as well as science and exploration systems. This is followed by an investigation of lunar exploration scenarios from which we derive requirements for a multi-mission modular architecture. After analyzing possible options, an approach using a bus modular architecture for dedicated subsystems is presented. The approach is based on exchangeable modules, e.g. incorporating instruments, which are added to the baseline system platform according to the demands of the specific scenario. It will be described in more detail, including problems that arise, e.g., in the power or thermal domains. Finally, technological building blocks to put the architecture into practical use will be described in more detail.

  16. Closed-loop neuro-robotic experiments to test computational properties of neuronal networks.

    Science.gov (United States)

    Tessadori, Jacopo; Chiappalone, Michela

    2015-03-02

    Information coding in the Central Nervous System (CNS) remains largely unexplored. There is mounting evidence that, even at a very low level, the representation of a given stimulus might be dependent on context and history. If this is actually the case, bi-directional interactions between the brain (or, if need be, a reduced model of it) and the sensory-motor system can shed light on how encoding and decoding of information are performed. Here an experimental system is introduced and described in which the activity of a neuronal element (i.e., a network of neurons extracted from embryonic mammalian hippocampi) is given context and used to control the movement of an artificial agent, while environmental information is fed back to the culture as a sequence of electrical stimuli. This architecture allows a quick selection of diverse encoding, decoding, and learning algorithms to test different hypotheses on the computational properties of neuronal networks.

  17. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain a neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can easily be used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, using the XABSL programming platform, the experiments showed the cube-stacking task was easily programmed by human demonstration.
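
    The skin-filtering step in this record, a Gaussian mixture model over skin-coloured pixels used to suppress non-skin background, is easy to sketch with scikit-learn. The component count, colour space, and log-likelihood threshold below are arbitrary illustrative choices, not the paper's settings.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_skin_model(skin_pixels, n_components=3):
            """Fit a GMM to skin-coloured training pixels (N x 3 array)."""
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type='full', random_state=0)
            return gmm.fit(skin_pixels)

        def skin_mask(image, gmm, log_lik_thresh=-15.0):
            """Keep pixels whose log-likelihood under the skin model exceeds
            a threshold; everything else is treated as non-skin."""
            h, w, c = image.shape
            scores = gmm.score_samples(image.reshape(-1, c).astype(float))
            return (scores > log_lik_thresh).reshape(h, w)

        # Toy data: 'skin' pixels clustered around one colour.
        rng = np.random.default_rng(0)
        train = rng.normal([180.0, 120.0, 100.0], 10.0, size=(500, 3))
        model = fit_skin_model(train)
        img = np.tile(np.array([180.0, 120.0, 100.0]), (4, 4, 1))
        img[0, 0] = [20.0, 200.0, 30.0]       # one clearly non-skin pixel
        print(skin_mask(img, model))          # True everywhere except (0, 0)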

  18. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced in 1994, when the first AESOP (voice-controlled camera holder) prototype robot, used clinically in 1993, was approved by the US FDA and marketed as the first surgical robot ever. Since then many robot prototypes like the EndoAssist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany) have been developed to add to the functions of the robot and try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  19. Innovation in Robotic Surgery: The Indian Scenario

    Directory of Open Access Journals (Sweden)

    Suresh V Deshpande

    2015-01-01

    Full Text Available Robotics is a science. In scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - in laparoscopy to control the telescope-camera unit electromechanically, and then with a computer interface using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), which is the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  20. A Motion Planning System for Mobile Robots

    Directory of Open Access Journals (Sweden)

    TUNCER, A.

    2012-02-01

    Full Text Available In this paper, a motion planning system for a mobile robot is proposed. Path planning tries to find a feasible path for mobile robots to move from a starting node to a target node in an environment with obstacles. A genetic algorithm is used to generate an optimal path by taking advantage of its strong optimization ability. Mobile robot, obstacle and target localization is realized by means of a camera and image processing. A graphical user interface (GUI) is designed for the motion planning system that allows the user to interact with the robot system and to observe the robot environment. All the software components of the system are written in MATLAB, which makes it possible to use accessories beyond those predefined in the robot firmware, to avoid the confusion of the C++ libraries of the robot's proprietary software, to control the robot in detail, and to avoid frequently re-compiling programs in real-time dynamic operations.
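
    A genetic algorithm for grid path planning, as used in this record, comes down to a chromosome encoding of the path plus selection, crossover and mutation. The sketch below uses a deliberately simple encoding (one cell per grid column), an invented obstacle wall, and a collision-penalty fitness; the paper's encoding and operators will differ.

        import random

        random.seed(2)
        GRID_W, GRID_H = 10, 10
        OBSTACLES = {(4, y) for y in range(2, 9)}   # wall at x=4, gap below y=2
        START, GOAL = (0, 5), (9, 5)

        def random_path():
            # Chromosome: one y per column; x advances one column per gene.
            middle = [random.randrange(GRID_H) for _ in range(GRID_W - 2)]
            return [START[1]] + middle + [GOAL[1]]

        def fitness(path):  # lower is better
            cost = 0.0
            for x in range(GRID_W - 1):
                cost += 1 + abs(path[x + 1] - path[x])   # Manhattan step length
                if (x, path[x]) in OBSTACLES:
                    cost += 100                          # collision penalty
            return cost

        def evolve(pop_size=60, generations=120, mut=0.2):
            pop = [random_path() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                parents = pop[:pop_size // 2]            # elitist selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, GRID_W - 1)
                    child = a[:cut] + b[cut:]            # one-point crossover
                    if random.random() < mut:            # mutate one gene
                        child[random.randrange(1, GRID_W - 1)] = random.randrange(GRID_H)
                    children.append(child)
                pop = parents + children
            return min(pop, key=fitness)

        best = evolve()
        print(best, fitness(best))   # a short path dodging the wall at x == 4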

  1. [Laparoscopic colorectal surgery - SILS, robots, and NOTES.

    NARCIS (Netherlands)

    D'Hoore, André; Wolthuis, Albert M.; Mizrahi, Hagar; Parker, Mike; Bemelman, Willem A.; Wara, Pål

    2011-01-01

    Single-incision laparoscopic surgery resection of the colon is feasible, but so far evidence of benefit compared to the standard laparoscopic technique is lacking. In addition to a robot-controlled camera, there is only one robot system on the market capable of performing laparoscopic surgery. The da Vinci

  2. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Full Text Available Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Derivative) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to follow the master robot. For the master robot, a Bluetooth module receives the user's command from an Android application, which is processed by the master robot's controller and used to move the robot. The follower robot receives the image from the Kinect sensor mounted on it and recognizes the master robot. The follower robot identifies the x, y positions by employing the camera and the depth by using the Kinect depth sensor. By identifying the x, y, and z locations of the master robot, the follower robot finds the angle and distance between the master and follower robot, which is given as the error term of a PID controller. Using this, the follower robot follows the master robot. A PID controller is based on feedback and tries to minimize the error. Experiments are conducted on two indigenously developed robots; one depicting a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
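
    The control law in this record, one PID loop on the follow distance and one on the bearing to the master, can be sketched as a per-cycle computation of the follower's linear and angular velocity. The gains, standoff distance, and pose representation below are illustrative assumptions, not the paper's tuned values.

        import math

        class PID:
            """Minimal PID controller with Euler integration of the error."""
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.i, self.prev = 0.0, None

            def step(self, error, dt):
                self.i += error * dt
                d = 0.0 if self.prev is None else (error - self.prev) / dt
                self.prev = error
                return self.kp * error + self.ki * self.i + self.kd * d

        def follow_step(follower_pose, master_xy, dist_pid, ang_pid,
                        target_dist=1.0, dt=0.05):
            """One cycle: from the follower pose (x, y, heading) and the master
            position seen by the Kinect, compute (v, omega) commands."""
            fx, fy, th = follower_pose
            dx, dy = master_xy[0] - fx, master_xy[1] - fy
            dist = math.hypot(dx, dy)
            ang_err = math.atan2(dy, dx) - th
            ang_err = math.atan2(math.sin(ang_err), math.cos(ang_err))  # wrap
            v = dist_pid.step(dist - target_dist, dt)   # keep standoff distance
            w = ang_pid.step(ang_err, dt)               # steer toward master
            return v, w

        v, w = follow_step((0, 0, 0), (3, 1), PID(0.8, 0.0, 0.1), PID(2.0, 0.0, 0.2))
        print(round(v, 3), round(w, 3))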

  3. Optical designs for the Mars '03 rover cameras

    Science.gov (United States)

    Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.

    2001-12-01

    In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.

  4. Robotic assisted andrological surgery

    Science.gov (United States)

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous visual loupe or magnification technique. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution with the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and in a number of other microsurgical fields, such as ophthalmology, hand surgery, and plastic and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, better surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia), and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  5. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views do not overlap. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time-delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM), with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time-delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time-delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.
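
    The TD-PGM itself is too involved for a short example, but the core notion of recovering the time delay between activity signals in two non-overlapping views can be illustrated with plain cross-correlation; this is a simplified stand-in for the paper's incremental learning, with hypothetical activity time series as input.

    ```python
    import numpy as np

    def estimate_time_delay(activity_a, activity_b, max_lag):
        """Find the lag (in frames) at which activity_b best follows activity_a.

        activity_a, activity_b: 1-D arrays of equal length, e.g. per-frame
        foreground energy in a region of each camera view.
        """
        a = (activity_a - activity_a.mean()) / (activity_a.std() + 1e-9)
        b = (activity_b - activity_b.mean()) / (activity_b.std() + 1e-9)
        best_lag, best_corr = 0, -np.inf
        for lag in range(max_lag + 1):
            n = len(a) - lag
            corr = float(np.dot(a[:n], b[lag:lag + n])) / n
            if corr > best_corr:
                best_lag, best_corr = lag, corr
        return best_lag, best_corr
    ```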

  6. RX130 Robot Calibration

    Science.gov (United States)

    Fugal, Mario

    2012-10-01

    In order to create precision magnets for an experiment at Oak Ridge National Laboratory, a new reverse engineering method has been proposed that uses the magnetic scalar potential to solve for the currents necessary to produce the desired field. To make the magnet, it is proposed to use a copper-coated G10 form, upon which a drill, mounted on a robotic arm, will carve wires. The accuracy required in the manufacturing of the wires exceeds nominal robot capabilities. However, due to their rigidity as well as their precision servo motors and harmonic gear drives, there are robots capable of meeting this requirement with proper calibration. Improving the accuracy of an RX130 to within 35 microns (the accuracy required for the wires) is the goal of this project. Using feedback from a displacement sensor or camera, together with inverse kinematics, it is possible to achieve this accuracy.

  7. NFC - Narrow Field Camera

    Science.gov (United States)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have been introducing a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present, 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions for NFC system capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution, and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly showed the capabilities of the proposed system for low-mass meteor registration and for calculations based on NFC data leading to a significant refinement of the orbital elements of low-mass meteoroids.

  8. CNN-Based Vision Model for Obstacle Avoidance of Mobile Robot

    Directory of Open Access Journals (Sweden)

    Liu Canglong

    2017-01-01

    Exploration in a known or unknown environment is an essential application for a mobile robot. In this paper, we study the mobile robot obstacle avoidance problem in an indoor environment. We present an end-to-end learning model based on a Convolutional Neural Network (CNN), which takes the raw image obtained from the camera as its only input. The method converts raw pixels directly to steering commands: turn left, turn right, and go straight. Training data was collected by a remotely controlled mobile robot that a human operator steered through a structured environment without colliding with obstacles. Our neural network was trained under the Caffe framework, and specific instructions are executed through the Robot Operating System (ROS). We analyze the effect of datasets from different environments, with some marks, on the training process, and several real-time detection experiments were designed. The final test results show that accuracy can be improved by increasing the marks in a structured environment and that our model achieves high accuracy on obstacle avoidance for mobile robots.
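
    The pixels-to-commands mapping could look roughly like the following PyTorch sketch with a three-way classification head; the original network was built in Caffe, and the layer sizes and the 64x64 input assumed here are illustrative, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class SteeringCNN(nn.Module):
        """Tiny end-to-end classifier: camera frame -> {left, straight, right}."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            )
            self.classifier = nn.Linear(64 * 6 * 6, 3)

        def forward(self, x):                  # x: (batch, 3, 64, 64)
            return self.classifier(self.features(x).flatten(1))

    logits = SteeringCNN()(torch.rand(1, 3, 64, 64))
    command = ["turn left", "go straight", "turn right"][logits.argmax().item()]
    ```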

  9. Sensing and data classification for a robotic meteorite search

    Science.gov (United States)

    Pedersen, Liam; Apostolopoulos, Dimi; Whittaker, William L.; Benedix, Gretchen; Rousch, Ted

    1999-01-01

    Upcoming missions to Mars and the Moon call for highly autonomous robots with the capability to perform intra-site exploration, reason about their scientific finds, and perform comprehensive on-board analysis of collected data. An ideal case for testing such technologies and robot capabilities is the robotic search for Antarctic meteorites. The successful identification and classification of meteorites depends on sensing modalities and intelligent evaluation of acquired data. Data from color imagery and spectroscopic measurements are used to identify terrestrial rocks and distinguish them from meteorites. However, because of the large number of rocks and the high cost and delay of using some of the sensors, it is necessary to eliminate as many meteorite candidates as possible using cheap long-range sensors, such as color cameras. More resource-consuming sensors are held in reserve for the more promising samples only. Bayes networks are used as the formalism for incrementally combining data from multiple sources in a statistically rigorous manner. Furthermore, they can be used to infer the utility of further sensor readings given currently known data. This information, along with cost estimates, is necessary for the sensing system to rationally schedule further sensor readings and deployments. This paper addresses issues associated with sensor selection and the implementation of an architecture for automatic identification of rocks and meteorites from a mobile robot.
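
    The incremental evidence combination can be sketched as a chain of Bayes updates; the single color feature and all likelihood values below are made up for illustration, whereas the actual system uses full Bayes networks over several sensors.

    ```python
    def bayes_update(prior_meteorite, likelihoods, observation):
        """Fold one sensor reading into P(meteorite | evidence so far).

        likelihoods maps each observation value to the pair
        (P(obs | meteorite), P(obs | terrestrial rock)).
        """
        p_obs_m, p_obs_r = likelihoods[observation]
        numerator = p_obs_m * prior_meteorite
        evidence = numerator + p_obs_r * (1.0 - prior_meteorite)
        return numerator / evidence

    # Hypothetical likelihoods for a coarse color cue from a long-range camera.
    color_likelihoods = {"dark": (0.7, 0.2), "light": (0.3, 0.8)}

    p = 0.01                                        # prior: meteorites are rare
    p = bayes_update(p, color_likelihoods, "dark")  # cheap cue raises p to ~0.034
    # If p clears a threshold, schedule the costly spectrometer reading next.
    ```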

  10. Robot Aesthetics

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance Jonathan

    This paper considers art-based research practice in robotics through a discussion of our course and relevant research projects in autonomous art. The undergraduate course integrates basic concepts of computer science, robotic art, live performance and aesthetic theory. Through practice ... in robotics research (such as aesthetics, culture and perception), we believe robot aesthetics is an important area for research in contemporary aesthetics.

  11. Filigree Robotics

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Clausen Nørgaard, Esben

    2016-01-01

    Filigree Robotics experiments with the combination of traditional ceramic craft and robotic fabrication in order to generate a new narrative of fine three-dimensional ceramic ornament for architecture.

  12. Determination of Monthly Aerosol Types in Manila Observatory and Notre Dame of Marbel University from Aerosol Robotic Network (AERONET) measurements.

    Science.gov (United States)

    Ong, H. J. J.; Lagrosas, N.; Uy, S. N.; Gacal, G. F. B.; Dorado, S.; Tobias, V., Jr.; Holben, B. N.

    2016-12-01

    This study aims to identify aerosol types in Manila Observatory (MO) and Notre Dame of Marbel University (NDMU) using Aerosol Robotic Network (AERONET) Level 2.0 inversion data and five-dimensional specified clustering with Mahalanobis classification. The parameters used are the 440-870 nm extinction Angström exponent (EAE), the 440 nm single scattering albedo (SSA), the 440-870 nm absorption Angström exponent (AAE), and the 440 nm real and imaginary refractive indices. Specified clustering makes use of AERONET data from 7 sites to define 7 aerosol classes: mineral dust (MD), polluted dust (PD), urban industrial (UI), urban industrial developing (UID), biomass burning white smoke (BBW), biomass burning dark smoke (BBD), and marine aerosols. This is similar to the classes used by Russell et al., 2014. A data point is classified into a class based on the closest 5-dimensional Mahalanobis distance (Russell et al., 2014 & Hamill et al., 2016). This method is applied to all 173 MO data points from January 2009 to June 2015 and to all 24 NDMU data points from December 2009 to July 2015 to look at monthly and seasonal variations of aerosol types. The MO and NDMU aerosols are predominantly PD (~77%) and PD & UID (~75%), respectively (Figs. 1a-b); PD is predominant in the months of February to May in MO and February to March in NDMU. PD results from less strict emission and environmental regulations (Catrall 2005). Average SSA values in MO are comparable to the mean SSA for PD (~0.89). This can be attributed to the presence of highly absorbing aerosol types, e.g., carbon, which is a product of transportation emissions. The second most dominant aerosol type in MO is UID (~15%); in NDMU it is BBW (~25%). In Manila, the high levels of PD and UID (fine particles) generally come from vehicular combustion (Oanh, et al 2006). The detection of BBW in MO from April to May can be attributed to the fires which are common in these dry months. In NDMU, the BBW source is biomass burning (smoldering).
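
    The classification step reduces to a nearest-class rule under the Mahalanobis distance; the sketch below assumes the per-class means and covariances have already been estimated from the seven reference clusters, and the names and shapes are illustrative.

    ```python
    import numpy as np

    def mahalanobis_classify(x, class_stats):
        """Assign a 5-D observation (EAE, SSA, AAE, real and imaginary
        refractive index) to the aerosol class with the smallest
        Mahalanobis distance.

        class_stats: dict mapping class name -> (mean vector, covariance matrix).
        """
        best_name, best_d2 = None, np.inf
        for name, (mean, cov) in class_stats.items():
            diff = x - mean
            d2 = float(diff @ np.linalg.inv(cov) @ diff)
            if d2 < best_d2:
                best_name, best_d2 = name, d2
        return best_name, best_d2
    ```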

  13. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.

  14. Epidemic Synchronization in Robotic Swarms

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Nielsen, Jens Frederik Dalsgaard; Ngo, Trung Dung

    2009-01-01

    Clock synchronization in swarms of networked mobile robots is studied in a probabilistic, epidemic framework. In this setting, communication and synchronization are considered to be a randomized process, taking place at unplanned instants of geographical rendezvous between robots. In combination with ...

  15. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  16. Performance evaluation and clinical applications of 3D plenoptic cameras

    Science.gov (United States)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90mm, increasing to 1.37mm for tissue across the calibrated FOV. The ideal accuracy was 1.14mm. The camera showed submillimeter error during a simulated surgical task.

  17. Robotic surgery: urologic implications.

    Science.gov (United States)

    Moran, Michael E

    2003-11-01

    Current medical robots have nothing in common with the anthropomorphic robots in science fiction classics. They are, in fact, manipulators working on a master-slave principle. Robots can be defined as "automatically controlled multitask manipulators, which are freely programmable in three or more spaces." The success of robots in surgery is based on their precision, lack of fatigue, and speed of action. This review describes the theory, advantages, disadvantages, and clinical utilization of mechanical and robotic arm systems to replace the second assistant and provide camera direction and stability during laparoscopic surgery. The Robotrac system (Aesculap, Burlingame, CA), the First Assistant (Leonard Medical Inc, Huntingdon Valley, PA), AESOP (Computer Motion, Goleta, CA), ZEUS (Computer Motion), and the da Vinci (Intuitive Surgical, Mountain View, CA) system are reviewed, as are simple mechanical-assist systems such as Omnitract (Minnesota Scientific, St. Paul, MN), Iron Intern (Automated Medical Products Corp., New York, NY), the Bookwalter retraction system (Codman, Somerville, NJ), the Surgassistant (Solos Endoscopy, Irvine, CA), the Trocar Sleeve Stabilizer (Richard Wolf Medical Instruments Corp., Rosemont, IL), and the Endoholder (Codman, Somerville, NJ).

  18. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern to act as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally break down the barriers to the use of calibration in practice.
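
    The automated corner-finding step amounts to differencing the LEDs-on and LEDs-off frames and taking blob centroids; in the sketch below, splitting the difference mask by image quadrant is a simplifying assumption (a real implementation would label connected components instead).

    ```python
    import numpy as np

    def find_led_corners(img_on, img_off, threshold=50):
        """Locate the four extreme-corner LEDs from two grayscale frames,
        replacing the manual corner clicks of the calibration routine."""
        diff = img_on.astype(np.int16) - img_off.astype(np.int16)
        ys, xs = np.nonzero(diff > threshold)
        h, w = img_on.shape
        corners = []
        for top in (True, False):
            for left in (True, False):
                sel = ((ys < h // 2) == top) & ((xs < w // 2) == left)
                if sel.any():
                    corners.append((float(ys[sel].mean()), float(xs[sel].mean())))
        return corners   # up to four (row, col) centroids
    ```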

  19. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  20. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  1. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  2. Traffic camera system development

    Science.gov (United States)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection, and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of lighting. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization are implemented to control the cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed, and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  3. A bi-hemispheric neuronal network model of the cerebellum with spontaneous climbing fiber firing produces asymmetrical motor learning during robot control

    Directory of Open Access Journals (Sweden)

    Ruben Dario Pinzon Morales

    2014-11-01

    To acquire and maintain precise movement control over a lifespan, changes in the physical and physiological characteristics of muscles must be compensated for adaptively. The cerebellum plays a crucial role in such adaptation. Changes in muscle characteristics are not always symmetrical. For example, it is unlikely that the muscles that bend and straighten a joint will change to the same degree. Thus, different (i.e., asymmetrical) adaptation is required for bending and straightening motions. To date, little is known about the role of the cerebellum in asymmetrical adaptation. Here, we investigate the cerebellar mechanisms required for asymmetrical adaptation using a bi-hemispheric cerebellar neuronal network model (biCNN). The bi-hemispheric structure is inspired by the observation that lesioning one hemisphere reduces motor performance asymmetrically. The biCNN model was constructed to run in real time and used to control an unstable two-wheeled balancing robot. The load of the robot and its environment were modified to create asymmetrical perturbations. Plasticity at parallel fiber-Purkinje cell synapses in the biCNN model was driven by the error signal in the climbing fiber (cf) input. This cf input was configured to increase and decrease its firing rate from its spontaneous rate (approximately 1 Hz) with sensory errors in the preferred and non-preferred direction of each hemisphere, as demonstrated in the monkey cerebellum. Our results showed that asymmetrical conditions were successfully handled by the biCNN model, in contrast to a single-hemisphere model or a classical non-adaptive proportional and derivative controller. Further, the spontaneous activity of the cf, while relatively small, was critical for balancing the contribution of each cerebellar hemisphere to the overall motor command sent to the robot. Eliminating the spontaneous activity compromised the asymmetrical learning capabilities of the biCNN model. Thus, we conclude that a bi
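
    The cf-gated plasticity described above condenses into a simple update rule; the learning rate, array sizes, and function name below are illustrative assumptions, not the biCNN implementation.

    ```python
    import numpy as np

    def update_pf_pc_weights(w, pf_activity, cf_rate, spont_rate=1.0, lr=1e-3):
        """Climbing-fiber-gated plasticity at parallel fiber-Purkinje synapses:
        cf firing above the ~1 Hz spontaneous rate depresses currently active
        pf synapses (LTD); firing below it potentiates them (LTP)."""
        return w - lr * (cf_rate - spont_rate) * pf_activity

    w = np.ones(100)                     # weights onto one Purkinje cell
    pf = np.random.rand(100)             # instantaneous parallel fiber activity
    w = update_pf_pc_weights(w, pf, cf_rate=3.0)   # error burst -> depression
    w = update_pf_pc_weights(w, pf, cf_rate=0.2)   # below baseline -> potentiation
    ```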

  4. Robotic environments

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic architectural environments to be implemented and tested in the last decade in virtual and physical prototypes. These prototypes are incorporating sensing-actuating

  5. Hazardous materials emergency response mobile robot

    Science.gov (United States)

    Stone, Henry W.; Lloyd, James W.; Alahuzos, George A.

    1995-08-01

    A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm is presented. This robot inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.

  7. Predicting workload profiles of brain-robot interface and electromyographic neurofeedback with cortical resting-state networks: personal trait or task-specific challenge?

    Science.gov (United States)

    Fels, Meike; Bauer, Robert; Gharabaghi, Alireza

    2015-08-01

    Objective. Novel rehabilitation strategies apply robot-assisted exercises and neurofeedback tasks to facilitate intensive motor training. We aimed to disentangle task-specific and subject-related contributions to the perceived workload of these interventions and the related cortical activation patterns. Approach. We assessed the perceived workload with the NASA Task Load Index in twenty-one subjects who were exposed to two different feedback tasks in a cross-over design: (i) brain-robot interface (BRI) with haptic/proprioceptive feedback of sensorimotor oscillations related to motor imagery, and (ii) control of neuromuscular activity with feedback of the electromyography (EMG) of the same hand. We also used electroencephalography to examine the cortical activation patterns beforehand in resting state and during the training session of each task. Main results. The workload profile of BRI feedback differed from EMG feedback and was particularly characterized by the experience of frustration. The frustration level was highly correlated across tasks, suggesting subject-related relevance of this workload component. Those subjects who were specifically challenged by the respective tasks could be detected by an interhemispheric alpha-band network in resting state before the training and by their sensorimotor theta-band activation pattern during the exercise. Significance. Neurophysiological profiles in resting state and during the exercise may provide task-independent workload markers for monitoring and matching participants’ ability and task difficulty of neurofeedback interventions.

  8. Robots' Safety

    OpenAIRE

    Pirttilahti, Juho

    2016-01-01

    Human-robot collaboration is considered one of the answers to the flexibility needs of increasingly customized manufacturing. Its purpose is to fit together the best qualities of both humans and robots to reduce the cost and time of manufacturing. One of the key questions in this area is safety. The purpose of this thesis was to define the required safety functionality of cartesian, delta, and articulated robots based on current machine needs. Using the future robotic concepts investigat...

  9. Cuspidal Robots

    OpenAIRE

    Wenger, Philippe

    2016-01-01

    This chapter is dedicated to the so-called cuspidal robots, i.e., those robots that can move from one inverse geometric solution to another without meeting a singular configuration. This feature was discovered quite recently and has since fascinated many researchers. After a brief history of cuspidal robots, the chapter provides their main features: explanation of the non-singular change of posture, uniqueness domains, regions of feasible paths, identification and cla...

  10. An Address Event Representation-Based Processing System for a Biped Robot

    Directory of Open Access Journals (Sweden)

    Uziel Jaramillo-Avila

    2016-02-01

    In recent years, several important advances have been made in the fields of both biologically inspired sensory processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems in order to mimic neural architectures and to process information in the manner of the brain. However, few robotic applications exist to illustrate them. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNN) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques. Most of them are fairly simple to implement without very sophisticated components; however, it can still serve as a good test in more elaborate circumstances. In addition, the proposed locomotion system is able to coordinately control the six DOFs of a biped robot in switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.

  11. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  12. Research of smart real-time robot navigation system

    Science.gov (United States)

    Rahmani, Budi; Harjoko, A.; Priyambodo, T. K.; Aprilianto, H.

    2016-02-01

    This paper describes how a humanoid robot measures its distance to an orange ball on a green floor. We trained the robot camera (CMUcam5) to detect and track the color block of the orange ball. The color block is also used to estimate the distance from the camera to the ball, by comparing the size of the block when the ball is at the far end of the field of view and when it is near the camera. Then, using the Pythagorean theorem, we estimate the distance between the humanoid robot and the ball. The distance is used to estimate how many steps the robot must take to approach the ball and to perform other tasks, such as kicking the ball. The results show that our method can be used as a smart navigation system with a camera as the only sensor for perceiving information about the environment.
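
    The size-to-distance step is a pinhole similar-triangles estimate followed by the Pythagorean correction for the camera's height above the floor; the reference calibration, camera height, and step length below are illustrative values, not the paper's.

    ```python
    import math

    def ball_ground_distance(block_width_px, ref_width_px, ref_range_m,
                             camera_height_m):
        """Estimate robot-to-ball ground distance from the tracked color block.

        Under a pinhole model the block's apparent width scales inversely with
        range, so one reference measurement calibrates camera-to-ball range;
        Pythagoras then removes the camera's height above the floor.
        """
        camera_range = ref_range_m * ref_width_px / block_width_px
        if camera_range <= camera_height_m:
            return 0.0                    # ball effectively at the robot's feet
        return math.sqrt(camera_range**2 - camera_height_m**2)

    distance = ball_ground_distance(40, 80, 1.0, 0.45)   # ~1.95 m
    steps = round(distance / 0.05)                       # assume 5 cm per step
    ```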

  13. Digital Low Frequency Radio Camera

    Science.gov (United States)

    Fullekrug, M.; Mezentsev, A.; Soula, S.; van der Velde, O.; Poupeney, J.; Sudre, C.; Gaffet, S.; Pincon, J.

    2012-04-01

    This contribution reports the design, realization, and operation of a novel digital low frequency radio camera towards an exploration of the Earth's electromagnetic environment, with particular emphasis on lightning discharges and subsequent atmospheric effects such as transient luminous events. The design of the digital low frequency radio camera is based on the idea of radio interferometry with a network of radio receivers which are separated by spatial baselines comparable to the wavelength of the observed radio waves, i.e., ~1-100 km, which corresponds to a frequency range from ~3-300 kHz. The key parameter for the realization of the radio interferometer is the frequency-dependent slowness of the radio waves within the Earth's atmosphere with respect to the speed of light in vacuum. This slowness is measured with the radio interferometer by using well-documented radio transmitters. The digital low frequency radio camera can be operated in different modes. In the imaging mode, still photographs show maps of the low frequency radio sky. In the video mode, movies show the dynamics of the low frequency radio sky. The exposure time of the photographs, the frame rate of the video, and the radio frequency of interest can be adjusted by the observer. Alternatively, the digital radio camera can be used in the monitoring mode, where a particular area of the sky is observed continuously. The first application of the digital low frequency radio camera is to characterize the electromagnetic energy emanating from sprite-producing lightning discharges, but it is expected that it can also be used to identify and investigate numerous other radio sources in the Earth's electromagnetic environment.

  14. Intelligent networked teleoperation control

    CERN Document Server

    Li, Zhijun; Su, Chun-Yi

    2015-01-01

    This book describes a unified framework for networked teleoperation systems involving multiple research fields: networked control systems for linear and nonlinear forms, bilateral teleoperation, trilateral teleoperation, multilateral teleoperation and cooperative teleoperation. It closely examines networked control as a field at the intersection of systems & control and robotics and presents a number of experimental case studies on testbeds for robotic systems, including networked haptic devices, robotic network systems and sensor network systems. The concepts and results outlined are easy to understand, even for readers fairly new to the subject. As such, the book offers a valuable reference work for researchers and engineers in the fields of systems & control and robotics.

  15. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  17. Posture Recognition with a Top-view Camera

    NARCIS (Netherlands)

    Hu, N.; Englebienne, G.; Kröse, B.; Sugano, S.; Kaneko, M.

    2013-01-01

    We describe a system that recognizes human postures with heavy self-occlusion. In particular, we address posture recognition in a robot assisted-living scenario, where the environment is equipped with a top-view camera for monitoring human activities. This setup is very useful because top-view

  18. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  19. Robotics research in Chile

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2016-12-01

    The development of research in robotics in a developing country is a challenging task. Factors such as low research funds, low trust from local companies and the government, and a small number of qualified researchers hinder the development of strong local research groups. In this article, as a case study, we present our robotics research group at the Advanced Mining Technology Center of the Universidad de Chile and the way in which we have addressed these challenges. In 2008, we decided to focus our research efforts on mining, which is the main industry in Chile. We observed that this industry has needs in terms of safety, productivity, operational continuity, and environmental care. All of these needs could be addressed with robotics and automation technology. In a first stage, we concentrated on building capabilities in field robotics, starting with the automation of a commercial vehicle. An important outcome of this project was earning the confidence of the local mining industry. Then, in a second stage starting in 2012, we began working with the local mining industry on technological projects. In this article, we describe three of the technological projects that we have developed with industry support: (i) an autonomous vehicle for mining environments without global positioning system coverage; (ii) the inspection of the irrigation flow in heap leach piles using unmanned aerial vehicles and thermal cameras; and (iii) an enhanced vision system for vehicle teleoperation in adverse climatic conditions.

  20. Robot Mechanisms

    CERN Document Server

    Lenarcic, Jadran; Stanišić, Michael M

    2013-01-01

    This book provides a comprehensive introduction to the area of robot mechanisms, primarily considering industrial manipulators and humanoid arms. The book is intended for both teaching and self-study. Emphasis is given to the fundamentals of kinematic analysis and the design of robot mechanisms. The coverage of topics is untypical. The focus is on robot kinematics. The book creates a balance between theoretical and practical aspects in the development and application of robot mechanisms, and includes the latest achievements and trends in robot science and technology.

  1. The VISTA infrared camera

    Science.gov (United States)

    Dalton, G. B.; Caldwell, M.; Ward, A. K.; Whalley, M. S.; Woodhouse, G.; Edeson, R. L.; Clark, P.; Beard, S. M.; Gallie, A. M.; Todd, S. P.; Strachan, J. M. D.; Bezawada, N. N.; Sutherland, W. J.; Emerson, J. P.

    2006-06-01

    We describe the integration and test phase of the construction of the VISTA Infrared Camera, a 64 Megapixel, 1.65 degree field of view 0.9-2.4 micron camera which will soon be operating at the cassegrain focus of the 4m VISTA telescope. The camera incorporates sixteen IR detectors and six CCD detectors which are used to provide autoguiding and wavefront sensing information to the VISTA telescope control system.

  2. Streak camera meeting summary

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bliss, David E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  3. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  4. FPGA for Robotic Applications: from Android/Humanoid Robots to Artificial Men

    Directory of Open Access Journals (Sweden)

    Tole Sutikno

    2011-12-01

    Research on home robots has been increasing enormously. There has always been a continuous research effort on the problems of anthropomorphic robots, which are now called humanoid robots. Currently, robotics has evolved to the point that its different branches have reached a remarkable level of maturity, with neural networks and fuzzy logic as the main artificial intelligence techniques for intelligent control in robotics. Despite all this progress, while aiming at accomplishing work-tasks originally charged only to humans, robotic science has perhaps quite naturally turned into the attempt to create artificial men. It is true that artificial men, or android humanoid robots, certainly open very broad prospects. Such a "robot" may be viewed as a personal helper, and it will be called a home robot, or personal robot. This is the main reason why the two special sections are issued sequentially in TELKOMNIKA.

  5. Complete calibration of the autonomous hand-eye robot JANUS

    OpenAIRE

    Garcia, C.

    1998-01-01

    The current prototype of the JANUS robot consists of two arms and a neck with two mounted color cameras. Calibrating JANUS means to find the geometric relationships between each of these components and the reference coordinate system of the robot. The calibration procedure that we present in this report is completely vision-based: the relationships between each camera and the neck and between each arm and the neck are determined using visual measurements, which leads to a low-cost and automat...

  6. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    Robots are simultaneously real machines and technical images that challenge our sense of self. In the Open Forum I discuss the movie Ex Machina by director Alex Garland. The robot Ava, played by Alicia Vikander, is a rare portrait of what could be interpreted as a feminist robot (and there are spoilers ahead for any readers unfamiliar with this movie). Though she apparently is created as the dream of the 'perfect woman', sexy and beautiful, she also develops an urge to free herself from the slavery of her creator, Nathan Bateman. She is a robot created along perfect dimensions as a Vitruvian robot, but she is also a creature which could be interpreted as a human being. However, the point I want to raise is not whether Ava's reaction to robot slavery is justified or not, but how her portrait raises questions about the blurred lines between reality and fiction when we discuss our robotic...

  7. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work.

  8. Grasping objects from a user’s hand using time-of-flight camera data

    CSIR Research Space (South Africa)

    Govender, N

    2010-11-01

    This paper presents a system which allows a robotic arm manipulator to grasp a moving object from a user's hand and release the object when indicated to do so. Data from a Time-of-Flight camera is fused with an ordinary laboratory camera to create a robust...

  9. Rugged Walking Robot

    Science.gov (United States)

    Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.

    1990-01-01

    Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.

  10. VLSI-distributed architectures for smart cameras

    Science.gov (United States)

    Wolf, Wayne H.

    2001-03-01

    Smart cameras use video/image processing algorithms to capture images as objects, not as pixels. This paper describes architectures for smart cameras that take advantage of VLSI to improve the capabilities and performance of smart camera systems. Advances in VLSI technology aid in the development of smart cameras in two ways. First, VLSI allows us to integrate large amounts of processing power and memory along with image sensors. CMOS sensors are rapidly improving in performance, allowing us to integrate sensors, logic, and memory on the same chip. As we become able to build chips with hundreds of millions of transistors, we will be able to include powerful multiprocessors on the same chip as the image sensors. We call these image sensor/multiprocessor systems image processors. Second, VLSI allows us to put a large number of these powerful sensor/processor systems in a single scene. VLSI factories will produce large quantities of these image processors, making it cost-effective to use a large number of them in a single location. Image processors will be networked into distributed cameras that use many sensors as well as the full computational resources of all the available multiprocessors. Multiple cameras make a number of image recognition tasks easier: we can select the best view of an object, eliminate occlusions, and use 3D information to improve the accuracy of object recognition. This paper outlines approaches to distributed camera design: architectures for image processors and distributed cameras, algorithms to run on distributed smart cameras, and applications of VLSI distributed camera systems.

  11. Embedded mobile farm robot for identification of diseased plants

    Science.gov (United States)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used on farms for the identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot mechanical assembly, a camera, and infrared sensors has been used. The Mini2440 microcontroller board has been used, on which an embedded Linux OS (operating system) is implemented.
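
    The paper does not spell out its classifier, so the following is only a plausible stand-in: flag canopy pixels whose green channel no longer dominates red, a common symptom of yellow or brown lesions. The masks and thresholds are illustrative assumptions.

    ```python
    import numpy as np

    def diseased_leaf_fraction(rgb, green_margin=20):
        """Return the fraction of plant pixels that look discolored.

        rgb: HxWx3 uint8 image assumed to show mostly plant canopy.
        """
        r = rgb[..., 0].astype(np.int16)
        g = rgb[..., 1].astype(np.int16)
        b = rgb[..., 2].astype(np.int16)
        plant = g > b                          # crude canopy mask over soil
        discolored = plant & ((g - r) < green_margin)
        return float(discolored.sum()) / max(int(plant.sum()), 1)
    ```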

  12. EVA Robotic Assistant Project: Platform Attitude Prediction

    Science.gov (United States)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways
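
    A single-axis version of the gyro/accelerometer fusion described above can be written as a small Kalman filter: the gyro rate drives the prediction and the accelerometer supplies an absolute but noisy tilt measurement. The noise constants are illustrative assumptions, not values from the report.

    ```python
    import math

    class TiltKalman1D:
        """One-axis attitude estimator fusing a rate gyro with accelerometer tilt."""
        def __init__(self, q=0.01, r=0.1):
            self.angle = 0.0    # estimated pitch (rad)
            self.p = 1.0        # estimate variance
            self.q, self.r = q, r

        def step(self, gyro_rate, accel_x, accel_z, dt):
            # Predict: integrate the gyro rate; uncertainty grows.
            self.angle += gyro_rate * dt
            self.p += self.q * dt
            # Update: accelerometer gives an absolute (but noisy) tilt.
            measured = math.atan2(accel_x, accel_z)
            k = self.p / (self.p + self.r)
            self.angle += k * (measured - self.angle)
            self.p *= 1.0 - k
            return self.angle   # feed to the pan-tilt unit to counter base motion
    ```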

  13. Improved Tracking of Targets by Cameras on a Mars Rover

    Science.gov (United States)

    Kim, Won; Ansar, Adnan; Steele, Robert

    2007-01-01

    A paper describes a method devised to increase the robustness and accuracy of tracking of targets by means of three stereoscopic pairs of video cameras on a Mars-rover-type exploratory robotic vehicle. Two of the camera pairs are mounted on a mast that can be adjusted in pan and tilt; the third camera pair is mounted on the main vehicle body. Elements of the method include a mast calibration, a camera-pointing algorithm, and a purely geometric technique for handing off tracking between different camera pairs at critical distances as the rover approaches a target of interest. The mast calibration is an extension of camera calibration in which the camera images of calibration targets at known positions are collected at various pan and tilt angles. In the camera-pointing algorithm, pan and tilt angles are computed by a closed-form, non-iterative solution of the inverse kinematics of the mast combined with mathematical models of the cameras. The purely geometric camera-handoff technique involves the use of stereoscopic views of a target of interest in conjunction with the mast calibration.
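
    As a rough illustration of the closed-form pointing step, pan and tilt for a target expressed in the rover frame can be computed geometrically. The sketch below ignores the mast kinematic offsets and camera models that the actual calibration folds in, so it is a simplified assumption, not the paper's solution.

```python
import numpy as np

def point_mast(target, mast_origin):
    """Closed-form pan/tilt that aims the mast boresight at a 3-D target
    given in the rover frame (simplified: no camera or mast offsets)."""
    v = np.asarray(target, float) - np.asarray(mast_origin, float)
    pan = np.arctan2(v[1], v[0])                    # rotation about the vertical axis
    tilt = np.arctan2(v[2], np.hypot(v[0], v[1]))   # elevation above the horizontal
    return pan, tilt
```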

  14. Robot welding process control

    Science.gov (United States)

    Romine, Peter L.

    1991-01-01

    This final report documents the development and installation of software and hardware for robotic welding process control. Primary emphasis is on serial communications between the CYRO 750 robotic welder, a Heurikon minicomputer running Hunter & Ready VRTX, and an IBM PC/AT, for off-line programming and closed-loop welding control. The requirements for completing the implementation of the Rocketdyne weld tracking control are discussed. The procedure for downloading programs from the Intergraph, over the network, is discussed. Conclusions are drawn from the results of this task, and recommendations are made for efficient implementation of communications, weld process control development, and advanced process control procedures using the Heurikon.

  15. Using Educational Robotics to Motivate Complete AI Solutions

    OpenAIRE

    Greenwald, Lloyd; Artz, Donovan; Mehta, Yogi; Shirmohammadi, Babak

    2006-01-01

    Robotics is a remarkable domain that may be successfully employed in the classroom both to motivate students to tackle hard AI topics and to provide students experience applying AI representations and algorithms to real-world problems. This article uses two example robotics problems to illustrate these themes. We show how the robot obstacle-detection problem can motivate learning neural networks and Bayesian networks. We also show how the robot-localization problem can motivate learning how t...
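
    As one concrete flavour of the Bayesian-network exercise mentioned above, a single Bayes update for "obstacle ahead" given one noisy sonar ping might look like the sketch below; the sensor-model probabilities are illustrative, not taken from the article.

```python
def obstacle_posterior(prior, hit, p_hit_given_obs=0.9, p_hit_given_free=0.2):
    """One Bayes update for P(obstacle) from a single noisy sonar ping:
    the kind of exercise the robot obstacle-detection problem motivates."""
    num = (p_hit_given_obs if hit else 1 - p_hit_given_obs) * prior
    den = num + (p_hit_given_free if hit else 1 - p_hit_given_free) * (1 - prior)
    return num / den
```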

  16. Measurement and control system for agricultural robot

    Science.gov (United States)

    Sun, Tong; Zhang, Fangming; Ying, Yibin

    2006-10-01

    Automation of agricultural equipment in the near term appears both economically viable and technically feasible. This paper describes a measurement and control system for an agricultural robot. It consists of a computer, a pair of NIR cameras, one inclinometer, one potentiometer, and two encoders. The inclinometer, potentiometer, and encoders measure the obliquity of the camera, the turning angle of the front wheel, and the velocity of the rear wheels, respectively. These sensor data are filtered before being sent to the PC. Tests show that the system can measure the front-wheel turning angle and the rear-wheel velocity accurately whether the robot is stationary or in motion.
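
    The abstract states that the sensor data are filtered before being sent to the PC but not how; a minimal sketch, assuming a simple moving-average window, follows.

```python
from collections import deque

class MovingAverage:
    """Window filter of the kind that might be applied to encoder and
    steering-angle readings before they are forwarded to the PC."""
    def __init__(self, size=5):
        self.buf = deque(maxlen=size)

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)
```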

  17. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method determines the appropriate integration time for the range camera so that depth information is measured precisely.
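
    The paper's adaptive method couples servoing with online calibration, which the abstract does not detail; the sketch below shows only the classic underlying image-based visual servoing law, with each feature's depth Z read directly from the ToF range image. All names are assumptions.

```python
import numpy as np

def ibvs_velocity(points, depths, targets, lam=0.5):
    """Classic IBVS law v = -lambda * pinv(L) @ e, where the interaction
    matrix L uses the ToF-measured depth Z of each image feature."""
    blocks, e = [], []
    for (x, y), Z, (xd, yd) in zip(points, depths, targets):
        blocks.append(np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
            [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x]]))
        e.extend([x - xd, y - yd])
    L = np.vstack(blocks)
    return -lam * np.linalg.pinv(L) @ np.array(e)   # 6-DOF camera velocity
```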

  18. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    Science.gov (United States)

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
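
    A minimal sketch of one iterative formulation, assuming a generic nonlinear least-squares solver over the twelve pose parameters of X and Z; practical methods use better parameterisations and initial guesses than the identity poses used here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def pose(p):
    """6-vector (rotation vector, translation) -> 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

def solve_ax_zb(As, Bs):
    """Minimise sum_i || A_i X - Z B_i || over the parameters of X and Z."""
    def residual(params):
        X, Z = pose(params[:6]), pose(params[6:])
        return np.concatenate([(A @ X - Z @ B)[:3, :].ravel()
                               for A, B in zip(As, Bs)])
    sol = least_squares(residual, np.zeros(12))
    return pose(sol.x[:6]), pose(sol.x[6:])
```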

  19. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up...

  20. Robotic surgery.

    Science.gov (United States)

    Diana, M; Marescaux, J

    2015-01-01

    Proficiency in minimally invasive surgery requires intensive and continuous training, as it is technically challenging owing to unnatural visual and haptic perceptions. Robotic and computer sciences are producing innovations to augment the surgeon's skills to achieve accuracy and high precision during complex surgery. This article reviews the current use of robotically assisted surgery, focusing on technology as well as main applications in digestive surgery, and future perspectives. The PubMed database was interrogated to retrieve evidence-based data on surgical applications. Internal and external consulting with key opinion leaders, renowned robotics laboratories and robotic platform manufacturers was used to produce state-of-the-art business intelligence around robotically assisted surgery. Selected digestive procedures (oesophagectomy, gastric bypass, pancreatic and liver resections, rectal resection for cancer) might benefit from robotic assistance, although the current level of evidence is insufficient to support widespread adoption. The surgical robotic market is growing, and a variety of projects have recently been launched at both academic and corporate levels to develop lightweight, miniaturized surgical robotic prototypes. The magnified view, and improved ergonomics and dexterity offered by robotic platforms, might facilitate the uptake of minimally invasive procedures. Image guidance to complement robotically assisted procedures, through the concepts of augmented reality, could well represent a major revolution to increase safety and deal with difficulties associated with the new minimally invasive approaches. © 2015 BJS Society Ltd. Published by John Wiley & Sons Ltd.

  1. A Dataset for Camera Independent Color Constancy.

    Science.gov (United States)

    Aytekin, Caglar; Nikkanen, Jarno; Gabbouj, Moncef

    2017-10-17

    In this paper, we provide a novel dataset designed for camera independent color constancy research. Camera independence corresponds to the robustness of an algorithm's performance when run on images of the same scene taken by different cameras. Accordingly, the images in our database correspond to several lab and field scenes each of which is captured by three different cameras with minimal registration errors. The lab scenes are also captured under five different illuminations. The spectral responses of cameras and the spectral power distributions of the lab light sources are also provided, as they may prove beneficial for training future algorithms to achieve color constancy. For a fair evaluation of future methods, we provide guidelines for supervised methods with indicated training, validation and testing partitions. Accordingly, we evaluate two recently proposed convolutional neural network based color constancy algorithms as baselines for future research. As a side contribution, this dataset also includes images taken by a mobile camera with color shading corrected and uncorrected results. This allows research on the effect of color shading as well.
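
    For orientation, the classic gray-world baseline below illustrates what a (much simpler) color-constancy algorithm does; it is not one of the CNN baselines evaluated in the paper.

```python
import numpy as np

def gray_world(image):
    """Gray-world white balance: scale the channels so their means match."""
    means = image.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(image * gain, 0, 255).astype(image.dtype)
```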

  2. Autonomous caregiver following robotic wheelchair

    Science.gov (United States)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

    In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. We therefore have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves alongside a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image-processing techniques, converted into voltage levels through a MAX232 level converter, and sent serially to the microcontroller unit; the ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, while in manual mode the keypad is used to operate the wheelchair. The microcontroller unit is programmed in C, and according to this code the robot connected to it is controlled. The robot's several motors are activated by motor drivers, which are switches that turn the motors on and off as directed by the microcontroller unit.
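
    A hypothetical sketch of the described mode-selection logic; the keypad mapping and stop distance below are invented for illustration, not taken from the system.

```python
def wheelchair_command(mode, distance_cm, key=None, stop_cm=30):
    """Mode-select logic sketched from the description: automatic mode
    stops on a near ultrasonic echo, manual mode obeys the keypad."""
    if mode == "auto":
        return "STOP" if distance_cm < stop_cm else "FORWARD"
    return {"2": "FORWARD", "8": "REVERSE",
            "4": "LEFT", "6": "RIGHT"}.get(key, "STOP")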

  3. Robotic building(s)

    NARCIS (Netherlands)

    Bier, H.H.

    2014-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic building to be prototypically implemented in the last decade. In this context, robotic building implies both physically built robotic environments and robotically

  4. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    OpenAIRE

    Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli

    2009-01-01

    This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object using the collaborative hybrid stereo vision system, and thereby steering the robot team to a desired position relative to that object while maintaining a desired robot formation. Experimen...
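
    The abstract does not give the triangulation equations; assuming the standard midpoint method for two rays from different cameras, a sketch is:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two camera rays
    (centre c, unit direction d): a standard two-view triangulation."""
    c1, d1, c2, d2 = map(np.asarray, (c1, d1, c2, d2))
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```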

  5. A step toward robot teleoperation with eyes and hand

    OpenAIRE

    Despinoy, Fabien; Vitrani, Marie-Aude; Herman, Benoît; 4th International Workshop on Human-Friendly Robotics (HFR 2011)

    2012-01-01

    Despite the continuous improvement of master interfaces, distant robot teleoperation remains a challenging task. In many applications (e.g. spaceships, underwater or flying drones, robotic arms that operate in hazardous conditions in factories), the operator has only an indirect vision of the remote environment, provided by a video camera usually mounted on the robot end-effector itself, and displayed on a 2D monitor. Whereas any controller is capable of planning a path to follow a prescribed...

  6. Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

    Directory of Open Access Journals (Sweden)

    Alberto Chávez-Aragón

    2013-01-01

    This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method to integrate raw streams from depth sensors in the task of 3D profiling and reconstruction and a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended for rapidly providing a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is positioned inside the defined scanning area, a collection of reference parts on the bodywork are automatically recognized from a mosaic of color images collected by a network of Kinect sensors distributed around the vehicle and a global frame of reference is set up. Sections of the depth information on one side of the vehicle are then collected, aligned, and merged into a global RGB-D model. Finally, a 3D triangular mesh modelling the body panels of the vehicle is automatically built. The approach has applications in the intelligent transportation industry, automated vehicle inspection, quality control, automatic car wash systems, automotive production lines, and scan alignment and interpretation.
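
    The core merging step, transforming each sensor's depth points into the global frame established by the extrinsic calibration and stacking them, might look like the following sketch; the matrix convention (sensor-to-global 4x4 transforms) is an assumption.

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """Map each sensor's (N, 3) point cloud into the global frame with its
    4x4 extrinsic matrix from the network calibration, then stack them."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)
```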

  7. Intraoperative navigation in robotically assisted compartmental surgery of uterine cancer by visualisation of embryologically derived lymphatic networks with indocyanine-green (ICG).

    Science.gov (United States)

    Kimmig, Rainer; Aktas, Bahriye; Buderath, Paul; Rusch, Peter; Heubner, Martin

    2016-04-01

    To evaluate the feasibility of intraoperative visualization of embryologically defined organ compartments and their drainage by ICG in uterine cancer. A total of 2.5 mg of ICG was injected into the cervix or corpus of uterine cancer patients immediately prior to surgery. Green fluorescence was detected intermittently during robotically assisted laparoscopic surgery (Firefly System®, Intuitive Surgical Inc.). A total of 36 patients with uterine cancer without macroscopically suspicious nodes were evaluated with respect to their compartmental lymphatic network, collecting lymphatic vessels, and the connection to the postponed lymph basins. The Müllerian (sub)compartment and the transport of lymph fluid along the lymphatic collectors and connecting vessels to the postponed lymph basins could invariably be visualized in all patients. The cervix drained along the ligamentous and caudal parts of the vascular mesometria, whereas midcorporal and fundal drainage occurred along the upper part of the vascular mesometria and along the mesonephric pathway following the ovarian vessels. Visualization of the lymphatic network and the downstream flow of lymphatic fluid to the postponed lymph basins by ICG is feasible; it can be used to navigate along compartment borders for education, intraoperative orientation, and quality control. It appears to confirm the compartmental order of pelvic organ systems and the postponed lymph basins. J. Surg. Oncol. 2016;113:554-559. © 2016 Wiley Periodicals, Inc.

  8. Robotic Rock Classification

    Science.gov (United States)

    Hebert, Martial

    1999-01-01

    This report describes a three-month research program undertaken jointly by the Robotics Institute at Carnegie Mellon University and Ames Research Center as part of the Ames Joint Research Initiative (JRI). The work was conducted at the Ames Research Center by Mr. Liam Pedersen, a graduate student in the CMU Ph.D. program in Robotics, under the supervision of Dr. Ted Roush at the Space Science Division of the Ames Research Center from May 15, 1999 to August 15, 1999. Dr. Martial Hebert is Mr. Pedersen's research adviser at CMU and is Principal Investigator of this Grant. The goal of this project is to investigate and implement methods suitable for a robotic rover to autonomously identify rocks and minerals in its vicinity, and to statistically characterize the local geological environment. Although the primary sensors for these tasks are a reflection spectrometer and a color camera, the goal is to create a framework under which data from multiple sensors, and multiple readings on the same object, can be combined in a principled manner. Furthermore, it is envisioned that knowledge of the local area, either a priori or gathered by the robot, will be used to improve classification accuracy. The key results obtained during this project are: the continuation of the development of a rock classifier; the development of theoretical statistical methods; the development of methods for evaluating and selecting sensors; and experimentation with data-mining techniques on the Ames spectral library. The results of this work are being applied at CMU, in particular in the context of the Winter 99 Antarctica expedition, in which the classification techniques will be used on the Nomad robot. Conversely, the software developed based on those techniques will continue to be made available to NASA Ames, and the data collected from the Nomad experiments will also be made available.
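
    One principled way to combine a spectrometer reading and a camera reading on the same rock, in the spirit of the framework described, is naive-Bayes fusion; the sketch below is an assumption for illustration, not the project's classifier.

```python
import numpy as np

def fuse_readings(prior, likelihoods):
    """Naive-Bayes fusion: multiply per-sensor class likelihoods
    (spectrometer, colour camera, ...) into a posterior over rock types."""
    post = np.asarray(prior, float).copy()
    for lk in likelihoods:       # one likelihood vector per sensor reading
        post *= lk
    return post / post.sum()
```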

  9. Robust real-time robot-world calibration for robotized transcranial magnetic stimulation.

    Science.gov (United States)

    Richter, Lars; Ernst, Floris; Schlaefer, Alexander; Schweikard, Achim

    2011-12-01

    For robotized transcranial magnetic stimulation (TMS), the magnetic coil is placed on the patient's head by a robot. As the robotized TMS system requires tracking of head movements, the robot and the tracking camera need to be calibrated. However, for robotized TMS in a clinical setting, such calibration is required frequently. Mounting and unmounting a marker on the end effector and moving the robot into different poses is impractical. Moreover, if either system is moved during treatment, recalibration is required. To overcome this limitation, we propose to directly track a marker at link three of the articulated arm. Using forward kinematics and a constant marker transform to link three, the calibration can be performed instantly. Our experimental results indicate an accuracy similar to standard hand-eye calibration approaches. It also outperforms classical hand-held navigated TMS systems. This robust online calibration greatly enhances the system's user-friendliness and safety. Copyright © 2011 John Wiley & Sons, Ltd.
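
    The essence of the proposed online calibration is a transform chain through the tracked link-three marker; the sketch below states that chain with assumed matrix names (each argument a 4x4 homogeneous transform).

```python
import numpy as np

def robot_to_camera(base_T_link3, link3_T_marker, camera_T_marker):
    """Instant robot-camera calibration from a single tracked frame:
    forward kinematics to link 3, the constant marker offset, and the
    inverted camera observation of the marker close the chain."""
    return base_T_link3 @ link3_T_marker @ np.linalg.inv(camera_T_marker)
```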

  10. Communicating Cooperative Robots with Bluetooth

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Son, L.T.; Madsen, Ole Brun

    2001-01-01

    A generic architecture for system of cooperating communicating mobile robots is presented. An overall structure is defined from a modularity viewpoint, where a number of generic modules are identified; low level communication interface, network layer services such as initial and adaptive network...

  11. Delta Robot

    OpenAIRE

    Herder, J.L.; van der Wijk, V.

    2010-01-01

    The invention relates to a delta robot comprising a stationary base (2) and a movable platform (3) that is connected to the base with three chains of links (4,5,6), and comprising a balancing system incorporating at least one pantograph (7) for balancing the robot's center of mass, wherein the at least one pantograph has a first free extremity (10) at which it supports a countermass (13) which is arranged to balance the center of mass of the robot.

  12. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  13. Mars Observer Camera

    OpenAIRE

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; J. Veverka (Massachusetts Institute of Technology, Cambridge, U.S.A.); Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the “push broom” technique; that is, they do not take “frames” but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope f...

  14. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between... researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  15. Autonomous Mobile Robot That Can Read

    Directory of Open Access Journals (Sweden)

    Létourneau Dominic

    2004-01-01

    The ability to read would surely contribute to increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extracting the characters, and recognizing them as symbols, characters, and words. Using an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and use the limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and font in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a 233 MHz Pentium and a Sony EVI-D30 pan-tilt-zoom camera.
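
    A modern stand-in for the described pipeline, colour-segment the message and then decode it, could be sketched as follows; pytesseract replaces the robot's own onboard OCR, and the HSV band for the message colour is illustrative.

```python
import cv2
import pytesseract

def read_message(frame_bgr, lo=(100, 80, 80), hi=(130, 255, 255)):
    """Isolate text of a known colour (HSV band), then run off-the-shelf
    OCR on the binarised mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    binary = cv2.bitwise_not(mask)   # dark glyphs on a light background
    return pytesseract.image_to_string(binary).strip()
```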

  16. Robot umanoidi o robot umani?

    Directory of Open Access Journals (Sweden)

    Domenico Parisi

    2009-01-01

    What is a robot? What is a robot for? A robot is something physical, built by us, that resembles a living organism and behaves like a living organism. Living organisms include animals and plants, but robots reproduce animals rather than plants, although there are attempts to build robot-plants. Behaving like an animal means having sensory organs with which to receive information from the environment, and motor organs that allow the robot to move around the environment or to move some part of its body, for example the head or an arm, in a manner that is not pre-programmed but autonomous, that is, responding to the stimuli that arrive moment by moment at the robot's sensors. This answers the question "What is a robot?".

  17. Optical Flow based Robot Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Kahlouche Souhila

    2008-11-01

    In this paper we develop an algorithm for the visual obstacle avoidance of an autonomous mobile robot. The input of the algorithm is an image sequence grabbed by a camera embedded on the B21r robot in motion. The optical flow information is then extracted from the image sequence in order to be used in the navigation algorithm. The optical flow provides very important information about the robot environment, such as the disposition of obstacles, the robot's heading, the time to collision, and depth. The strategy consists in balancing the amount of left-side and right-side flow to avoid obstacles; this technique allows robot navigation without any collision with obstacles. The robustness of the algorithm is shown through examples.
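
    A sketch of the balance strategy, with OpenCV's dense Farnebäck estimator standing in for the paper's optical-flow computation: compare the mean flow magnitude in the two image halves and steer away from the faster-moving (hence nearer) side.

```python
import cv2
import numpy as np

def steer_from_flow(prev_gray, gray):
    """Return a steering value in [-1, 1]: positive means turn right
    (more flow, i.e. closer obstacles, on the left half)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    return (left - right) / (left + right + 1e-6)
```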

  19. Collision-free motion coordination of heterogeneous robots

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Nak Yong [Chosun University, Gwangju (Korea, Republic of); Seo, Dong Jin [RedOne Technologies, Gwangju (Korea, Republic of); Simmons, Reid G. [Carnegie Mellon University, Pennsylvania (United States)

    2008-11-15

    This paper proposes a method to coordinate the motion of multiple heterogeneous robots on a network. The proposed method uses prioritization and avoidance. Priority is assigned to each robot; a robot with lower priority avoids robots of higher priority. To avoid collisions with other robots, an elastic force and a potential-field force are used. The method can also be applied separately to the motion planning of one part of a robot and that of its other parts, which is useful for mobile manipulators and other highly redundant robots. The method is tested in simulation, and it results in smooth and adaptive coordination in an environment with multiple heterogeneous robots.
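
    The abstract names elastic and potential-field forces without giving equations; a standard repulsive potential-field term, offered here as an assumed stand-in, pushes a robot away from each higher-priority robot inside an influence radius.

```python
import numpy as np

def avoidance_force(pos, higher_priority, influence=2.0, gain=1.0):
    """Repulsive potential-field force on a lower-priority robot at `pos`,
    summed over the positions of all higher-priority robots in range."""
    force = np.zeros(2)
    for other in higher_priority:
        diff = pos - other
        dist = np.linalg.norm(diff)
        if 1e-9 < dist < influence:
            # Gradient of the classic (1/d - 1/d0)^2 repulsive potential
            force += gain * (1 / dist - 1 / influence) * diff / dist**3
    return force
```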

  20. Android/GUI Controlled Bluetooth Spy Robot (SPY-BOT)

    OpenAIRE

    Nirmal K Thomas; Radhika S; ShyamMohanan.C; Sreelakshmi Rajan; Deepak S Koovackal; Suvarna A

    2017-01-01

    The robotic vehicle can be controlled by a PC or an Android device. The GUI used to control the robot is developed in MATLAB, and the Android app is developed using MIT App Inventor 2. The robotic vehicle is equipped with an IP webcam for the GUI interface and another wireless camera with night-vision capability for remote monitoring/spying purposes. This system can be used in sensitive areas that humans cannot enter directly. The commands provided by the Android application/GUI inter...