WorldWideScience

Sample records for networked robotic cameras

  1. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions and can be reprogrammed according to user needs. A wireless network can be used for remote monitoring: we build a robot whose movement can be monitored against a blueprint of the site, so that the operator can track the path the robot chooses. This data is sent over a wireless network. For visual feedback the robot uses a high-resolution camera, making it easier for the operator to control the robot and observe the surrounding circumstances.

  2. Convolutional Neural Network-Based Embarrassing Situation Detection under Camera for Social Robot in Smart Homes.

    Science.gov (United States)

    Yang, Guanci; Yang, Jing; Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo

    2018-05-12

    Recent research has shown that the ubiquitous use of cameras and voice monitoring equipment in a home environment can raise privacy concerns and affect human mental health. This can be a major obstacle to the deployment of smart home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. By focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN), to merge the areas that have similar features and determine the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes, and developed training and validation datasets, containing 2580 and 360 images, respectively. Meanwhile, we designed three types of experiments with four types of test datasets composed of 960 sample images. Thirdly, we analyzed how a different number of training iterations affects our prediction estimation, and then we explored the relationship between recognition accuracy and learning rates. Our results show that our proposed privacy detection system can recognize designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, we compared the results among RODA-FY, Inception V3, and YOLO, which indicate that our proposed RODA-FY outperforms the other comparison models in recognition accuracy.
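
    A minimal sketch of the bounding-box merging step in the spirit of the B-RPN idea above, assuming a plain IoU overlap criterion rather than the paper's learned region features; the function names and threshold are illustrative assumptions, not the authors' algorithm.

```python
# Greedy merge of overlapping detection boxes (x1, y1, x2, y2). This is NOT
# the paper's B-RPN: it fuses on IoU alone, ignoring feature similarity.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def merge_boxes(boxes, thresh=0.5):
    """Fuse boxes whose IoU exceeds thresh into their bounding hull."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) > thresh:
                merged[i] = (min(box[0], m[0]), min(box[1], m[1]),
                             max(box[2], m[2]), max(box[3], m[3]))
                break
        else:
            merged.append(box)
    return merged

print(merge_boxes([(10, 10, 50, 50), (15, 12, 55, 48), (200, 200, 240, 240)]))
```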

  3. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  4. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    Science.gov (United States)

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. PMID:23271604

  5. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    Science.gov (United States)

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal.

  6. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  7. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot intended to take over human daily work in factories or under extreme environments is required to work in ordinary human work environments, so a humanoid robot with size, shape and functions similar to a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with a human being, can walk on two legs under autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R{sup 3}) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  8. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
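
    To make the penalty above concrete, here is a small sketch of a capped l2,1-norm, assuming the common definition where each row contributes its l2 norm up to a cap theta, so outlier rows cannot dominate the objective; the paper's exact formulation may differ.

```python
# capped_l21(Z) = sum_i min(||z_i||_2, theta): rows with large norm are capped,
# which is what suppresses outliers in representative selection.
import numpy as np

def capped_l21(Z, theta):
    row_norms = np.linalg.norm(Z, axis=1)        # l2 norm of each row
    return np.minimum(row_norms, theta).sum()    # cap each row's contribution

Z = np.array([[0.1, 0.2], [3.0, 4.0], [100.0, 0.0]])  # last row is an outlier
print(capped_l21(Z, theta=2.0))  # the outlier contributes only the cap (2.0)
```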

  9. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  10. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  11. Real Time Indoor Robot Localization Using a Stationary Fisheye Camera

    OpenAIRE

    Delibasis, Konstantinos; Plagianakos, Vasilios; Maglogiannis, Ilias

    2013-01-01

    Part 7: Intelligent Signal and Image Processing; International audience; A core problem in robotics is the localization of a mobile robot (determination of the location or pose) in its environment, since the robot’s behavior depends on its position. In this work, we propose the use of a stationary fisheye camera for real time robot localization in indoor environments. We employ an image formation model for the fisheye camera, which is used for accelerating the segmentation of the robot’s top ...

  12. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    A working group (WG) study was conducted aiming at realizing human-type robots. Six working groups were organized to study items of technical development and final technical targets: platform and remote attendance control in the base field, and plant maintenance, home service, disaster/construction, and entertainment in the application field. In the platform WG, a robot of human-like form is planned which walks on two legs and works with two arms; discussed were a height of 160 cm, a weight of 110 kg, a built-in LAN, actuator specifications, a modular structure, intelligent drivers, etc. In the remote attendance control WG, remote control using working functions, stabilized movement, stabilized control, and a network is made possible. Studies were made on a remote-control cockpit based on an open architecture, extensible in function and reconfigurable, on problems in the development of a standard language, etc. 77 refs., 82 figs., 21 tabs.

  13. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  14. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
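
    The two records above calibrate the camera network by driving a pattern through each camera's field of view. As a point of reference, here is a minimal sketch of the standard per-camera intrinsic step with a chessboard pattern; the paper's joint multi-camera/odometry solution is more involved, and the file names and board size below are assumptions.

```python
# Per-camera intrinsic calibration from frames of a robot-carried chessboard.
import cv2
import numpy as np

board = (9, 6)                                   # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # pattern grid

obj_pts, img_pts, size = [], [], None
for fname in ["view00.png", "view01.png", "view02.png"]:  # hypothetical frames
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms, "\nintrinsics K:\n", K)
```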

  15. Positioning the laparoscopic camera with industrial robot arm

    DEFF Research Database (Denmark)

    Capolei, Marie Claire; Wu, Haiyan; Andersen, Nils Axel

    2017-01-01

    This paper introduces a solution for movement control of the laparoscopic camera employing a teleoperated robotic assistant. The project proposes an autonomous robotic solution based on an industrial manipulator, provided with modular software that is applicable at large scale. The robot arm ... industrial robot arm is designated to accomplish this manipulation task. The software is implemented in ROS in order to facilitate future extensions. The experimental results show a manipulator capable of moving the surgical tool quickly and smoothly around a remote center of motion.

  16. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    Science.gov (United States)

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
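
    As a simplified stand-in for the scheme above: instead of the paper's neural-network Jacobian approximation, this sketch tracks the unknown image Jacobian with a Broyden rank-1 update and sidesteps the singularity issue with a damped pseudo-inverse; dimensions, gains, and the damping constant are illustrative assumptions.

```python
# One-loop visual servo with an online-estimated image Jacobian.
import numpy as np

def broyden_update(J, dq, ds, beta=0.5):
    """Rank-1 correction so J better maps the joint step dq to the image step ds."""
    return J + beta * np.outer(ds - J @ dq, dq) / (dq @ dq)

def servo_step(J, s, s_ref, gain=0.3, damping=1e-3):
    """Damped least-squares inverse of the Jacobian avoids singular estimates."""
    e = s_ref - s                                   # image-space error
    JtJ = J.T @ J + damping * np.eye(J.shape[1])    # regularized normal matrix
    return gain * np.linalg.solve(JtJ, J.T @ e)     # joint velocity command

J = np.eye(2)                                       # crude initial Jacobian guess
s, s_ref = np.array([0.0, 0.0]), np.array([1.0, 2.0])
for _ in range(50):
    dq = servo_step(J, s, s_ref)
    ds = np.array([[0.8, 0.1], [0.0, 1.2]]) @ dq    # "true" (unknown) camera-robot map
    J = broyden_update(J, dq, ds)
    s = s + ds
print("final image error:", s_ref - s)
```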

  17. Performance of Very Small Robotic Fish Equipped with CMOS Camera

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    2015-10-01

    Full Text Available Underwater robots are often used to investigate marine animals. Ideally, such robots should be in the shape of fish so that they can easily go unnoticed by aquatic animals. In addition, lacking a screw propeller, a robotic fish would be less likely to become entangled in algae and other plants. However, although such robots have been developed, their swimming speed is significantly lower than that of real fish. Since a robotic fish would have to follow actual fish to carry out a survey of them, it is necessary to improve the performance of the propulsion system. In the present study, a small robotic fish (SAPPA) was manufactured and its propulsive performance was evaluated. SAPPA was developed to swim in bodies of freshwater such as rivers, and was equipped with a small CMOS camera with a wide-angle lens in order to photograph live fish. The maximum swimming speed of the robot was determined to be 111 mm/s, and its turning radius was 125 mm. Its power consumption was as low as 1.82 W. During trials, SAPPA succeeded in recognizing a goldfish and capturing an image of it using its CMOS camera.

  18. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  19. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in relation to camera orientation. The key feature of the analysis is a strict Lyapunov function that allows the conclusion of asymptotic stability without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments in a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  20. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Agustín Ortega

    2014-07-01

    Full Text Available Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  1. Cooperative robots and sensor networks

    CERN Document Server

    Khelil, Abdelmajid

    2014-01-01

    Mobile robots and Wireless Sensor Networks (WSNs) have enabled great potentials and a large space for ubiquitous and pervasive applications. Robotics and WSNs have mostly been considered as separate research fields and little work has investigated the marriage between these two technologies. However, these two technologies share several features, enable common cyber-physical applications and provide complementary support to each other.
 The primary objective of the book is to provide a reference for cutting-edge studies and research trends pertaining to robotics and sensor networks, and in particular for the coupling between them. The book consists of five chapters. The first chapter presents a cooperation strategy for teams of multiple autonomous vehicles to solve the rendezvous problem. The second chapter is motivated by the need to improve existing solutions that deal with connectivity prediction, and proposes a genetic machine learning approach for link-quality prediction. The third chapter presents an arch...

  2. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2015-01-01

    Full Text Available For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors based on a backpropagation neural network, with a camera for face recognition. A 2.4 GHz transmitter for transmitting video is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system.

  3. Implementation of self-organizing neural networks for visuo-motor control of an industrial robot.

    Science.gov (United States)

    Walter, J A; Schulten, K I

    1993-01-01

    The implementation of two neural network algorithms for visuo-motor control of an industrial robot (Puma 562) is reported. The first algorithm uses a vector quantization technique, the 'neural-gas' network, together with an error correction scheme based on a Widrow-Hoff-type learning rule. The second algorithm employs an extended self-organizing feature map algorithm. Based on visual information provided by two cameras, the robot learns to position its end effector without an external teacher. Within only 3000 training steps, the robot-camera system is capable of reducing the positioning error of the robot's end effector to approximately 0.1% of the linear dimension of the work space. By employing adaptive feedback the robot succeeds in compensating not only slow calibration drifts, but also sudden changes in its geometry. Hardware aspects of the robot-camera system are discussed.

  4. Performance benefits and limitations of a camera network

    Science.gov (United States)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
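
    For a feel of the resolution-segmentation parameter space above, this worked arithmetic compares total pixel counts (number of images times pixels per image) at the design points the abstract names; the values are the abstract's rough orders of magnitude, not measurements.

```python
# Total pixels at each design point in the resolution-segmentation space.
design_points = {
    "mammalian eyes":     (2,     10**7),   # two images, ~10^7 pixels each
    "insect eye":         (10**4, 1),       # ~10^4 facets, ~1 pixel each
    "proposed network":   (10**5, 10**4),   # ~10^5 imagers, ~10^4 pixels each
    "10^9 phone cameras": (10**9, 10**6),   # one-megapixel camera each
}
for name, (n_images, px) in design_points.items():
    print(f"{name:20s} total pixels ~ {n_images * px:.1e}")
```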

  5. Ocean Robotic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Oscar [Rutgers University

    2012-05-23

    We live on an ocean planet which is central to regulating the Earth’s climate and human society. Despite the importance of understanding the processes operating in the ocean, it remains chronically undersampled due to the harsh operating conditions. This is problematic given the limited long term information available about how the ocean is changing. The changes include rising sea level, declining sea ice, ocean acidification, and the decline of mega fauna. While the changes are daunting, oceanography is in the midst of a technical revolution with the expansion of numerical modeling techniques, combined with ocean robotics. Operating together, these systems represent a new generation of ocean observatories. I will review the evolution of these ocean observatories and provide a few case examples of the science that they enable, spanning from the waters offshore New Jersey to the remote waters of the Southern Ocean.

  6. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process of images acquired from different cameras. This work is applied in an environment monitored by cameras. The application is important to modern security systems, in which identifying the targets present in the environment expands the capacity of action by security agents in real time and provides important parameters, like localization, for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and synthetic images with noise.

  7. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  8. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow leads to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  9. Wireless Visual Sensor Network Robots-Based for the Emulation of Collective Behavior

    Directory of Open Access Journals (Sweden)

    Fredy Hernán Martinez Sarmiento

    2012-03-01

    Full Text Available We consider the problem of emulating bacterial quorum sensing on small mobile robots. Robots that reflect the behavior of bacteria are designed as mobile wireless camera nodes, able to form a dynamic wireless sensor network. The emulated behavior corresponds to a simplification of bacterial quorum sensing, where the action of a network node is conditioned by the population density of robots (nodes) in a given area. The population density is read visually using a camera: the robot estimates the population density from the images and acts according to this information. The camera runs custom firmware, reducing the complexity of the node without loss of performance. Route planning and collective behavior of the robots were observed without the use of any other external or local communication, and without the need for a system model, precise state estimation or state feedback.

  10. Avoiding object by robot using neural network

    International Nuclear Information System (INIS)

    Prasetijo, D.W.

    1997-01-01

    A self-controlling robot is necessary in robot applications in which operator control is difficult. Serial methods, such as processing on a von Neumann computer, are difficult to apply to a self-controlling robot. In this research, a neural network system for a robotic control system was developed by expanding the performance of the SCARA. It was shown that a SCARA with the neural network system can avoid blocking objects regardless of the number and density of the blocking objects, as well as the departure and destination points. The robot developed in this study can also control its movement by itself.

  11. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  12. Cooperative robots and sensor networks 2014

    CERN Document Server

    Khelil, Abdelmajid

    2014-01-01

    This book is the second volume on Cooperative Robots and Sensor Networks. The primary objective of this book is to provide an up-to-date reference for cutting-edge studies and research trends related to mobile robots and wireless sensor networks, and in particular for the coupling between them. Indeed, mobile robots and wireless sensor networks have enabled great potentials and a large space for ubiquitous and pervasive applications. Robotics and wireless sensor networks have mostly been considered as separate research fields and little work has investigated the marriage between these two technologies. However, these two technologies share several features, enable common cyber-physical applications and provide complementary support to each other. The book consists of ten chapters, organized into four parts. The first part of the book presents three chapters related to localization of mobile robots using wireless sensor networks. Two chapters present new solutions based on the Extended Kalman Filter and Particle Fi...

  13. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  14. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm
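
    A toy sketch of the idea in the record above: camera positions are fixed, and a standard global-best particle swarm optimizes the orientation angles to maximize the fraction of sample points inside at least one camera's angular field of view. FOV, range, and all PSO constants are assumptions, not the paper's settings.

```python
# PSO over camera orientations for FOV coverage of random sample points.
import numpy as np

rng = np.random.default_rng(0)
cams = rng.uniform(0, 100, size=(5, 2))          # fixed camera positions
pts = rng.uniform(0, 100, size=(300, 2))         # sample points to cover
FOV, RANGE = np.radians(60), 40.0

def coverage(angles):
    covered = np.zeros(len(pts), dtype=bool)
    for cam, a in zip(cams, angles):
        d = pts - cam
        dist = np.linalg.norm(d, axis=1)
        bearing = np.arctan2(d[:, 1], d[:, 0])
        diff = np.abs((bearing - a + np.pi) % (2 * np.pi) - np.pi)
        covered |= (dist < RANGE) & (diff < FOV / 2)
    return covered.mean()

n_particles, dim = 20, len(cams)
x = rng.uniform(-np.pi, np.pi, (n_particles, dim))   # particle = all orientations
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([coverage(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([coverage(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()
print("covered fraction:", coverage(gbest))
```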

  15. Cooperative robots and sensor networks 2015

    CERN Document Server

    Dios, JRamiro

    2015-01-01

    This book compiles some of the latest research in cooperation between robots and sensor networks. Structured in twelve chapters, this book addresses fundamental, theoretical, implementation and experimentation issues. The chapters are organized into four parts namely multi-robots systems, data fusion and localization, security and dependability, and mobility.

  16. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    Science.gov (United States)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows an automatic measurement of a large number of robot poses with high accuracy.

  17. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  18. Depth camera driven mobile robot for human localization and following

    DEFF Research Database (Denmark)

    Skordilis, Nikolaos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2014-01-01

    In this paper the design and the development of a mobile robot able to locate and then follow a human target is described. Both the integration of the required mechatronics components and the development of appropriate software are covered. The main sensor of the developed mobile robot is an RGB-...

  19. Displacement and deformation measurement for large structures by camera network

    Science.gov (United States)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  20. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as well as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  1. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  2. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  3. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
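
    One classic "simple analytic geometry" construction consistent with the two records above: back-project the clicked pixel through the calibrated camera and intersect the ray with a known plane (here the floor, z = 0), which makes the single-view problem well-posed. The paper's exact geometry may differ; K, R, and the camera height below are made-up example values.

```python
# Pixel -> 3D floor point via ray/plane intersection with a calibrated camera.
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics (assumed)
R = np.diag([1.0, -1.0, -1.0])      # camera looking straight down (world -> camera)
C = np.array([0.0, 0.0, 1.5])       # camera center 1.5 m above the floor
t = -R @ C                          # translation in x_cam = R @ x_world + t

def pixel_to_floor(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected ray
    ray_world = R.T @ ray_cam                            # rotate into world frame
    s = -C[2] / ray_world[2]                             # parameter where z = 0
    return C + s * ray_world

print(pixel_to_floor(400, 400))   # 3D floor point under the clicked pixel
```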

  4. Adaptive Synchronization of Robotic Sensor Networks

    OpenAIRE

    Yıldırım, Kasım Sinan; Gürcan, Önder

    2014-01-01

    The main focus of recent time synchronization research is developing power-efficient synchronization methods that meet pre-defined accuracy requirements. However, an aspect that has often been overlooked is the high dynamics of the network topology due to the mobility of the nodes. Employing existing flooding-based and peer-to-peer synchronization methods, will networked robots still be able to adapt themselves and self-adjust their logical clocks under mobile network dynamics? In this paper, ...

  5. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
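
    A minimal sketch of one ingredient of a PTZ control model like the one described above: the pan/tilt offsets that bring an image point to the optical center, from pinhole geometry. Focal length and principal point are assumed values; a full PTZ model, as in the record, also calibrates motor units, zoom, and the spherical panorama mapping.

```python
# Pan/tilt offsets (degrees) to center a pixel target in a PTZ camera view.
import math

f = 1000.0            # focal length in pixels (assumed)
cx, cy = 640, 360     # principal point (assumed)

def pan_tilt_offset(u, v):
    pan = math.degrees(math.atan2(u - cx, f))    # horizontal angle to target
    tilt = math.degrees(math.atan2(v - cy, f))   # vertical angle (sign convention varies)
    return pan, tilt

print(pan_tilt_offset(900, 500))  # degrees to rotate so the target is centered
```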

  6. INDUSTRIAL ROBOT REPEATABILITY TESTING WITH HIGH SPEED CAMERA PHANTOM V2511

    Directory of Open Access Journals (Sweden)

    Jerzy Józwik

    2016-12-01

    Full Text Available Apart from accuracy, one of the parameters describing industrial robots is positioning repeatability. The parameter in question, which is the subject of this paper, is often the decisive factor determining whether to apply a given robot to perform certain tasks or not. Articulated robots are predominantly used in such processes as spot welding, transport of materials and other welding applications, where high positioning repeatability is required. It is therefore essential to recognise the parameter in question and to control it throughout the operation of the robot. This paper presents a methodology for robot positioning-repeatability measurements based on a vision technique. The measurements were conducted with a Phantom v2511 high-speed camera and TEMA Motion software for motion analysis. The object of the measurements was a 6-axis Yaskawa Motoman HP20F industrial robot. The results of measurements obtained in tests provided data for the calculation of the positioning repeatability of the robot, which was then juxtaposed against the robot specifications. Also analysed was the impact of the direction of displacement on the value of attained pose errors. Test results are given in graphic form.

  7. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner

  8. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electronics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat'l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
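
    In the spirit of the two records above, here is a sketch that detects people with OpenCV's stock HOG + linear-SVM detector (full-body, not the authors' head-and-shoulder model) and estimates range from the detection height via the pinhole model. The focal length, assumed person height, and image file are illustrative assumptions.

```python
# HOG+SVM person detection and pinhole range estimate from box height.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("frame.png")                    # hypothetical camera frame
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))

F = 700.0        # focal length in pixels (assumed)
H = 1.7          # assumed real person height in metres
for (x, y, w, h) in rects:
    distance = F * H / h                         # pinhole: h_px = F * H / Z
    print(f"person at ~{distance:.1f} m, box=({x},{y},{w},{h})")
```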

  9. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
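
    For contrast with the BQP formulation above, this sketch implements the greedy baseline the record compares against: repeatedly pick the candidate camera that covers the most not-yet-covered sample points. The paper's BQP adds visibility and camera-to-camera constraints that this toy isotropic-range version ignores; geometry and budgets are assumptions.

```python
# Greedy camera placement: maximize marginal coverage at each step.
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform(0, 50, (40, 2))     # candidate camera positions
points = rng.uniform(0, 50, (500, 2))        # locations that need coverage
RADIUS, BUDGET = 12.0, 5

# covers[i, j] is True if candidate i sees point j (isotropic range model).
covers = np.linalg.norm(candidates[:, None, :] - points[None, :, :], axis=2) < RADIUS

chosen, covered = [], np.zeros(len(points), dtype=bool)
for _ in range(BUDGET):
    gains = (covers & ~covered).sum(axis=1)  # new points each candidate would add
    best = int(gains.argmax())
    chosen.append(best)
    covered |= covers[best]
print("cameras:", chosen, "coverage:", covered.mean())
```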

  10. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera to an AGV is suitable for autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost. It is utilized to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles in a grid of cells of suitable size. The camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more optimal camera mounting angle is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
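
    A minimal sketch of the depth-to-grid step described above: back-project each ToF depth pixel to a 3D point, keep points in a height band as obstacles, and mark the corresponding 2D grid cells occupied. The intrinsics, grid extent, and height band are assumptions; the paper additionally handles mounting-angle effects and IR scattering.

```python
# Convert a ToF depth image into an occupancy grid for path planning.
import numpy as np

fx, fy, cx, cy = 90.0, 90.0, 100.0, 80.0        # assumed PMD intrinsics
CELL, GRID = 0.25, 80                           # 0.25 m cells, 20 m x 20 m map

def depth_to_grid(depth):                        # depth: (H, W) array in metres
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx                        # lateral offset
    y = (v - cy) * z / fy                        # height below optical axis
    grid = np.zeros((GRID, GRID), dtype=bool)
    obstacle = (y > -0.5) & (y < 0.5) & (z > 0)  # crude height band for obstacles
    gx = (x[obstacle] / CELL + GRID / 2).astype(int)
    gz = (z[obstacle] / CELL).astype(int)
    ok = (gx >= 0) & (gx < GRID) & (gz >= 0) & (gz < GRID)
    grid[gz[ok], gx[ok]] = True
    return grid

print(depth_to_grid(np.full((160, 200), 4.0)).sum(), "occupied cells")
```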

  11. Robotic Astronomy and the BOOTES Network of Robotic Telescopes

    Directory of Open Access Journals (Sweden)

    A. J. Castro-Tirado

    2011-01-01

    Full Text Available The Burst Observer and Optical Transient Exploring System (BOOTES) started in 1998 as a Spanish-Czech collaboration project devoted to the study of optical emissions from gamma-ray bursts (GRBs) that occur in the Universe. The first two BOOTES stations were located in Spain, 240 km apart, and included medium-size robotic telescopes with CCD cameras at the Cassegrain focus as well as all-sky cameras. The first observing station (BOOTES-1) is located at ESAt (INTA-CEDEA) in Mazagón (Huelva), and first light was obtained in July 1998. The second observing station (BOOTES-2) is located at La Mayora (CSIC) in Málaga and has been fully operating since July 2001. In 2009 BOOTES expanded abroad, with the third station (BOOTES-3) installed in Blenheim (South Island, New Zealand) as the result of a collaboration project with several institutions from the southern hemisphere. The fourth station (BOOTES-4) is on its way, to be deployed in 2011.

  12. A real-time networked camera system : a scheduled distributed camera system reduces the latency

    NARCIS (Netherlands)

    Karatoy, H.

    2012-01-01

    This report presents the results of a Real-time Networked Camera System, commissioned by the SAN Group at TU/e. Distributed systems are motivated by two reasons: the first is the physical environment as a requirement, and the second is to provide a better Quality of Service (QoS). This

  13. Calibration of robot tool centre point using camera-based system

    Directory of Open Access Journals (Sweden)

    Gordić Zaviša

    2016-01-01

    Full Text Available The Robot Tool Centre Point (TCP) calibration problem is of great importance for a number of industrial applications, and it is well known both in theory and in practice. Although various techniques have been proposed for solving this problem, they mostly require tool jogging or long processing times, both of which affect process performance by extending cycle time. This paper presents an innovative way of calibrating the TCP using a set of two cameras. The robot tool is placed in an area where images in two orthogonal planes are acquired by the cameras. Using robust pattern recognition, even a deformed tool can be identified in the images, and information about its current position and orientation is forwarded to the control unit for calibration. Compared to other techniques, test results show a significant reduction in procedure complexity and calibration time. These improvements enable more frequent TCP checking and recalibration during production, thus improving product quality.

  14. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; this is a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
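
    The two computational steps named in this record — recovering a 3D shuttlecock position from two calibrated cameras and predicting where it lands — can be illustrated with textbook formulas. The sketch below is not the authors' code: it uses plain linear (DLT) triangulation and a drag-free ballistic model (a real shuttlecock decelerates strongly), and the projection matrices are assumed inputs.

        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """Linear (DLT) triangulation from two 3x4 projection matrices."""
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]  # homogeneous -> Euclidean

        def landing_xy(pos, vel, g=9.81):
            """Drag-free touchdown point from position/velocity (z up, metres)."""
            t = (vel[2] + np.sqrt(vel[2] ** 2 + 2 * g * pos[2])) / g
            return pos[0] + vel[0] * t, pos[1] + vel[1] * t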

  15. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting and different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data, thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
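
    As a toy illustration of the mapping described here — (gray value, sensor temperature, exposure time) to true intensity — a small regression network can be fitted on synthetic data. The architecture, value ranges and synthetic dark-current model below are all assumptions for demonstration, not the authors' design.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 5000
        gray = rng.uniform(0, 255, n)       # observed pixel value
        temp = rng.uniform(10, 50, n)       # sensor temperature, deg C
        expo = rng.uniform(1, 100, n)       # exposure time, ms
        # Synthetic "truth": dark current grows nonlinearly with temperature.
        intensity = (gray - 0.02 * expo * np.exp(0.05 * temp)) / 0.9

        X = np.column_stack([gray, temp, expo])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=1000, random_state=0))
        model.fit(X, intensity)
        print("R^2 on training data:", model.score(X, intensity))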

  16. Handling uncertainty and networked structure in robot control

    CERN Document Server

    Tamás, Levente

    2015-01-01

    This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...

  17. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Full Text Available Existing surveillance systems impose a high level of security on humans but pay little attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material, so it is imperative to detect stray dogs and take corrective action. In this paper, a novel composite approach to detecting the presence of stray dogs is proposed. The frame captured from the surveillance camera is first pre-processed with a Gaussian filter to remove noise. The foreground object of interest is extracted using the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, deriving the shape and size information of the extracted foreground object. Finally, stray dogs are distinguished from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV and further validated with real-time video feeds taken from an existing surveillance system. The results show that a classification accuracy of about 96% is achieved, which encourages the use of the proposed composite algorithm in real-time surveillance systems.
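
    The classification stage of this pipeline (HOG shape descriptor plus a degree-3 polynomial SVM) is standard enough to sketch with OpenCV and scikit-learn. The random crops below merely stand in for labelled foreground patches; the Gaussian denoising and ViBe foreground-extraction steps of the full pipeline are omitted.

        import cv2
        import numpy as np
        from sklearn.svm import SVC

        hog = cv2.HOGDescriptor()  # default 64x128 detection window

        def hog_features(crops):
            return np.array([hog.compute(c).ravel() for c in crops])

        # Placeholder 128x64 grayscale patches standing in for real data.
        dog_crops = [np.random.randint(0, 255, (128, 64), np.uint8) for _ in range(20)]
        human_crops = [np.random.randint(0, 255, (128, 64), np.uint8) for _ in range(20)]

        X = hog_features(dog_crops + human_crops)
        y = [0] * 20 + [1] * 20               # 0 = dog, 1 = human
        clf = SVC(kernel="poly", degree=3).fit(X, y)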

  18. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  19. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    Science.gov (United States)

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted. PMID:22294927

  20. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    Directory of Open Access Journals (Sweden)

    Aníbal Ollero

    2010-03-01

    Full Text Available In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted.

  1. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordin...

  2. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, making intelligent reuse of intermediary features in successive detectors to reduce computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm as a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, non-upright humans, humans leaving and re-entering the field of view (i.e., the re-identification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an

  3. A Review on Sensor Network Issues and Robotics

    Directory of Open Access Journals (Sweden)

    Ji Hyoung Ryu

    2015-01-01

    Full Text Available The interaction of distributed robotics and wireless sensor networks has led to the creation of mobile sensor networks. There has been increasing interest in building mobile sensor networks, and they are the favored class of WSNs, in which mobility plays a key role in the execution of an application. More and more research focuses on the development of mobile wireless sensor networks (MWSNs) due to their favorable advantages and applications. Robotics can play a crucial role in WSNs, and integrating static nodes with mobile robots enhances the capabilities of both types of devices and enables new applications. In this paper we present an overview of mobile sensor networks in robotics, robotics in sensor networks, and robotic sensor network applications.

  4. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
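
    The core estimator named in this record, a Rao-Blackwellized particle filter, maintains pose particles and (in the full method) one landmark filter per particle. The heavily simplified sketch below keeps only the pose particles, weighting them by how well a predicted range to one known landmark matches a measurement; every number in it is illustrative, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 200
        particles = rng.normal([0.0, 0.0], 0.5, size=(N, 2))  # (x, y) hypotheses
        landmark = np.array([2.0, 1.0])                       # known map point

        def step(particles, control, z, sigma=0.1):
            """Propagate with the control, weight by the range measurement z,
            then resample in proportion to the weights."""
            particles = particles + control + rng.normal(0, 0.05, particles.shape)
            pred = np.linalg.norm(landmark - particles, axis=1)
            w = np.exp(-0.5 * ((z - pred) / sigma) ** 2)
            w /= w.sum()
            idx = rng.choice(len(particles), len(particles), p=w)
            return particles[idx]

        particles = step(particles, control=np.array([0.1, 0.0]), z=2.2)
        print("pose estimate:", particles.mean(axis=0))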

  5. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  6. An Inexpensive Method for Kinematic Calibration of a Parallel Robot by Using One Hand-Held Camera as Main Sensor

    Directory of Open Access Journals (Sweden)

    Ricardo Carelli

    2013-08-01

    Full Text Available This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. As its main sensor, it uses a camera installed in the robot hand that determines the relative position of the robot with respect to a spherical object fixed in the robot's working area. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new set of parameters that minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are used to model the projection of the object in the image and thereby improve spatial measurements. Finally, several static and tracking tests are executed in order to verify how the behaviour of the robotic system improves when using calibrated rather than nominal parameters. It should be emphasized that the proposed method uses neither external nor expensive sensors, which makes it useful in teaching and research activities.

  7. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    Full Text Available The purpose of this work is to explore the design principles for a real-time robotic multi-camera vision system, in a case study involving a real-world autonomous driving competition. Design practices from the vision and real-time research areas are applied to a real-time robotic vision application, exemplifying good algorithm design practices, the advantages of employing the "zero copy one pass" methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a "flat" signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and finally selects the controller hardware. Optimization of the presented algorithms yielded improvements of between 1.5 and 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on platforms with lower computing power. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned robotic vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to a proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist.

  8. People detection and tracking using RGB-D cameras for mobile robots

    Directory of Open Access Journals (Sweden)

    Hengli Liu

    2016-09-01

    Full Text Available People detection and tracking is an essential capability for mobile robots in order to achieve natural human-robot interaction. In this article, a human detection and tracking system is designed and validated for mobile robots using RGB-depth (RGB-D) cameras, which provide color data with depth information. The whole framework is composed of human detection, tracking and re-identification. Firstly, ground points and ceiling planes are removed to reduce the computational effort. A prior-knowledge-guided random sample consensus fitting algorithm is used to detect the ground plane and ceiling points. All remaining points are projected onto the ground plane and subclusters are segmented for candidate detection. Mean-shift clustering with an Epanechnikov kernel is conducted to partition the points into subclusters. We propose the new idea of spatial region-of-interest plan view maps, which are employed to identify human candidates from the point cloud subclusters. A depth-weighted histogram is extracted online to characterize each human candidate. Then, a particle filter algorithm is adopted to track the human's motion; the integration of the depth-weighted histogram and the particle filter provides a precise tool for tracking the motion of human objects. Finally, data association is set up to re-identify humans who are tracked. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
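
    The ground-removal step described here can be sketched as a RANSAC-style plane fit over an Nx3 point cloud: sample three points, score the candidate plane by its inliers, keep the best, and discard those inliers before clustering. The iteration count and distance tolerance below are invented, and this is only a schematic version of the prior-knowledge-guided fit the authors describe.

        import numpy as np

        def ransac_ground(points, iters=100, tol=0.05, seed=2):
            """Return the cloud with the dominant plane's inliers removed."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            for _ in range(iters):
                p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p2 - p1, p3 - p1)
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue  # degenerate (collinear) sample
                n = n / norm
                dist = np.abs((points - p1) @ n)  # point-to-plane distance
                inliers = dist < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return points[~best_inliers]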

  9. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before their design and manufacturing, so the results of the project have the potential to reduce the cost and improve the efficiency of robot design and building.

  10. Detecting TLEs using a massive all-sky camera network

    Science.gov (United States)

    Garnung, M. B.; Celestin, S. J.

    2017-12-01

    Transient Luminous Events (TLEs) are large-scale optical events occurring in the upper atmosphere, from the tops of thunderclouds up to the ionosphere. TLEs may have important effects on local, regional, and global scales, and many features of TLEs are not yet fully understood [e.g., Pasko, JGR, 115, A00E35, 2010]. Moreover, meteor events have been suggested to play a role in sprite initiation by producing ionospheric irregularities [e.g., Qin et al., Nat. Commun., 5, 3740, 2014]. The French Fireball Recovery and InterPlanetary Observation Network (FRIPON, https://www.fripon.org/?lang=en) is a national all-sky 30 fps camera network designed to continuously detect meteor events. We seek to make use of this network to observe TLEs over unprecedented space and time scales (about 1000×1000 km with continuous acquisition). To do so, we had to significantly modify FRIPON's triggering software Freeture (https://github.com/fripon/freeture) while leaving the meteor detection capability uncompromised. FRIPON has great potential in the study of TLEs. Not only could it produce new results about the spatial and temporal distributions of TLEs over a very large area, it could also be used to validate and complement observations from future space missions such as ASIM (ESA) and TARANIS (CNES). In this work, we present an original image processing algorithm that can detect sprites using all-sky cameras while strongly limiting the frequency of false positives, and our ongoing work on sprite triangulation using the FRIPON network.

  11. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras monitor the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics monitors the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and instruments. The Quince 2 robot measured radiation on the refueling floor of the unit 2 reactor building of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate values from the unit 2 reactor building refueling floor images using an optical character recognition method.
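
    The conversion step described at the end of this record — turning the dosimeter read-out in the camera image into a number — can be approximated with off-the-shelf OCR. The sketch below uses pytesseract on a thresholded region of interest; the file name, ROI coordinates and character whitelist are placeholders, and the paper's actual OCR method may differ.

        import cv2
        import pytesseract

        frame = cv2.imread("quince2_frame.png")        # placeholder file name
        roi = frame[100:160, 220:400]                  # assumed display region
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
        print("dose rate reading:", text.strip())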

  12. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras monitor the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics monitors the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and instruments. The Quince 2 robot measured radiation on the refueling floor of the unit 2 reactor building of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate values from the unit 2 reactor building refueling floor images using an optical character recognition method.

  13. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Zou Xiaotao

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition, which provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may change in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent, dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  14. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Xiaotao Zou

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  15. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    In many applications multiple robots in Mobile Ad-hoc Networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size...

  16. An Approach to Evaluate Stability for Cable-Based Parallel Camera Robots with Hybrid Tension-Stiffness Properties

    Directory of Open Access Journals (Sweden)

    Huiling Wei

    2015-12-01

    Full Text Available This paper focuses on studying the effect of cable tensions and stiffness on the stability of cable-based parallel camera robots. For this purpose, the tension factor and the stiffness factor are defined, and the expression of stability is deduced. A new approach is proposed to calculate the hybrid-stability index with the minimum cable tension and the minimum singular value. Firstly, the kinematic model of a cable-based parallel camera robot is established. Based on the model, the tensions are solved and a tension factor is defined. In order to obtain the tension factor, an optimization of the cable tensions is carried out. Then, an expression of the system's stiffness is deduced and a stiffness factor is defined. Furthermore, an approach to evaluate the stability of the cable-based camera robots with hybrid tension-stiffness properties is presented. Finally, a typical three-degree-of-freedom cable-based parallel camera robot with four cables is studied as a numerical example. The simulation results show that the approach is both reasonable and effective.
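
    The record names a tension factor, a stiffness factor and a hybrid index built from the minimum cable tension and the minimum singular value, but gives no formulas. One plausible normalized form, written here purely as an assumption for illustration rather than the paper's definition, is:

        % Hedged reconstruction, not taken from the paper.
        \[
          k_t = \frac{\min_i t_i}{\max_i t_i}, \qquad
          k_s = \frac{\sigma_{\min}(K)}{\sigma_{\max}(K)}, \qquad
          S = w\,k_t + (1 - w)\,k_s, \quad 0 \le w \le 1,
        \]

    where the t_i are the cable tensions, K is the system stiffness matrix, and the weight w trades the two factors off; S then grows as both the tension distribution and the stiffness conditioning improve.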

  17. Passivity-based control and estimation in networked robotics

    CERN Document Server

    Hatanaka, Takeshi; Fujita, Masayuki; Spong, Mark W

    2015-01-01

    Highlighting the control of networked robotic systems, this book synthesizes a unified passivity-based approach to an emerging cross-disciplinary subject. Thanks to this unified approach, readers can access various state-of-the-art research fields by studying only the background foundations associated with passivity. In addition to the theoretical results and techniques, the authors provide experimental case studies on testbeds of robotic systems including networked haptic devices, visual robotic systems, robotic network systems and visual sensor network systems. The text begins with an introduction to passivity and passivity-based control together with the other foundations needed in this book. The main body of the book consists of three parts. The first examines how passivity can be utilized for bilateral teleoperation and demonstrates the inherent robustness of the passivity-based controller against communication delays. The second part emphasizes passivity’s usefulness for visual feedback control ...

  18. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    Full Text Available The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method directly controls the original system without any state or input transformation, which effectively avoids singularity. Under the proposed control law, it is rigorously proved that all the states of the closed-loop system can be stabilized to a prescribed, arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to solve the practical stabilization problem for a class of mobile robots with uncertain parameters (and angle measurement disturbance) that appeared in the literature, such as Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  19. RoboSmith: Wireless Networked Architecture for Multiagent Robotic System

    Directory of Open Access Journals (Sweden)

    Florin Moldoveanu

    2010-11-01

    Full Text Available This paper presents an architecture for a flexible mini robot for a multiagent robotic system. In a multiagent system the value of an individual agent is negligible, since the goal of the system is essential; thus, the agents (robots) need to be small, low cost and cooperative. RoboSmith robots are designed based on these conditions. The proposed architecture divides a robot into functional modules such as locomotion, control, sensors, communication, and actuation. Any mobile robot can be constructed by combining these functional modules for a specific application. Embedded software with dynamic task uploading and multi-tasking abilities is developed in order to create a better interface between the robots and the command center and among the robots themselves. Dynamic task uploading allows the robots to change their behaviors at runtime. The flexibility of the robots comes from the fact that they can work in a multiagent system, in master-slave or hybrid mode, and can be equipped with different modules and possibly be used in other applications such as mobile sensor networks, remote sensing, and plant monitoring.

  20. NRES: The Network of Robotic Echelle Spectrographs

    Science.gov (United States)

    Siverd, Robert; Brown, Tim; Henderson, Todd; Hygelund, John; Barnes, Stuart; de Vera, Jon; Eastman, Jason; Kirby, Annie; Smith, Cary; Taylor, Brook; Tufts, Joseph; van Eyken, Julian

    2018-01-01

    Las Cumbres Observatory (LCO) is building the Network of Robotic Echelle Spectrographs (NRES), which will consist of four (up to six in the future) identical, optical (390 - 860 nm) high-precision spectrographs, each fiber-fed simultaneously by up to two 1-meter telescopes and a Thorium-Argon calibration source. We plan to install one at up to 6 observatory sites in the Northern and Southern hemispheres, creating a single, globally-distributed, autonomous spectrograph facility using up to ten 1-m telescopes. Simulations suggest we will achieve long-term radial velocity precision of 3 m/s in less than an hour for stars brighter than V = 11 or 12 once the system reaches full capability. Acting in concert, these four spectrographs will provide a new, unique facility for stellar characterization and precise radial velocities.Following a few months of on-sky evaluation at our BPL test facility, the first spectrograph unit was shipped to CTIO in late 2016 and installed in March 2017. After several more months of additional testing and commissioning, regular science operations began with this node in September 2017. The second NRES spectrograph was installed at McDonald Observatory in September 2017 and released to the network after its own brief commissioning period, extending spectroscopic capability to the Northern hemisphere. The third NRES spectrograph was installed at SAAO in November 2017 and released to our science community just before year's end. The fourth NRES unit shipped in October and is currently en route to Wise Observatory in Israel with an expected release to the science community in early 2018.We will briefly overview the LCO telescope network, the NRES spectrograph design, the advantages it provides, and development challenges we encountered along the way. We will further discuss real-world performance from our first three units, initial science results, and the ongoing software development effort needed to automate such a facility for a wide array of

  1. Biologically based neural network for mobile robot navigation

    Science.gov (United States)

    Torres Muniz, Raul E.

    1999-01-01

    The new tendency in mobile robots is to create non-Cartesian systems based on reactions to their environment. This emerging technology is known as Evolutionary Robotics, combined with the field of Biorobotics. This new approach brings cost-effective solutions, flexibility, robustness, and dynamism to the design of mobile robots. It also provides fast reactions to sensory inputs and new interpretations of the environment or surroundings of the mobile robot. The Subsumption Architecture (SA) and the action selection dynamics developed by Brooks and Maes, respectively, have successfully produced autonomous mobile robots, initiating this new trend in Evolutionary Robotics. Their design keeps mobile robot control simple. This work presents a biologically inspired modification of these schemes. The hippocampal-CA3-based neural network (HCA3) developed by Williams Levy is used to implement the SA, while the action selection dynamics emerge from iterations of the levels of competence implemented with the HCA3. This replacement by the HCA3 results in a model closer to biology than the SA, combining behavior-based intelligence theory with neuroscience. The design is kept simple and is implemented on the Khepera miniature mobile robot. The control scheme yields an autonomous mobile robot that can execute mail delivery and surveillance tasks on a building floor.

  2. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system using three-dimensional measurement with solid object recognition —3D-MoS (Three Dimensional Move on Sensing)— based on visual servoing technology was designed, and the on-board hand-eye-dual-cameras robot system has been developed to reduce the risk of radiation exposure during decontamination processes with a filter press machine that solidifies and reduces the volume of contaminated soil. The features of 3D-MoS include: (1) the hand-eye dual cameras take images of the target object near the intersection of both lenses' centerlines; (2) observation at the intersection ensures that both cameras see the target object almost at the center of both images; (3) this reduces the effect of lens aberration and improves the accuracy of three-dimensional position detection. In this study, an accuracy validation test of the interdigitation of the robot's hand into the filter cloth rod of the filter press was performed; this task is crucial for the robot to remove the contaminated cloth from the filter press machine automatically and to prevent workers from being exposed to radiation. The following results were derived: (1) the 3D-MoS-controlled robot could recognize the rod at an arbitrary position within the designated space, and all insertion tests were carried out successfully; (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 mm, with a standard deviation of 0.6 mm or less. (author)

  3. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
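
    A toy version of this approach can be built in a few lines: generate joint-angle/end-effector pairs with known forward kinematics, then train a feedforward network to invert the map. A planar 2-DOF arm is used below to keep the kinematics short (the paper uses a 3-DOF spatial manipulator), and the link lengths, angle ranges and network size are arbitrary.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        L1, L2 = 1.0, 0.8
        rng = np.random.default_rng(3)
        theta = rng.uniform(0, np.pi / 2, size=(2000, 2))   # joint angles
        x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta.sum(axis=1))
        y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta.sum(axis=1))

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                           random_state=0)
        net.fit(np.column_stack([x, y]), theta)             # position -> angles
        print(net.predict([[1.2, 0.9]]))                    # predicted angles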

  4. Autonomous Deployment and Restoration of Sensor Network using Mobile Robots

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Suzuki

    2010-09-01

    Full Text Available This paper describes the autonomous deployment and restoration of a Wireless Sensor Network (WSN) using mobile robots. The authors have been developing an information-gathering system using mobile robots and WSNs in underground spaces in post-disaster environments. In our system, mobile robots carry wireless sensor nodes (SNs) and deploy them into the environment while measuring Received Signal Strength Indication (RSSI) values to ensure communication, thereby enabling the WSN to be deployed and restored autonomously. If the WSN is disrupted, the mobile robots restore the communication route by deploying additional or alternative SNs at suitable positions. Using the proposed method, a mobile robot can deploy a WSN and gather environmental information via the WSN. Experimental results using a verification system equipped with an SN deployment and retrieval mechanism are presented.
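
    The deployment rule in this record reduces to: while the robot advances, drop a new node whenever the link back to the last deployed node weakens past a threshold. A simulation-only sketch of that loop is shown below; the path-loss model, threshold and step size are all assumptions standing in for real radio measurements.

        import math
        import random

        RSSI_THRESHOLD = -75  # dBm; placeholder value

        def measure_rssi(distance_m):
            """Crude log-distance path-loss model with noise (assumption)."""
            return -40 - 25 * math.log10(max(distance_m, 1.0)) + random.gauss(0, 2)

        deployed_at = 0.0     # position (m) of the last node along the path
        nodes = [deployed_at]
        for pos in range(1, 200):                   # robot advances 1 m per step
            if measure_rssi(pos - deployed_at) < RSSI_THRESHOLD:
                deployed_at = float(pos)
                nodes.append(deployed_at)           # drop a node to keep the link
        print("nodes deployed at:", nodes)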

  5. Autonomous Deployment and Restoration of Sensor Network using Mobile Robots

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Suzuki

    2010-06-01

    Full Text Available This paper describes the autonomous deployment and restoration of a Wireless Sensor Network (WSN) using mobile robots. The authors have been developing an information-gathering system using mobile robots and WSNs in underground spaces in post-disaster environments. In our system, mobile robots carry wireless sensor nodes (SNs) and deploy them into the environment while measuring Received Signal Strength Indication (RSSI) values to ensure communication, thereby enabling the WSN to be deployed and restored autonomously. If the WSN is disrupted, the mobile robots restore the communication route by deploying additional or alternative SNs at suitable positions. Using the proposed method, a mobile robot can deploy a WSN and gather environmental information via the WSN. Experimental results using a verification system equipped with an SN deployment and retrieval mechanism are presented.

  6. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    Science.gov (United States)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a prime-focus camera for the 74-inch telescope, an equatorial design made by Grubb Parsons, where it would employ a 61 mm × 61 mm detector to cover a 23-arcmin-diameter field of view. However, while in the design phase, SAAO embarked on acquiring a bespoke 1-metre robotic alt-az telescope with a 43-arcmin field of view, which needs a homegrown instrument suite. The prime-focus camera design was thus adapted for use on either telescope, increasing the detector size to 92 mm × 92 mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74-inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600 mm diameter × 250 mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10-mm-thick window and a tip/tilt mechanism for the detector into 100 mm of depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter less than 20 mm thick has been designed in-house, using a sliding-curtain mechanism to cover a 125 mm × 125 mm aperture, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend to use through-vacuum-wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. This

  7. Four Degree Freedom Robot Arm with Fuzzy Neural Network Control

    Directory of Open Access Journals (Sweden)

    Şinasi Arslan

    2013-01-01

    Full Text Available In this study, the control of a four-degree-of-freedom robot arm has been realized with the computed torque control method. Four-jointed robot arms are usually required to have high precision and good maneuverability for use in industrial applications; in addition, high-speed operation and externally applied loads play important roles. For those purposes, the computed torque control method has been developed so that the robot arm can track the given trajectory, enhancing the feedback control together with fuzzy neural network control. The simulation results prove that computed torque control with the neural network is successful in robot control.

  8. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by the use of the system when compared to the literature average. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  9. Fractal gene regulatory networks for robust locomotion control of modular robots

    DEFF Research Database (Denmark)

    Zahadat, Payam; Christensen, David Johan; Schultz, Ulrik Pagh

    2010-01-01

    Designing controllers for modular robots is difficult due to the distributed and dynamic nature of the robots. In this paper fractal gene regulatory networks are evolved to control modular robots in a distributed way. Experiments with different morphologies of modular robot are performed and the ...

  10. MAHLI on Mars: lessons learned operating a geoscience camera on a landed payload robotic arm

    Science.gov (United States)

    Aileen Yingst, R.; Edgett, Kenneth S.; Kennedy, Megan R.; Krezoski, Gillian M.; McBride, Marie J.; Minitti, Michelle E.; Ravine, Michael A.; Williams, Rebecca M. E.

    2016-06-01

    The Mars Hand Lens Imager (MAHLI) is a 2-megapixel color camera with a resolution as high as 13.9 µm per pixel. MAHLI has operated successfully on the Martian surface for over 1150 Martian days (sols) aboard the Mars Science Laboratory (MSL) rover, Curiosity. During that time MAHLI acquired images to support science and science-enabling activities, including rock and outcrop textural analysis; sand characterization to further the understanding of global sand properties and processes; support of other instrument observations; sample extraction site documentation; range-finding for arm and instrument placement; rover hardware and instrument monitoring and safety; terrain assessment; landscape geomorphology; and support of rover robotic arm commissioning. Operation of the instrument has demonstrated that imaging fully illuminated, dust-free targets yields the best results, with complementary information obtained from shadowed images. The light-emitting diodes (LEDs) allow satisfactory night imaging but do not improve daytime shadowed imaging. MAHLI's combination of fine-scale, science-driven resolution, RGB color, the ability to focus over a large range of distances, and a relatively large field of view (FOV) has maximized the return of science and science-enabling observations given the MSL mission architecture and constraints.

  11. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    Science.gov (United States)

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-01-01

    This paper proposes the development of an automatic fruit harvesting system combining a low-cost stereovision camera and a robotic arm, with the camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low-cost stereovision system has been tested in laboratory conditions with a small reference object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of the pear and the apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions. PMID:24984059
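
    The size and distance estimates in this record follow from rectified stereo geometry: depth is focal length times baseline over disparity, and true diameter is pixel diameter times depth over focal length. The sketch below applies these two formulas with an invented focal length, baseline and measurements.

        def depth_from_disparity(f_px, baseline_m, disparity_px):
            """Z = f * B / d for a rectified stereo pair."""
            return f_px * baseline_m / disparity_px

        def diameter_m(f_px, depth_m, diameter_px):
            """Pinhole model: real size = pixel size * depth / focal length."""
            return diameter_px * depth_m / f_px

        Z = depth_from_disparity(f_px=700.0, baseline_m=0.06, disparity_px=35.0)
        print("depth:", Z, "m")                              # 1.2 m
        print("diameter:", diameter_m(700.0, Z, 52.0), "m")  # ~0.089 m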

  12. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    Directory of Open Access Journals (Sweden)

    Davinia Font

    2014-06-01

    Full Text Available This paper proposes the development of an automatic fruit harvesting system combining a low-cost stereovision camera and a robotic arm, with the camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low-cost stereovision system has been tested in laboratory conditions with a small reference object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of the pear and the apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions.

  13. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Full Text Available We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  14. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental proposal (i.e., increasing the number of training cycles) to ensure the occurrence of the short-term plasticity, and preliminarily demonstrated that the improvement of the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and also theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  15. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  16. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  17. Sedimentological Investigations of the Martian Surface using the Mars 2001 Robotic Arm Camera and MECA Optical Microscope

    Science.gov (United States)

    Rice, J. W., Jr.; Smith, P. H.; Marshall, J. R.

    1999-01-01

    The first microscopic sedimentological studies of the Martian surface will commence with the landing of the Mars Polar Lander (MPL) on December 3, 1999. The Robotic Arm Camera (RAC) has a resolution of 25 µm/pixel, which will permit detailed micromorphological analysis of surface and subsurface materials. The Robotic Arm will be able to dig up to 50 cm below the surface. The walls of the trench will also be inspected by the RAC to look for evidence of stratigraphic and/or sedimentological relationships. The 2001 Mars Lander will build upon and expand the sedimentological research begun by the RAC on MPL. This will be accomplished by: (1) macroscopic (dm to cm) instruments: Descent Imager, Pancam, RAC; (2) microscopic (mm to µm) instruments: RAC, MECA Optical Microscope (Figure 2), AFM. This paper will focus on investigations that can be conducted by the RAC and the MECA Optical Microscope.

  18. Remote Lab for Robotics Applications

    Directory of Open Access Journals (Sweden)

    Robinson Jiménez

    2018-01-01

    Full Text Available This article describes the development of a remote lab environment used for testing and training sessions in robotics tasks. This environment is made up of components and devices based on two robotic arms, a network link, an Arduino card with an Arduino Ethernet shield, and an IP camera. The remote laboratory is implemented to perform remote control of the robotic arms with visual feedback by camera of the robots' actions, where, with a group of test users, it was possible to obtain telecontrol task performance of up to 92%.

  19. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process, some of which are listed below: traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, and recognition of objects and activities; aggregating local information for wide...

  20. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    … networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services … are provided to the application running on the robot. The services are generic because they can be used by many different applications, independent of the task to be solved. Likewise, specific services are requested from the underlying Virtual Machine, such as broadcast, multicast, and reliable unicast. … A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group …

  1. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    Science.gov (United States)

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the matrix relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
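
    For reference, the AX = ZB model can be written out in the conventional homogeneous-transform form (the notation below is the standard one, not necessarily the paper's):

        \[
          A_i X = Z B_i, \qquad i = 1, \dots, n, \qquad
          A_i = \begin{bmatrix} R_{A_i} & t_{A_i} \\ 0^{\top} & 1 \end{bmatrix},
        \]
        \[
          R_{A_i} R_X = R_Z R_{B_i}, \qquad
          R_{A_i}\, t_X + t_{A_i} = R_Z\, t_{B_i} + t_Z,
        \]

    where A_i is the world-to-camera transform at robot pose i, B_i the robot base-to-hand transform, X the hand-eye transform and Z the robot-world transform. Rather than solving these equations algebraically, parameterizations of the kind the paper discusses choose rotation and translation representations so that the camera reprojection error can be minimized directly.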

  2. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors, such as illumination change and the brightness of the background, make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  3. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

    In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view, without initial training. To effectively exchange the tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve the broadcast effectiveness in a dense saturated camera network. Experimental results for multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost approaches the theoretical performance upper bound if cameras are within each other's transmission range. In more general scenarios, e.g., in the case of hidden node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially when the number of competing nodes increases.
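
    The abstract does not give iPro's update rule, so the following is only a guessed illustration of an idle-probability-based broadcast policy: each node keeps an exponentially weighted estimate of how often the channel is idle and broadcasts with a probability scaled by that estimate. All names and constants are assumptions.

        import random

        class IProLikeNode:
            """Illustrative idle-probability-based broadcaster (not the paper's exact rule)."""

            def __init__(self, alpha=0.1, p_min=0.05, p_max=1.0):
                self.alpha = alpha        # EWMA smoothing factor
                self.idle_est = 1.0       # estimated probability that the channel is idle
                self.p_min, self.p_max = p_min, p_max

            def observe_slot(self, channel_idle):
                # Exponentially weighted estimate of channel idleness.
                self.idle_est = (1 - self.alpha) * self.idle_est + self.alpha * float(channel_idle)

            def should_broadcast(self):
                # Broadcast aggressively on a mostly idle channel; back off under contention.
                p = min(self.p_max, max(self.p_min, self.idle_est))
                return random.random() < p

        node = IProLikeNode()
        for busy in [False, False, True, True, True]:
            node.observe_slot(channel_idle=not busy)
        print(f"idle estimate {node.idle_est:.2f}, broadcast now? {node.should_broadcast()}")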

  4. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction, and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panoramas as training samples and the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  5. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction, and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panoramas as training samples and the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  6. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  7. Optimal Cable Tension Distribution of the High-Speed Redundant Driven Camera Robots Considering Cable Sag and Inertia Effects

    Directory of Open Access Journals (Sweden)

    Yu Su

    2014-03-01

    Full Text Available Camera robots are high-speed, redundantly cable-driven parallel manipulators that realize aerial panoramic photographing. When long-span cables and high maneuverability are involved, the effects of cable sags and inertias on the dynamics must be carefully dealt with. This paper is devoted to the optimal cable tension distribution (OCTD for short) of camera robots. Firstly, each fast varying-length cable is discretized into a number of nodes for computing the cable inertias. Secondly, the dynamic equation integrated with the cable inertias is set up, regarding the large-span cables as catenaries. Thirdly, an iterative optimization algorithm is introduced for the cable tension distribution, using the dynamic equation and sag-to-span ratios as constraint conditions. Finally, numerical examples are presented to demonstrate the effects of cable sags and inertias on the determined tensions. The results justify the convergence and effectiveness of the algorithm. In addition, the results show that it is necessary to take the cable sags and inertias into consideration for large-span manipulators.
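
    For reference, the catenary relations underlying such sag constraints take the standard form below (symbols ours; w is the cable weight per unit length, H the horizontal tension component, L the span, and s the mid-span sag; the paper's exact constraint formulation is not reproduced here):

        \[
          y(x) = a \cosh\!\left(\frac{x}{a}\right), \qquad a = \frac{H}{w}, \qquad
          s = a\left(\cosh\frac{L}{2a} - 1\right),
        \]

    so that a sag-to-span bound s/L <= r_max translates into bounds on the admissible horizontal tensions used in the iterative optimization.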

  8. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

    Full Text Available With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into a 3D target estimate. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera, along with the target models, are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present a performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.

  9. An Evaluation of Camera Pose Methods for an Augmented Reality System: Application to Teaching Industrial Robots

    OpenAIRE

    Maidi , Madjid; Mallem , Malik; Benchikh , Laredj; Otmane , Samir

    2013-01-01

    In the automotive industry, industrial robots are widely used in production lines for many tasks such as welding, painting or assembly. Their use requires from users both good manipulation and robot-control skills. Recently, new tools have been developed to realize fast and accurate trajectories in many production sectors by using a real vehicle prototype or a generalized design within a virtual simulation platform. However, many issues could be considered in these cases...

  10. Web based educational tool for neural network robot control

    Directory of Open Access Journals (Sweden)

    Jure Čas

    2007-05-01

    Full Text Available This paper describes an application for teleoperation of the SCARA robot via the internet. The SCARA robot is used by mechatronics students at the University of Maribor as a remote educational tool. The developed software consists of two parts, i.e., the continuous neural network sliding mode controller (CNNSMC) and the graphical user interface (GUI). The application is based on two well-known, commercially available software packages, i.e., MATLAB/Simulink and LabVIEW. MATLAB/Simulink and the DSP2 Library for Simulink are used for control algorithm development, simulation and executable code generation. While this code executes on the DSP-2 Roby controller and drives the real process through the analog and digital I/O lines, a LabVIEW virtual instrument (VI), running on the PC, is used as a user front end. The LabVIEW VI provides the ability for on-line parameter tuning, signal monitoring and on-line analysis, and, via Remote Panels technology, also teleoperation. The main advantage of a CNNSMC is the exploitation of its self-learning capability. When friction or an unexpected impediment occurs, for example, the user of a remote application has no information about the changed robot dynamics and is thus unable to compensate for it manually. This is no longer a control problem because, when a CNNSMC is used, any change in the robot dynamics is approximated independently of the remote user. Index Terms—LabVIEW; MATLAB/Simulink; neural network control; remote educational tool; robotics

  11. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the image areas that overlap between cameras. Accurate detection of common objects is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed with a nonlinear regression method. This method can indicate the contribution of an object's color to the color calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. It is clear that this method is suitable for practical multi-camera color calibration provided that enough samples are obtained.
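
    A schematic of the MSER-plus-SIFT detection step might look as follows in OpenCV; this is only an illustration of combining the two detectors, not the authors' exact pipeline, and the regression step that follows is omitted.

        import cv2

        def common_object_matches(img_a, img_b):
            """Find candidate common-object matches between two camera views."""
            gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
            gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

            # 1) Maximally stable extremal regions as candidate object areas.
            mser = cv2.MSER_create()
            regions_a, _ = mser.detectRegions(gray_a)

            # 2) SIFT keypoints matched across the two cameras.
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(gray_a, None)
            kp_b, des_b = sift.detectAndCompute(gray_b, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)

            # Lowe ratio test; a fuller pipeline would additionally keep only
            # matches that fall inside an MSER region.
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            return regions_a, kp_a, kp_b, good

    The RGB values sampled from the matched areas in both views would then feed the adaptive nonlinear regression that estimates the color mapping between cameras.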

  12. Contrasting Web Robot and Human Behaviors with Network Models

    OpenAIRE

    Brown, Kyle; Doran, Derek

    2018-01-01

    The web graph is a commonly used network representation of the hyperlink structure of a website. A network of similar structure to the web graph, which we call the session graph, has properties that reflect the browsing habits of the agents in the web server logs. In this paper, we apply session graphs to compare the activity of humans against web robots or crawlers. Understanding these properties will enable us to improve models of HTTP traffic, which can be used to predict and generate reali...

  13. Real-Time Range Sensing Video Camera for Human/Robot Interfacing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  14. A Sliding Mode Control-based on a RBF Neural Network for Deburring Industry Robotic Systems

    OpenAIRE

    Tao, Yong; Zheng, Jiaqi; Lin, Yuanchang

    2016-01-01

    A sliding mode control method based on a radial basis function (RBF) neural network is proposed for the deburring of industry robotic systems. First, a dynamic model of the deburring robot system is established. Then, a conventional SMC scheme is introduced for the joint position tracking of robot manipulators. The RBF neural network based sliding mode control (RBFNN-SMC) has the ability to learn uncertain control actions. In the RBFNN-SMC scheme, the adaptive tuning algorithms for network par...

  15. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  16. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is subject to research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's (mobile) computational power, we think that the optimal camera network should be designed in the field, thereby making preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with a reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network is designed, assuming a high-resolution state-of-the-art non-metric camera. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental tests, we found that, after optimization, the maximum coverage is attained alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include a reliable and detailed modeling of the object applying sophisticated dense matching techniques.

  17. Tracking Mobile Robot in Indoor Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Liping Zhang

    2014-01-01

    Full Text Available This work addresses the problem of tracking mobile robots in indoor wireless sensor networks (WSNs). Our approach is based on a localization scheme using RSSI (received signal strength indication), which is widely used in WSNs. The developed tracking system is designed for continuous estimation of the robot's trajectory. A WSN, composed of many very simple and cheap wireless sensor nodes, is deployed in a specific region of interest. The wireless sensor nodes collect RSSI information sent by the mobile robots. A range-based data fusion scheme is used to estimate the robot's trajectory. Moreover, a Kalman filter is designed to improve the tracking accuracy. Experiments are provided to assess the performance of the proposed scheme.
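
    As a rough illustration of the two ingredients named in the abstract, the sketch below converts RSSI to range with the common log-distance path-loss model and smooths the result with a scalar Kalman filter; the model constants and noise variances are illustrative assumptions, not values from the paper.

        class ScalarKalman:
            """Constant-position Kalman filter for one scalar state."""

            def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
                self.x, self.p, self.q, self.r = x0, p0, q, r

            def update(self, z):
                self.p += self.q                 # predict step inflates covariance
                k = self.p / (self.p + self.r)   # Kalman gain
                self.x += k * (z - self.x)       # correct with measurement z
                self.p *= (1.0 - k)
                return self.x

        def rssi_to_distance(rssi_dbm, rssi0=-40.0, n=2.5, d0=1.0):
            """Invert the log-distance model RSSI = rssi0 - 10 n log10(d / d0)."""
            return d0 * 10 ** ((rssi0 - rssi_dbm) / (10.0 * n))

        kf = ScalarKalman(x0=rssi_to_distance(-55.0))
        for rssi in [-55.0, -57.0, -54.0, -58.0]:
            d = kf.update(rssi_to_distance(rssi))
        print(f"smoothed range estimate: {d:.2f} m")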

  18. Optimizing Double-Network Hydrogel for Biomedical Soft Robots.

    Science.gov (United States)

    Banerjee, Hritwick; Ren, Hongliang

    2017-09-01

    Double-network hydrogel with standardized chemical parameters demonstrates a reasonable and viable alternative to silicone in soft robotic fabrication due to its biocompatibility, comparable mechanical properties, and customizability through the alteration of key variables. The most viable hydrogel sample in our article shows a tensile strain of 851% and a maximum tensile strength of 0.273 MPa. The elasticity and strength range of this hydrogel can be customized according to application requirements by simple alterations in the recipe. Furthermore, we incorporated Agar/PAM hydrogel into our highly constrained soft pneumatic actuator (SPA) design and eventually produced SPAs with escalated capabilities, such as a larger range of motion, higher force output, and power efficiency. Incorporating Agar/PAM hydrogel in the SPAs resulted in low viscosity, thermo-reversibility, and ultralow elasticity, which we believe can be combined with the other functions of hydrogels, tailoring a better solution for fabricating biocompatible soft robots.

  19. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  20. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  1. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  2. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    Science.gov (United States)

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  3. Sensor fusion in smart camera networks for ambient Intelligence

    NARCIS (Netherlands)

    Maatta, T.T.

    2013-01-01

    This short report introduces the topics of PhD research that was conducted on 2008-2013 and was defended on July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for fusion-friendly design of vision networks, and elaborates on the rules through fusion

  4. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a steering direction in a supervised mode. The data set images are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments, line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual tests, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
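
    The two augmentations named in the abstract are easy to reproduce; the sketch below applies them to an 8-bit RGB frame, with the noise level and corruption fraction as illustrative choices rather than the paper's settings.

        import numpy as np

        def add_gaussian_noise(img, sigma=10.0, seed=0):
            """Additive zero-mean Gaussian noise, clipped back to [0, 255]."""
            rng = np.random.default_rng(seed)
            noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
            return np.clip(noisy, 0, 255).astype(np.uint8)

        def add_salt_and_pepper(img, frac=0.02, seed=0):
            """Set a random fraction of pixels to pure black or pure white."""
            rng = np.random.default_rng(seed)
            out = img.copy()
            mask = rng.random(img.shape[:2])
            out[mask < frac / 2] = 0          # pepper
            out[mask > 1 - frac / 2] = 255    # salt
            return out

        frame = np.full((64, 64, 3), 128, np.uint8)   # placeholder camera frame
        augmented = [add_gaussian_noise(frame), add_salt_and_pepper(frame)]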

  5. People identification for domestic non-overlapping RGB-D camera networks

    NARCIS (Netherlands)

    Takac, B.; Rauterberg, G.W.M.; Català, A.; Chen, W.

    2015-01-01

    The ability to identify the specific person in a home camera network is very relevant for healthcare applications where humans need to be observed daily in their living environment. The appearance based people identification in a domestic environment has many similarities with the problem of

  6. A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks

    Directory of Open Access Journals (Sweden)

    Cemal Melih Tanis

    2018-06-01

    Full Text Available A system for multiple camera networks is proposed for continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Image PROcessing Toolbox (FMIPROT), which includes data acquisition, processing and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI), for which only minimal computer knowledge and skills are required. Images from camera networks are acquired and handled automatically according to common communication protocols, e.g., the File Transfer Protocol (FTP). Processing features include GUI-based selection of the region of interest (ROI), an automatic analysis chain, and extraction of ROI-based indices such as the green fraction index (GF), red fraction index (RF), blue fraction index (BF), green-red vegetation index (GRVI), and green excess index (GEI), as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized in interactive plots both in the GUI and in hypertext markup language (HTML) reports. Users can implement their own algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows series of analyses to run unattended and scheduled on servers. The system is demonstrated using an environmental camera network in Finland.
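
    The listed indices have standard phenology-camera definitions, which the sketch below computes over an ROI; GRVI = (G - R)/(G + R) and GEI = 2G - R - B are the conventional formulas, and FMIPROT's exact internals may differ.

        import numpy as np

        def roi_indices(img, roi):
            """img: HxWx3 uint8 RGB array; roi: boolean HxW mask selecting the ROI."""
            r, g, b = (img[..., i][roi].astype(np.float64).mean() for i in range(3))
            total = r + g + b
            return {
                "GF": g / total,              # green fraction
                "RF": r / total,              # red fraction
                "BF": b / total,              # blue fraction
                "GRVI": (g - r) / (g + r),    # green-red vegetation index
                "GEI": 2 * g - r - b,         # green excess index
            }

        img = np.random.default_rng(0).integers(0, 256, (480, 640, 3), np.uint8)
        roi = np.zeros((480, 640), bool); roi[100:300, 200:500] = True
        print(roi_indices(img, roi))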

  7. Visual control of a robot manipulator using neural networks

    International Nuclear Information System (INIS)

    Kurazume, Ryo; Sekiguchi, Minoru; Nagata, Shigemi

    1994-01-01

    This paper describes a vision-motor fusion system using neural networks, consisting of multiple vision sensors and a manipulator, for grasping an object placed in a desired position and attitude in a three-dimensional workspace. The system does not need complicated vision sensor calibration and calculation of a transformation matrix, and can thus be easily constructed for grasping tasks. An experimental system with two TV cameras and a manipulator with six degrees of freedom grasped a connector suspended in a three-dimensional workspace with high accuracy. (author)

  8. Kinematic Analysis of 3-DOF Planer Robot Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Jolly Atit Shah

    2012-07-01

    Full Text Available Automatic control of a robotic manipulator involves the study of kinematics and dynamics as a major issue. This paper addresses the forward and inverse kinematics of a 3-DOF robotic manipulator with revolute joints. In this study the Denavit-Hartenberg (D-H) model is used to model the robot links and joints. Forward and inverse kinematics solutions are also obtained using artificial neural networks for the 3-DOF robotic manipulator. The results show that, by using an artificial neural network, the solution obtained is faster, acceptable, and has zero error.
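
    For the planar 3-DOF case in the paper's title, the D-H model reduces to the closed-form forward kinematics below (link lengths and joint angles are illustrative); an inverse-kinematics network of the kind described would then be trained on (x, y, phi) -> (t1, t2, t3) pairs generated by sampling this map.

        import numpy as np

        def fk_planar(theta, lengths=(0.30, 0.25, 0.15)):
            """End-effector pose of a 3-link planar arm with revolute joints."""
            t1, t2, t3 = theta
            l1, l2, l3 = lengths
            x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2) + l3 * np.cos(t1 + t2 + t3)
            y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2) + l3 * np.sin(t1 + t2 + t3)
            phi = t1 + t2 + t3              # end-effector orientation
            return x, y, phi

        print(fk_planar((0.3, -0.5, 0.2)))  # pose for one example joint configuration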

  9. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera on a moving vehicle always suffers from image instability due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach combined with linear quadratic regulator (LQR) control for a quarter-car active suspension system to stabilize the captured image area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress the vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weight matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  10. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  11. Recursive 3D-reconstruction of structured scenes using a moving camera - application to robotics

    International Nuclear Information System (INIS)

    Boukarri, Bachir

    1989-01-01

    This thesis is devoted to the perception of a structured environment, and proposes a new method which allows a 3D reconstruction of an interesting part of the world using a mobile camera. Our work is divided into three essential parts dedicated to the 2D-information aspect, the 3D-information aspect, and a validation of the method. In the first part, we present a method which produces a topologic and geometric image representation based on 'segment' and 'junction' features. Then, a 2D-matching method based on a hypothesis prediction and verification algorithm is proposed to match features issued from two successive images. The second part deals with 3D reconstruction using a triangulation technique, and discusses our new method introducing an 'Estimation-Construction-Fusion' process. This ensures a complete and accurate 3D representation, and a permanent estimation of the camera position with respect to the model. The merging process allows refinement of the 3D representation using a powerful tool: a Kalman filter. In the last part, experimental results obtained from simulated and real image data are reported to show the efficiency of the method. (author) [fr

  12. LCOGT: A World-Wide Network of Robotic Telescopes

    Science.gov (United States)

    Brown, T.

    2013-05-01

    Las Cumbres Observatory Global Telescope (LCOGT) is an organization dedicated to time-domain astronomy. To carry out the necessary observations in fields such as supernovae, extrasolar planets, small solar-system bodies, and pulsating stars, we have developed and are now deploying a set of robotic optical telescopes at sites around the globe. In this talk I will concentrate on the core of this network, consisting of up to 15 identical 1 m telescopes deployed across multiple sites in both the northern and southern hemispheres. I will summarize the technical and performance aspects of these telescopes, including both their imaging and their anticipated spectroscopic capabilities. But I will also delve into the network organization, including communication among telescopes (to assure that observations are properly carried out), interactions among the institutions and scientists who will use the network (to optimize the scientific returns), and our funding model (which until now has relied entirely on one private donor, but will soon require funding from outside sources if the full potential of the network is to be achieved).

  13. Robot Towed Shortwave Infrared Camera for Specific Surface Area Retrieval of Surface Snow

    Science.gov (United States)

    Elliott, J.; Lines, A.; Ray, L.; Albert, M. R.

    2017-12-01

    Optical grain size and specific surface area are key parameters for measuring the atmospheric interactions of snow, as well as tracking metamorphosis and allowing for the ground truthing of remote sensing data. We describe a device using a shortwave infrared camera with changeable optical bandpass filters (centered at 1300 nm and 1550 nm) that can be used to quickly measure the average SSA over an area of 0.25 m^2. The device and method are compared with calculations made from measurements taken with a field spectral radiometer. The instrument is designed to be towed by a small autonomous ground vehicle, and therefore rides above the snow surface on ultra high molecular weight polyethylene (UHMW) skis.

  14. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS provides a real-time obstacle avoidance capability in the Belarus-132N mobile robot in cooperation with the Phantom-4 quadcopter. QMRS combines GPS used by the mobile robot, vision and image-processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. Having the capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain, and transmits the processed images to the terrestrial robot. The main objective of this paper is the full coordination between the robot and the quadcopter, achieved by designing an efficient wireless communication link using WiFi. In addition, it describes the method involving the use of the vision and image-processing systems of both the robot and the quadcopter, analyzing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing, and obstacle avoidance, due to the cooperation and connection among the different parts of the system.

  15. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of the video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  16. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of the video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
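
    The abstracts do not give the model equations, so the following is only a toy illustration of the idea: a 1-D activator-inhibitor (FitzHugh-Nagumo-style) reaction-diffusion system over a ring of camera nodes, with a stimulus where a target is seen and the coding rate taken proportional to the activator concentration. All constants are invented for the sketch.

        import numpy as np

        def rd_step(u, v, stim, du=0.1, dv=0.3, dt=0.1):
            """One explicit reaction-diffusion update on a periodic 1-D grid."""
            lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2 * a
            u_new = u + dt * (du * lap(u) + u - u**3 - v + stim)   # activator
            v_new = v + dt * (dv * lap(v) + u - v)                 # inhibitor
            return u_new, v_new

        n = 20
        u, v = np.zeros(n), np.zeros(n)
        stim = np.zeros(n); stim[8:11] = 1.0    # target seen by nodes 8-10
        for _ in range(200):
            u, v = rd_step(u, v, stim)
        rates = np.clip(u, 0, None)             # higher activator -> higher coding rate
        print(np.round(rates, 2))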

  17. Neural Network Based Reactive Navigation for Mobile Robot in Dynamic Environment

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.; Ripel, T.

    2013-01-01

    Roč. 198, č. 2013 (2013), s. 108-113 ISSN 1012-0394 Institutional research plan: CEZ:AV0Z20760514 Institutional support: RVO:61388998 Keywords : mobile robot * reactive navigation * artificial neural networks Subject RIV: JD - Computer Applications, Robotics

  18. Airborne particle monitoring with urban closed-circuit television camera networks and a chromatic technique

    International Nuclear Information System (INIS)

    Kolupula, Y R; Jones, G R; Deakin, A G; Spencer, J W; Aceves-Fernandez, M A

    2010-01-01

    An economical approach to the preliminary assessment of 2–10 µm sized (PM10) airborne particle levels in urban areas is described. It uses existing urban closed-circuit television (CCTV) surveillance camera networks in combination with particle accumulating units and chromatic quantification of polychromatic light scattered by the captured particles. Methods for accommodating extraneous light effects are discussed, and test results obtained from real urban sites are presented to illustrate the potential of the approach.

  19. Reconfigurable FPGA architecture for computer vision applications in Smart Camera Networks

    OpenAIRE

    Maggiani , Luca; Salvadori , Claudio; Petracca , Matteo; Pagano , Paolo; Saletti , Roberto

    2013-01-01

    Smart Camera Networks (SCNs) are nowadays an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In such a scenario, one of the biggest efforts lies in the definition of a flexible and reconfigurable SCN node architecture able to remotely support the updating of application parameters and the changing of the running computer vision applications at run-time. In th...

  20. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    Science.gov (United States)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g., to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our cameras they often appear in small quantities within the dominant species in the images. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network of the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and with field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-color-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for

  1. Robotic platform for traveling on vertical piping network

    Science.gov (United States)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D

    2015-02-03

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  2. Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Muhammad Khurram Khan

    2011-01-01

    Full Text Available Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs feedback ciphertext to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. The transmission security between the sink node and the server is achieved by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, the computational costs and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks.
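
    For background, the kind of logistic-map keystream that such schemes build on (and that the reported attack exploits when the keystream is reused) looks like the toy below; this illustrates the baseline idea only, not the improved cipher, and is not secure for real use.

        def logistic_stream(x0, r=3.99, nbytes=16, burn_in=100):
            """Toy keystream from the logistic map x <- r * x * (1 - x)."""
            x = x0
            for _ in range(burn_in):            # discard the transient
                x = r * x * (1 - x)
            out = bytearray()
            for _ in range(nbytes):
                x = r * x * (1 - x)
                out.append(int(x * 256) % 256)  # quantize the state to a key byte
            return bytes(out)

        pt = b"camera frame 001"
        ks = logistic_stream(x0=0.3141592, nbytes=len(pt))
        ct = bytes(p ^ k for p, k in zip(pt, ks))
        assert bytes(c ^ k for c, k in zip(ct, ks)) == pt   # XOR with the same keystream decrypts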

  3. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods.

  4. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Full Text Available Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  5. Sulfates, Clouds and Radiation Brazil (SCAR-B) AERONET (AErosol RObotic NETwork) Data

    Data.gov (United States)

    National Aeronautics and Space Administration — SCAR_B_AERONET data are Smoke, Clouds and Radiation Brazil (SCAR-B) Aerosol Robotic Network (AERONET) data for aerosol characterization. Smoke/Sulfates, Clouds and...

  6. Experiments in Neural-Network Control of a Free-Flying Space Robot

    National Research Council Canada - National Science Library

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype...

  7. A Sliding Mode Control-Based on a RBF Neural Network for Deburring Industry Robotic Systems

    Directory of Open Access Journals (Sweden)

    Yong Tao

    2016-01-01

    Full Text Available A sliding mode control method based on a radial basis function (RBF) neural network is proposed for the deburring of industrial robotic systems. First, a dynamic model of the deburring robot system is established. Then, a conventional SMC scheme is introduced for the joint position tracking of robot manipulators. The RBF neural network based sliding mode control (RBFNN-SMC) has the ability to learn uncertain control actions. In the RBFNN-SMC scheme, the adaptive tuning algorithms for the network parameters are derived by a Koski function algorithm to ensure network convergence and stable control. The simulation and experimental results of the deburring robot system are provided to illustrate the effectiveness of the proposed RBFNN-SMC control method. The advantages of the proposed RBFNN-SMC method are also evaluated by comparing it to existing control schemes.
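
    A minimal sketch of the RBFNN-SMC idea, assuming a single joint, a toy plant and invented gains: a smoothed sliding-mode term drives the tracking error toward the sliding surface while an RBF network adapts online to compensate the uncertain dynamics. This illustrates the control structure only, not the paper's derivation or stability proof.

    ```python
    # Single-joint sliding mode control with an online-adapting RBF
    # compensator (toy plant; all gains invented for this sketch).
    import numpy as np

    centers = np.linspace(-2.0, 2.0, 7)   # RBF centers over the error space
    w = np.zeros(7)                        # adaptive RBF weights
    lam, k, gamma, dt = 5.0, 2.0, 0.5, 1e-3

    def rbf(x, width=0.5):
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

    q, qd = 0.0, 0.0                       # joint angle and velocity
    for step in range(5000):
        t = step * dt
        qr, qr_d = np.sin(t), np.cos(t)    # reference trajectory
        e, e_d = qr - q, qr_d - qd
        s = e_d + lam * e                  # sliding surface
        phi = rbf(e)
        u = k * np.tanh(s / 0.1) + w @ phi # smoothed SMC + RBF compensation
        w += gamma * s * phi * dt          # adaptive weight update
        qdd = u - 0.3 * np.sin(q)          # toy plant, unknown gravity term
        qd += qdd * dt
        q += qd * dt
    print(f"tracking error at t=5 s: {abs(np.sin(5.0) - q):.4f}")
    ```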

  8. Neural Network Observer-Based Finite-Time Formation Control of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Caihong Zhang

    2014-01-01

    Full Text Available This paper addresses the leader-following formation problem of nonholonomic mobile robots. In the formation, only the pose (i.e., the position and direction angle) of the leader robot can be obtained by the follower. First, the leader-following formation is transformed into a special trajectory tracking problem. Then, a neural network (NN) finite-time observer of the follower robot is designed to estimate the dynamics of the leader robot. Finally, finite-time formation control laws are developed for the follower robot to track the leader robot at the desired separation and bearing in finite time. The effectiveness of the proposed NN finite-time observer and the formation control laws is illustrated by both qualitative analysis and simulation results.
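
    The separation-bearing goal can be illustrated with a simple kinematic law that uses only the leader's pose, as the abstract stipulates. This sketch is not the paper's finite-time observer-based controller; the gain and the poses below are invented.

    ```python
    # Kinematic leader-following at desired separation d_des and bearing
    # b_des, using only the leader's pose (simple proportional law).
    import math

    def follower_cmd(leader, follower, d_des, b_des, k=1.0):
        """leader/follower: (x, y, theta). Returns (vx, vy) toward the slot."""
        lx, ly, lth = leader
        fx, fy, _ = follower
        # Desired follower position: an offset expressed in the leader frame.
        gx = lx + d_des * math.cos(lth + b_des)
        gy = ly + d_des * math.sin(lth + b_des)
        return k * (gx - fx), k * (gy - fy)

    # Follower should sit 1 m directly behind the leader (bearing = pi).
    print(follower_cmd((0, 0, 0), (-2, 1, 0), d_des=1.0, b_des=math.pi))
    ```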

  9. Path Planning and Navigation for Mobile Robots in a Hybrid Sensor Network without Prior Location Information

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-03-01

    Full Text Available In a hybrid wireless sensor network with mobile and static nodes, which have no prior geographical knowledge, successful navigation for mobile robots is one of the main challenges. In this paper, we propose two novel navigation algorithms for outdoor environments, which permit robots to travel from one static node to another along a planned path in the sensor field, namely the RAC and the IMAP algorithms. Using these, the robot can navigate without the help of a map, GPS or extra sensor modules, using only the received signal strength indication (RSSI) and odometry. Therefore, our algorithms have the advantage of being cost-effective. In addition, a path planning algorithm to schedule the mobile robots' travelling paths is presented, which focuses on shorter distances and robust paths for robots by considering the RSSI-distance characteristics. The simulations and experiments conducted with an autonomous mobile robot show the effectiveness of the proposed algorithms in an outdoor environment.
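
    The RSSI-guided navigation idea (moving toward a static node using only signal strength and odometry) can be sketched as a greedy probe-and-step loop. The log-distance path-loss model and all constants below are assumptions; the paper's RAC and IMAP algorithms are more elaborate.

    ```python
    # Greedy RSSI-guided step toward a static node: probe a few headings
    # and keep the one with the strongest signal (path-loss model and all
    # constants are assumptions for this sketch).
    import math
    import random

    def rssi(robot, node, p0=-40.0, n=2.2):
        d = max(math.dist(robot, node), 0.1)
        return p0 - 10 * n * math.log10(d) + random.gauss(0, 0.5)

    robot, target, step = [0.0, 0.0], (12.0, 8.0), 0.5
    for _ in range(200):
        best = max(
            (math.radians(h) for h in range(0, 360, 45)),
            key=lambda h: rssi((robot[0] + step * math.cos(h),
                                robot[1] + step * math.sin(h)), target),
        )
        robot[0] += step * math.cos(best)
        robot[1] += step * math.sin(best)
        if rssi(robot, target) > -45:      # close enough to the static node
            break
    print(f"reached ({robot[0]:.1f}, {robot[1]:.1f})")
    ```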

  10. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  11. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    Science.gov (United States)

    Sanders, Adam M. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor); Strawser, Philip A. (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  12. PhenoCam Dataset v1.0: Digital Camera Imagery from the PhenoCam Network, 2000-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset provides a time series of visible-wavelength digital camera imagery collected through the PhenoCam Network at each of 133 sites in North America and...

  13. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors, such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network and present lessons learned in the building and daily usage of the network.

  14. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    International Nuclear Information System (INIS)

    Mo, Se Hyun; Jeon, Young Pil; Park, Jong Ho; Chong, Kil To

    2017-01-01

    With the development of ICT technology, the indoor utilization of robots is increasing, and research on transportation, cleaning, and guidance robots that can be used now, or that broaden the scope of future use, will advance. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research addresses the issues related to the self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of the robot's encoder information and the absolute space coordinate transformation information obtained from external video sources, such as the large number of CCTVs installed in the room. Furthermore, the vector field histogram method of the path traveling algorithm of the mobile robot system was applied, and the results of the research were confirmed through experiments.
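
    The fusion step can be illustrated with a scalar Kalman filter per axis: the encoder (odometry) data supplies the prediction, and the absolute coordinate recovered from an external CCTV camera supplies the correction. The noise variances below are invented for this demo.

    ```python
    # Scalar Kalman update per axis: the encoder displacement u predicts
    # the position, the CCTV-derived absolute coordinate z corrects it
    # (noise variances q and r are assumptions).
    def kalman_axis(x, P, u, z, q=0.05, r=0.20):
        x, P = x + u, P + q                   # predict with odometry
        K = P / (P + r)                       # Kalman gain
        return x + K * (z - x), (1 - K) * P   # correct with camera fix

    x, P = 0.0, 1.0
    for u, z in [(0.5, 0.48), (0.5, 1.07), (0.5, 1.52)]:
        x, P = kalman_axis(x, P, u, z)
        print(f"x = {x:.3f}  P = {P:.3f}")
    ```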

  15. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Mo, Se Hyun [Amotech, Seoul (Korea, Republic of); Jeon, Young Pil [Samsung Electronics Co., Ltd. Suwon (Korea, Republic of); Park, Jong Ho [Seonam Univ., Namwon (Korea, Republic of); Chong, Kil To [Chonbuk Nat'l Univ., Jeonju (Korea, Republic of)

    2017-07-15

    With the development of ICT technology, the indoor utilization of robots is increasing, and research on transportation, cleaning, and guidance robots that can be used now, or that broaden the scope of future use, will advance. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research addresses the issues related to the self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of the robot's encoder information and the absolute space coordinate transformation information obtained from external video sources, such as the large number of CCTVs installed in the room. Furthermore, the vector field histogram method of the path traveling algorithm of the mobile robot system was applied, and the results of the research were confirmed through experiments.

  16. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
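
    The visual triangulation underlying this comparison can be sketched as a least-squares intersection of bearing rays from two or more cameras. The camera positions and azimuths below are invented; the actual procedure additionally involves calibration against GPS-surveyed references.

    ```python
    # Least-squares intersection of camera bearing rays (positions and
    # azimuths are invented; real data would come from calibrated frames).
    import math
    import numpy as np

    def triangulate(cams):
        """cams: list of ((x, y), azimuth_rad). Returns the intersection."""
        A, b = [], []
        for (x, y), az in cams:
            dx, dy = math.cos(az), math.sin(az)
            # Any point p on the ray satisfies (-dy, dx) . p = (-dy, dx) . c
            A.append([-dy, dx])
            b.append(-dy * x + dx * y)
        p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return p

    print(triangulate([((0.0, 0.0), math.radians(40)),
                       ((13.0, 0.0), math.radians(120))]))
    ```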

  17. Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Lorino, P; Altwegg, J M

    1985-05-01

    This article, which is aimed at the general reader, examines latest developments in, and the role of, modern robotics. The 7 main sections are sub-divided into 27 papers presented by 30 authors. The sections are as follows: 1) The role of robotics, 2) Robotics in the business world and what it can offer, 3) Study and development, 4) Utilisation, 5) Wages, 6) Conditions for success, and 7) Technological dynamics.

  18. Decentralized Control of Unmanned Aerial Robots for Wireless Airborne Communication Networks

    Directory of Open Access Journals (Sweden)

    Deok-Jin Lee

    2010-09-01

    Full Text Available This paper presents a cooperative control strategy for a team of aerial robotic vehicles to establish wireless airborne communication networks between distributed heterogeneous vehicles. Each aerial robot serves as a flying mobile sensor performing as a reconfigurable communication relay node, which enables communication networks with static or slow-moving nodes on the ground or ocean. For distributed optimal deployment of the aerial vehicles in the communication networks, an adaptive hill-climbing type decentralized control algorithm is developed to seek out local extrema for the optimal localization of the vehicles. The sensor networks established by the decentralized cooperative control approach can adapt their configuration in response to signal strength as a function of the relative distance between the autonomous aerial robots and the distributed sensor nodes in the sensed environment. Simulation studies are conducted to evaluate the effectiveness of the proposed decentralized cooperative control technique for robust communication networks.
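
    The adaptive hill-climbing idea can be sketched as a greedy local search in which a relay vehicle keeps any small move that improves the weakest link to the nodes it serves. The inverse-square quality model and the coordinates below are assumptions for this demo.

    ```python
    # Greedy hill climbing: the relay keeps any small random move that
    # improves the minimum link quality to its ground nodes (the
    # inverse-square quality model is an assumption).
    import math
    import random

    nodes = [(0.0, 0.0), (10.0, 2.0), (4.0, 9.0)]   # static/slow ground nodes

    def quality(p):
        return min(1.0 / (1.0 + math.dist(p, n) ** 2) for n in nodes)

    relay = (8.0, 8.0)
    for _ in range(500):
        cand = (relay[0] + random.uniform(-0.3, 0.3),
                relay[1] + random.uniform(-0.3, 0.3))
        if quality(cand) > quality(relay):   # keep only improving moves
            relay = cand
    print(f"relay settled near ({relay[0]:.1f}, {relay[1]:.1f})")
    ```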

  19. Distributed, Collaborative Human-Robotic Networks for Outdoor Experiments in Search, Identify and Track

    Science.gov (United States)

    2011-01-11

    …and its variance $\sigma^2_{\hat{U}_i}$ are determined: $\hat{U}_i = \hat{u}_i + P^{u,EN}\,(P^{EN})^{-1}\left[\begin{pmatrix} E_{jc} \\ N_{jc} \end{pmatrix} - \begin{pmatrix} \hat{e}_i \\ \hat{n}_i \end{pmatrix}\right]$ (15) and $\sigma^2_{\hat{U}_i} = P^{u}_i - P^{u,EN}_i\,(P^{EN}_i)^{-1}\,P^{EN,u}_i$ (16), where … screen; the operator can click a robot's camera view to select it as the Focus Robot. The Focus Robot's camera stream is enlarged and displayed in the …

  20. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242).

    Science.gov (United States)

    Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of the neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of the current configuration of joint angles in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of the ANN in estimating the robot's joint angles.
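
    The input-pattern change described here (feeding the current joint angles back into the network alongside the desired pose) can be sketched in a few lines. The PyTorch MLP below uses invented layer sizes for an assumed 6-DOF arm; in practice the training pairs would be generated from the arm's forward kinematics.

    ```python
    # MLP whose input is the desired end-effector pose concatenated with
    # the current joint angles (the feedback described above); dimensions
    # are assumptions for a 6-DOF arm.
    import torch
    import torch.nn as nn

    pose_dim, n_joints = 6, 6          # (x, y, z, roll, pitch, yaw), 6 DOF
    net = nn.Sequential(
        nn.Linear(pose_dim + n_joints, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, n_joints),      # predicted joint angles
    )

    desired_pose = torch.randn(1, pose_dim)
    current_q = torch.zeros(1, n_joints)            # joint-angle feedback
    next_q = net(torch.cat([desired_pose, current_q], dim=1))
    print(next_q.shape)  # torch.Size([1, 6])
    ```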

  1. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    Science.gov (United States)

    Dülger, L. Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of the neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of the current configuration of joint angles in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of the ANN in estimating the robot's joint angles. PMID:27610129

  2. Decentralized coverage control problems for mobile robotic sensor and actuator networks

    CERN Document Server

    Savkin, A; Xi, Z; Javed, F; Matveev, A; Nguyen, H

    2015-01-01

    This book introduces various coverage control problems for mobile sensor networks, including barrier, sweep and blanket coverage. Unlike many existing algorithms, all of the robotic sensor and actuator motion algorithms developed in the book are fully decentralized or distributed, computationally efficient, easily implementable in engineering practice, and based only on information about the closest neighbours of each mobile sensor and actuator and local information about the environment. Moreover, the mobile robotic sensors have no prior information about the environment in which they operate. These various types of coverage problems have never been covered before by a single book in a systematic way. Another topic of this book is the study of mobile robotic sensor and actuator networks. Many modern engineering applications include the use of sensor and actuator networks to provide efficient and effective monitoring and control of industrial and environmental processes. Such mobile sensor and actuator networks are abl...

  3. FRAMEWORK FOR AD HOC NETWORK COMMUNICATION IN MULTI-ROBOT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Khilda Slyusar

    2016-11-01

    Full Text Available Consider a team of mobile robots operating in environments where no communication infrastructure such as routers or access points is available. The robots then have to create a mobile ad hoc network, which provides communication on a peer-to-peer basis. This paper gives an overview of existing solutions for routing messages in such ad hoc networks between robots that are not directly connected, and introduces the design of a software framework for the realization of such communication. The feasibility of the proposed framework is shown on the example of distributed multi-robot exploration of an a priori unknown environment. Testing of the developed functionality in an exploration scenario is based on the results of several experiments with various input conditions of the exploration process and various team sizes, and is described herein.

  4. Adaptive training of neural networks for control of autonomous mobile robots

    NARCIS (Netherlands)

    Steur, E.; Vromen, T.; Nijmeijer, H.; Fossen, T.I.; Nijmeijer, H.; Pettersen, K.Y.

    2017-01-01

    We present an adaptive training procedure for a spiking neural network, which is used for control of a mobile robot. Because of manufacturing tolerances, any hardware implementation of a spiking neural network has non-identical nodes, which limit the performance of the controller. The adaptive

  5. Automatic approach to stabilization and control for multi robot teams by multilayer network operator

    Directory of Open Access Journals (Sweden)

    Diveev Askhat

    2016-01-01

    Full Text Available The paper describes a novel methodology for synthesizing high-level control of autonomous multi-robot teams. The approach is based on the multilayer network operator method, which belongs to the symbolic regression class. Synthesis is accomplished in three steps: stabilizing the robots about some given position in the state space, finding optimal trajectories of the robots' motion as sets of stabilizing points, and then approximating all the points of the optimal trajectories by some multi-dimensional function of the state variables. The feasibility and effectiveness of the proposed approach are verified in simulations of the control synthesis task for three mobile robots parking in a constrained space.

  6. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Directory of Open Access Journals (Sweden)

    Claudia Casellato

    Full Text Available The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbations patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  7. Adaptive robotic control driven by a versatile spiking cerebellar network.

    Science.gov (United States)

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio

    2014-01-01

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbations patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.

  8. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  9. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  10. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242

    Directory of Open Access Journals (Sweden)

    Ahmed R. J. Almusawi

    2016-01-01

    Full Text Available This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of the neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by the ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of the current configuration of joint angles in the ANN significantly increased the accuracy of the ANN's estimation of the joint angle output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of the ANN in estimating the robot's joint angles.

  11. RoCoMAR: Robots' Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    Science.gov (United States)

    Van Le, Duc; Oh, Hoon; Yoon, Seokhoon

    2013-01-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses the controllable mobility of robotic nodes. RoCoMAR repeatedly performs a link-reinforcement process with the objective of maximizing the network throughput, in which the link with the lowest quality on the path is identified and replaced with high quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of the regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSN in terms of network throughput and end-to-end delay. PMID:23881134
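
    The link-reinforcement step follows directly from the description: identify the weakest link on the active route and place a robotic relay so as to split it. In the sketch below, a distance-based quality model stands in for measured link quality, and the route coordinates are invented.

    ```python
    # Weakest-link reinforcement: find the lowest-quality link on the
    # route and place a robotic relay at its midpoint.
    import math

    path = [(0, 0), (6, 1), (14, 2), (15, 9)]       # positions of route nodes

    def link_quality(a, b):
        return 1.0 / (1.0 + math.dist(a, b) ** 2)

    a, b = min(zip(path, path[1:]), key=lambda l: link_quality(*l))
    relay = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    print(f"insert relay at {relay} to split link {a}-{b}")
    ```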

  12. Towards the Robotic “Avatar”: An Extensive Survey of the Cooperation between and within Networked Mobile Sensors

    Directory of Open Access Journals (Sweden)

    Aydan M. Erkmen

    2010-09-01

    Full Text Available Cooperation between networked mobile sensors, wearable and sycophant sensor networks with parasitically sticking agents, and also having human beings involved in the loop is the “Avatarization” within the robotic research community, where all networks are connected and where you can connect/disconnect at any time to acquire data from a vast unstructured world. This paper extensively surveys the networked robotic foundations of this robotic biological “Avatar” that awaits us in the future. Cooperation between networked mobile sensors as well as cooperation of nodes within a network are becoming more robust, fault tolerant and enable adaptation of the networks to changing environment conditions. In this paper, we survey and comparatively discuss the current state of networked robotics via their critical application areas and their design characteristics. We conclude by discussing future challenges.

  13. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.

  14. Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network

    Directory of Open Access Journals (Sweden)

    Guan-Chun Luh

    2007-04-01

    Full Text Available This paper proposes a potential field immune network (PFIN) for the dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collisions of a robot moving in the time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.

  15. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  16. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  17. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Science.gov (United States)

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors needed to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304
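
    The deterministic deployment idea (tiling the ROI with a regular hexagonal virtual grid) can be sketched by generating the hexagon centers as candidate camera positions. The grid length r is the design parameter derived in the paper; here it is simply an input, and the rectangle dimensions are invented.

    ```python
    # Candidate camera positions on a flat-top hexagonal lattice of
    # circumradius r covering a width x height ROI (dimensions invented).
    import math

    def hex_centers(width, height, r):
        pts, col, x = [], 0, 0.0
        while x <= width:
            # Alternate columns are shifted by half the vertical spacing.
            y = 0.0 if col % 2 == 0 else math.sqrt(3) * r / 2
            while y <= height:
                pts.append((round(x, 2), round(y, 2)))
                y += math.sqrt(3) * r
            x += 1.5 * r
            col += 1
        return pts

    print(len(hex_centers(100, 60, r=5.0)), "candidate positions")
    ```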

  18. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng-Fei Wu

    2017-06-01

    Full Text Available Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors needed to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  19. Supervisory Adaptive Network-Based Fuzzy Inference System (SANFIS Design for Empirical Test of Mobile Robot

    Directory of Open Access Journals (Sweden)

    Yi-Jen Mon

    2012-10-01

    Full Text Available A supervisory Adaptive Network-based Fuzzy Inference System (SANFIS) is proposed for the empirical control of a mobile robot. This controller includes an ANFIS controller and a supervisory controller. The ANFIS controller is tuned off-line by an adaptive fuzzy inference system; the supervisory controller is designed to compensate for the approximation error between the ANFIS controller and the ideal controller, and to drive the trajectory of the system onto a specified surface (called the sliding surface or switching surface) while keeping the trajectory on this switching surface continuously to guarantee system stability. This SANFIS controller achieves favourable empirical control performance of the mobile robot in empirical tests driving the mobile robot along a square path. Practical experimental results demonstrate that the proposed SANFIS can achieve better control performance than that achieved using an ANFIS controller for empirical control of the mobile robot.

  20. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  1. Interpreting canopy development and physiology using a European phenology camera network at flux sites

    DEFF Research Database (Denmark)

    Wingate, L.; Ogeé, J.; Cremonese, E.

    2015-01-01

    … We also investigated whether the seasonal patterns of red, green and blue colour fractions derived from digital images could be modelled mechanistically using the PROSAIL model parameterised with information on seasonal changes in canopy leaf area and leaf chlorophyll and carotenoid concentrations … cameras installed on towers across Europe above deciduous and evergreen forests, grasslands and croplands, where vegetation and atmosphere CO2 fluxes are measured continuously. Using colour indices from digital images and piecewise regression analysis of time series, we explored whether key changes … in canopy phenology could be detected automatically across different land use types in the network. The piecewise regression approach could capture the start and end of the growing season, in addition to identifying striking changes in colour signals caused by flowering and management practices …
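
    The colour-index computation behind such analyses can be sketched as the green chromatic coordinate (GCC) of a canopy region of interest, tracked over a season. The threshold rule below is an illustrative stand-in for the piecewise regression mentioned in the abstract, and the synthetic greenness series is invented.

    ```python
    # Green chromatic coordinate of an RGB canopy image, plus a naive
    # start-of-season rule on a synthetic seasonal series.
    import numpy as np

    def gcc(image_rgb):
        """image_rgb: HxWx3 array; mean G / (R + G + B) over the region."""
        r, g, b = [image_rgb[..., i].astype(float) for i in range(3)]
        return float(np.mean(g / np.clip(r + g + b, 1e-6, None)))

    print(f"sample GCC: {gcc(np.random.randint(0, 256, (10, 10, 3))):.3f}")

    days = np.arange(365)
    series = 0.33 + 0.05 / (1 + np.exp(-(days - 120) / 10))  # fake greenness
    baseline = series[:60].mean()
    start_of_season = int(np.argmax(series > baseline + 0.025))
    print("estimated start of season: day", start_of_season)
    ```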

  2. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  3. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    Directory of Open Access Journals (Sweden)

    Rene de Jesus Romero-Troncoso

    2011-04-01

    Full Text Available Flexible manipulator robots have wide industrial application. Robot performance requires sensing its position and orientation adequately, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents the online measurement of each joint's angular position and the online forward kinematics estimation. The contribution of this work is to propose a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and to fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. With the aim of evaluating the smart sensor network performance, three real-operation-oriented paths are executed and monitored in a 6-degree-of-freedom robot.
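
    The per-joint fusion step can be sketched as a scalar Kalman update that blends the encoder-derived angle with the angle recovered from the accelerometer's gravity reading. The noise variances below are assumptions; in the actual system such filters run in FPGA hardware.

    ```python
    # Scalar Kalman blend of the encoder-derived joint angle with the
    # angle recovered from the accelerometer's gravity vector.
    import math

    def fuse(theta_enc, accel_xy, P, q=1e-4, r=4e-2):
        theta_acc = math.atan2(accel_xy[1], accel_xy[0])  # gravity angle
        P += q                           # predict using the encoder reading
        K = P / (P + r)
        return theta_enc + K * (theta_acc - theta_enc), (1 - K) * P

    theta, P = fuse(0.50, (math.cos(0.53), math.sin(0.53)), P=0.1)
    print(f"fused angle: {theta:.3f} rad")
    ```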

  4. Fused smart sensor network for multi-axis forward kinematics estimation in industrial robots.

    Science.gov (United States)

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; Romero-Troncoso, Rene de Jesus

    2011-01-01

    Flexible manipulator robots have wide industrial application. Robot performance requires sensing its position and orientation adequately, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot's position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of a high computational load, which prevents the online measurement of each joint's angular position and the online forward kinematics estimation. The contribution of this work is to propose a fused smart sensor network to estimate the forward kinematics of an industrial robot. The developed smart processor uses Kalman filters to filter and to fuse the information of the sensor network. Two primary sensors are used: an optical encoder and a 3-axis accelerometer. In order to obtain the position and orientation of each joint online, a field-programmable gate array (FPGA) is used in the hardware implementation, taking advantage of the parallel computation capabilities and reconfigurability of this device. With the aim of evaluating the smart sensor network performance, three real-operation-oriented paths are executed and monitored in a 6-degree-of-freedom robot.

  5. Robotics

    Indian Academy of Sciences (India)

    …netic induction to detect an object. The development of … end effector, inclination of object, magnetic and electric fields, etc. The sensors described … In the case of a robot, the various actuators and motors have to be modelled. The major …

  6. Hybrid Control of Long-Endurance Aerial Robotic Vehicles for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Deok-Jin Lee

    2011-06-01

    Full Text Available This paper presents an effective hybrid control approach for building stable wireless sensor networks between heterogeneous unmanned vehicles using long-endurance aerial robotic vehicles. For optimal deployment of the aerial vehicles in communication networks, a gradient climbing based self-estimating control algorithm is utilized to locate the aerial platforms to maintain maximum communication throughputs between distributed multiple nodes. The autonomous aerial robots, which function as communication relay nodes, extract and harvest thermal energy from the atmospheric environment to improve their flight endurance within specified communication coverage areas. The rapidly-deployable sensor networks with the high-endurance aerial vehicles can be used for various application areas including environment monitoring, surveillance, tracking, and decision-making support. Flight test and simulation studies are conducted to evaluate the effectiveness of the proposed hybrid control technique for robust communication networks.

  7. Precise Localization and Formation Control of Swarm Robots via Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Han Wu

    2014-01-01

    Full Text Available Precise localization and formation control are among the key technologies for achieving coordination and control of swarm robots, and they are currently a bottleneck for practical applications of swarm robotic systems. Aiming at overcoming the limited individual perception and the difficulty of achieving precise localization and formation, a localization approach combining dead reckoning (DR) with wireless sensor network (WSN)-based methods is proposed in this paper. Two kinds of WSN localization technologies are adopted in this paper, that is, ZigBee-based RSSI (received signal strength indication) global localization and electronic tag floors for calibration of local positioning. First, the DR localization information is combined with the ZigBee-based RSSI position information using the Kalman filter method to achieve precise global localization and maintain the robot formation. Then the electronic tag floors provide the robots with their precise coordinates in some local areas and enable the robot swarm to calibrate its formation by reducing the accumulated position errors. Hence, the overall performance of localization and formation control of the swarm robotic system is improved. Both simulation results and experimental results on a real schematic system are given to demonstrate the success of the proposed approach.

  8. A fast position estimation method for a control rod guide tube inspection robot with a single camera

    International Nuclear Information System (INIS)

    Lee, Jae C.; Seop, Jun H.; Choi, Yu R.; Kim, Jae H.

    2004-01-01

    One of the problems in the inspection of control rod guide tubes using a mobile robot is the accurate estimation of the robot's position. The problem is usually explained by the question 'Where am I?'. We can solve this question by a method called dead reckoning using odometers, but it has some inherent drawbacks, such that the position error grows without bound unless an independent reference is used periodically to reduce the errors. In this paper, we present one method to overcome this drawback by using a vision sensor. Our method is based on the classical Lucas-Kanade algorithm for image tracking. In this algorithm, an optical flow must be calculated at every image frame, so it has an intensive computing load. In order to handle large motions, it is preferable to use a large integration window. But a small integration window is more preferable to keep the details contained in the images. We used the robot's movement information obtained from the dead reckoning as an input parameter for the feature tracking algorithm in order to restrict the position of the integration window. By means of this method, we could reduce the size of the integration window without any loss of its ability to handle large motions, and could avoid the trade-off in accuracy. We could thus estimate the position of our robot relatively fast, without an intensive computing time or the inherent drawbacks mentioned above. We studied this algorithm for application to the control rod guide tube inspection robot and tried an inspection without an operator's intervention.
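
    The idea in this abstract (seeding the Lucas-Kanade search with the displacement predicted by dead reckoning, so that a small integration window still handles large motions) maps directly onto OpenCV's pyramidal LK tracker via its initial-flow option. The image file names and the odometry-predicted shift below are placeholders.

    ```python
    # Lucas-Kanade tracking seeded with the odometry-predicted
    # displacement, so a small window can still catch large motion.
    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                                  minDistance=8)
    odom_shift = np.array([12.0, -3.0], dtype=np.float32)  # dead reckoning
    guess = (pts + odom_shift).astype(np.float32)          # seed the search

    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev, curr, pts, guess,
        winSize=(15, 15),                 # small integration window
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    print(f"tracked {int(status.sum())} / {len(pts)} features")
    ```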

  9. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today are using mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have been recently developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range reading data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden of the stereo process. We therefore call the proposed system a multi-sensors multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.

  10. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  11. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  12. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    Science.gov (United States)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.
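
    To make the mapping-network idea concrete, here is a minimal radial basis function regressor fitted by linear least squares on the output weights. It is a stand-in sketch only: the paper trains the RBF weights with a node-decoupled extended Kalman filter, and the toy gaze-to-arm data, centre count, and kernel width below are all assumptions.

```python
import numpy as np

# Minimal radial basis function (RBF) mapping network. Output weights are
# solved by least squares; centres are taken from the training data.
class RBFNet:
    def __init__(self, centres, width=0.5):
        self.c = np.asarray(centres)       # (n_centres, n_in)
        self.s = width                      # shared Gaussian width
        self.W = None                       # (n_centres, n_out)

    def _phi(self, X):
        # Gaussian activations of every centre for every input sample.
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.s ** 2))

    def fit(self, X, Y):
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)

    def predict(self, X):
        return self._phi(X) @ self.W

# Toy eye->hand mapping: gaze angles (pan, tilt) to 2-D arm coordinates.
X = np.random.uniform(-1, 1, (200, 2))
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])])
net = RBFNet(centres=X[:25])
net.fit(X, Y)
print(net.predict(X[:3]))
```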

  13. Polish and European SST Assets: the Solaris-Panoptes Global Network of Robotic Telescopes and the Borowiec Satellite Laser Ranging System

    Science.gov (United States)

    Konacki, M.; Lejba, P.; Sybilski, P.; Pawłaszek, R.; Kozłowski, S.; Suchodolski, T.; Litwicki, M.; Kolb, U.; Burwitz, V.; Baader, J.; Groot, P.; Bloemen, S.; Ratajczak, M.; Helminiak, K.; Borek, R.; Chodosiewicz, P.

    2016-09-01

    We present the assets of the Nicolaus Copernicus Astronomical Center, the Space Research Center (both of the Polish Academy of Sciences), two Polish companies (Sybilla Technologies and Cillium Engineering) and a non-profit research foundation, the Baltic Institute of Technology. These assets are enhanced by telescopes belonging to The Open University (UK), the Max Planck Institute for Extraterrestrial Physics and, in the future, the Radboud University. They consist of the Solaris-Panoptes global network of optical robotic telescopes and the satellite laser ranging station in Borowiec, Poland. These assets will contribute to the Polish and European Space Surveillance and Tracking (SST) program. The Solaris component is composed of four autonomous observatories in the Southern Hemisphere. Solaris nodes are located at the South African Astronomical Observatory (Solaris-1 and Solaris-2), Siding Spring Observatory, Australia (Solaris-3) and Complejo Astronomico El Leoncito, Argentina (Solaris-4). They are equipped with 0.5-m telescopes on ASA DDM-160 direct drive mounts, Andor iKon-L cameras and housed in 3.5-m Baader Planetarium (BP) clamshell domes. The Panoptes component is a network of telescopes operated by software from Sybilla Technologies. It currently consists of 4 telescopes at three locations, all on GM4000 mounts. One 0.36-m (Panoptes-COAST, STL-1001E camera, 3.5-m BP clamshell dome) and one 0.43-m (Panoptes-PIRATE, FLI 16803 camera, 4.5-m BP clamshell dome, with planned exchange to 0.63-m) telescope are located at the Teide Observatory (Tenerife, Canary Islands), one 0.6-m (Panoptes-COG, SBIG STX 16803 camera, 4.5-m BP clamshell dome) telescope in Garching, Germany and one 0.5-m (Panoptes-MAM, FLI 16803 camera, 4.5-m BP slit dome) in Mammendorf, Germany. Panoptes-COAST and Panoptes-PIRATE are owned by The Open University (UK). Panoptes-COG is owned by the Max Planck Institute

  14. The use of time-of-flight camera for navigating robots in computer-aided surgery: monitoring the soft tissue envelope of minimally invasive hip approach in a cadaver study.

    Science.gov (United States)

    Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael

    2014-12-01

    Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery.

  15. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. Cloud provides unlimited computation power, memory, storage and especially collaboration opportunity. Cloud-enabled robots are divided into two categories as standalone and networked robots. This article surveys cloud robotic platforms, standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM and monitoring.

  16. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... accurate information concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc...

  17. Neural Network Control for the Linear Motion of a Spherical Mobile Robot

    Directory of Open Access Journals (Sweden)

    Yao Cai

    2011-09-01

    Full Text Available This paper discusses the stabilization and position tracking control of the linear motion of an underactuated spherical robot. By considering the actuator dynamics, a complete dynamic model of the robot is deduced, which is a complex third-order, two-variable nonlinear differential system in which the two variables are strongly coupled due to the mechanical structure of the robot. Different from traditional treatments, no linearization is applied to this system; instead, a single-input multiple-output PID (SIMO_PID) controller is designed by adopting a six-input single-output CMAC_GBF (Cerebellar Model Articulation Controller with General Basis Function) neural network to compensate the actuator nonlinearity, together with the credit assignment (CA) learning method to obtain faster convergence of the CMAC_GBF. The proposed controller is generalizable to other single-input multiple-output systems with good real-time capability. Simulations in Matlab are used to validate the control effects.

  18. Networked Control System for the Guidance of a Four-Wheel Steering Agricultural Robotic Platform

    Directory of Open Access Journals (Sweden)

    Eduardo Paciência Godoy

    2012-01-01

    Full Text Available A current trend in the agricultural area is the development of mobile robots and autonomous vehicles for precision agriculture (PA). One of the major challenges in the design of these robots is the development of the electronic architecture for the control of the devices. In a joint project among research institutions and a private company in Brazil, a multifunctional robotic platform for information acquisition in PA is being designed. This platform has as its main characteristics four-wheel propulsion and independent steering, adjustable width, a span of 1.80 m in height, a diesel engine, a hydraulic system, and a CAN-based networked control system (NCS). This paper presents an NCS solution for platform guidance by distributed control of the four-wheel hydraulic steering. The control strategy, centered on robot manipulator control theory, is based on the difference between the desired and actual position, taking the angular speed of the wheels into account. The results demonstrate that the NCS was simple and efficient, providing suitable steering performance for platform guidance. Despite its simplicity, the NCS solution also overcame several control challenges identified in the design of the robot guidance system, such as the hydraulic system delay, nonlinearities in the steering actuators, and inertia in the steering system due to the friction of different terrains.

  19. Robotic movement preferentially engages the action observation network

    NARCIS (Netherlands)

    Cross, E.S.; Liepelt, R.; Hamilton, A.F.D.C.; Parkinson, J.; Ramsey, R.; Stadler, W.; Prinz, W.G.

    2012-01-01

    As humans, we gather a wide range of information about other people from watching them move. A network of parietal, premotor, and occipitotemporal regions within the human brain, termed the action observation network (AON), has been implicated in understanding others' actions by means of an

  20. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  1. Design of Optimal Hybrid Position/Force Controller for a Robot Manipulator Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Vikas Panwar

    2007-01-01

    Full Text Available The application of quadratic optimization and a sliding-mode approach is considered for hybrid position and force control of a robot manipulator. The dynamic model of the manipulator is transformed into a state-space model containing two sets of state variables, where one describes the constrained motion and the other describes the unconstrained motion. The optimal feedback control law is derived by solving the matrix differential Riccati equation, which is obtained using Hamilton-Jacobi-Bellman optimization. The optimal feedback control law is shown to be globally exponentially stable using a Lyapunov function approach. The dynamic model uncertainties are compensated with a feedforward neural network. The neural network requires no preliminary offline training and is trained with online weight tuning algorithms that guarantee small errors and bounded control signals. The application of the derived control law is demonstrated through simulation with a 4-DOF robot manipulator tracking an elliptical planar constrained surface while applying the desired force on the surface.
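
    For intuition about the optimal-feedback step, the sketch below solves the steady-state (algebraic) form of the Riccati equation for a toy linear plant and forms the gain for u = -Kx. The paper solves the matrix differential Riccati equation for its constrained/unconstrained state-space model, so the matrices and weights here are purely illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hedged sketch of the optimal-feedback step only: a toy second-order
# plant standing in for the paper's linearized manipulator model.
A = np.array([[0.0, 1.0], [0.0, -0.5]])   # state matrix (position, velocity)
B = np.array([[0.0], [1.0]])               # input matrix
Q = np.diag([10.0, 1.0])                   # state weighting
R = np.array([[0.1]])                      # control weighting

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal gain, u = -K x
print(K)
```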

  2. Self-generation of controller of an underwater robot with neural network

    International Nuclear Information System (INIS)

    Suto, T.; Ura, T.

    1994-01-01

    A self-organizing controller system is constructed based on artificial neural networks and applied to constant-altitude swimming of the autonomous underwater robot PTEROA 150. The system consists of a controller and a forward model which calculates the values for evaluation as a result of control. Some methods are introduced for quick and appropriate adjustment of the controller network. Modification of the controller network is executed based on the error-back-propagation method utilizing the forward model network. The forward model is divided into three sub-networks which represent the dynamics of the vehicle, the estimation of the relative position to the seabed, and the calculation of the altitude. The proposed adaptive system is demonstrated in computer simulations where the objective of the vehicle is to keep a constant altitude above a seabed made up of triangular ridges.

  3. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.

  4. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
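
    The ensemble-selection step can be pictured with a standard 0-1 knapsack dynamic program, as sketched below. Treating each base classifier's accuracy as its value and a redundancy score as its weight is a hypothetical reading of "tailored"; the paper defines its own selection criteria.

```python
# Hedged sketch: select base classifiers under a budget with a classic
# 0-1 knapsack DP. Values/weights below are illustrative stand-ins.
def knapsack_select(values, weights, capacity):
    """Returns the indices of the chosen items (integer weights)."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]               # skip item i-1
            if weights[i - 1] <= c:               # or take it if it fits
                cand = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                if cand > dp[i][c]:
                    dp[i][c] = cand
    chosen, c = [], capacity
    for i in range(n, 0, -1):                     # backtrack through the table
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return chosen[::-1]

accuracies = [0.71, 0.69, 0.74, 0.66, 0.72]   # value: classifier accuracy
redundancy = [3, 2, 4, 1, 3]                   # weight: similarity to the pool
print(knapsack_select(accuracies, redundancy, capacity=6))
```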

  5. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  6. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    Directory of Open Access Journals (Sweden)

    Eduard eGrinke

    2015-10-01

    Full Text Available Walking animals, like insects, with little neural computing can effectively perform complex behaviors. They can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot is a challenging task. In this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a biomechanical walking robot. The turning information is transmitted as descending steering signals to the locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations as well as escaping from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments.

  7. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    Science.gov (United States)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles.

  8. An Artificial Neural Network Modeling for Force Control System of a Robotic Pruning Machine

    Directory of Open Access Journals (Sweden)

    Ali Hashemi

    2014-06-01

    Full Text Available Nowadays, there has been an increasing application of pruning robots in planted forests due to growing concern about efficiency and safety issues. Power consumption and working time of agricultural machines have become important issues due to the high value of energy in the modern world. In this study, different multi-layer back-propagation networks were utilized to map the complex and highly interactive pruning process parameters and to predict the power consumption and cutting time of a force-control-equipped robotic pruning machine from input parameters such as rotation speed, stalk diameter, and sensitivity coefficient. Results showed significant effects of all input parameters on the output parameters, except for rotational speed on cutting time. Therefore, to reduce the wear of the cutting system, a lower rotational speed should be selected for every sensitivity coefficient.
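
    A minimal sketch of the prediction setup described above, assuming a small back-propagation MLP mapping (rotation speed, stalk diameter, sensitivity coefficient) to (power consumption, cutting time); the synthetic training data and network size are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 300 pruning trials with three input parameters.
rng = np.random.default_rng(0)
X = rng.uniform([500, 5, 0.1], [3000, 50, 1.0], size=(300, 3))
power = 0.02 * X[:, 0] + 1.5 * X[:, 1] + 10 * X[:, 2]    # toy power model (W)
time_ = 50.0 / (1 + 0.001 * X[:, 0]) + 0.4 * X[:, 1]     # toy cutting time (s)
Y = np.column_stack([power, time_])

# Two hidden layers trained by back-propagation; predicts both outputs.
model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                     random_state=0).fit(X, Y)
print(model.predict([[1500, 20, 0.5]]))   # (power, cutting time) estimate
```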

  9. Aperiodic linear networked control considering variable channel delays: application to robots coordination.

    Science.gov (United States)

    Santos, Carlos; Espinosa, Felipe; Santiso, Enrique; Mazo, Manuel

    2015-05-27

    One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives.
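
    One way to picture a delay-adaptive triggering condition is sketched below: a new control update is transmitted only when the tracking error exceeds a threshold that widens as the measured channel delay grows, trading control effort for channel load. This reading of the adaptation, the function name, and all constants are assumptions for illustration, not the paper's triggering law.

```python
# Hedged sketch of a delay-adaptive self-triggering rule.
def should_transmit(error_norm: float, measured_delay_s: float,
                    base_threshold: float = 0.05,
                    delay_gain: float = 2.0) -> bool:
    # Threshold grows with the current (not worst-case) channel delay,
    # so a congested network is sampled less aggressively.
    threshold = base_threshold * (1.0 + delay_gain * measured_delay_s)
    return error_norm > threshold

print(should_transmit(0.08, measured_delay_s=0.01))   # lightly loaded channel
print(should_transmit(0.08, measured_delay_s=0.50))   # congested channel
```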

  10. Aperiodic Linear Networked Control Considering Variable Channel Delays: Application to Robots Coordination

    Directory of Open Access Journals (Sweden)

    Carlos Santos

    2015-05-01

    Full Text Available One of the main challenges in wireless cyber-physical systems is to reduce the load of the communication channel while preserving the control performance. In this way, communication resources are liberated for other applications sharing the channel bandwidth. The main contribution of this work is the design of a remote control solution based on an aperiodic and adaptive triggering mechanism considering the current network delay of multiple robotics units. Working with the actual network delay instead of the maximum one leads to abandoning this conservative assumption, since the triggering condition is fixed depending on the current state of the network. This way, the controller manages the usage of the wireless channel in order to reduce the channel delay and to improve the availability of the communication resources. The communication standard under study is the widespread IEEE 802.11g, whose channel delay is clearly uncertain. First, the adaptive self-triggered control is validated through the TrueTime simulation tool configured for the mentioned WiFi standard. Implementation results applying the aperiodic linear control laws on four P3-DX robots are also included. Both of them demonstrate the advantage of this solution in terms of network accessing and control performance with respect to periodic and non-adaptive self-triggered alternatives.

  11. Parametric motion control of robotic arms: A biologically based approach using neural networks

    Science.gov (United States)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
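
    The parametric idea can be illustrated with a toy prototype torque profile whose amplitude and duration are the parameters a trained network would output per joint. The sinusoidal prototype below is an illustrative stand-in for the stored torque-time functions, not the paper's actual profiles.

```python
import numpy as np

# Hedged sketch of the central-pattern-generator idea: one stored
# prototypical torque-time profile per joint, scaled in amplitude and
# duration by two network-supplied parameters.
def scaled_torque(t, amplitude, duration):
    """Prototype: accelerate then brake over `duration` seconds."""
    s = np.clip(t / duration, 0.0, 1.0)   # normalized movement time
    prototype = np.sin(2 * np.pi * s)     # positive then negative torque lobe
    return amplitude * prototype          # zero torque after the movement ends

t = np.linspace(0.0, 1.2, 7)
print(scaled_torque(t, amplitude=2.5, duration=1.0))  # one joint's command
```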

  12. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

    For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications and networking capabilities and technologies for future human and robotic missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration in the vicinity of Earth, the Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into the interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  13. A Proposal and Evaluation of Security Camera System at a Car Park in an Ad-Hoc Network

    Science.gov (United States)

    Uemura, Wataru; Murata, Masashi

    In recent years, ad-hoc network technology, which consists not of access points and base stations but of wireless nodes, has gained attention. In such a network, it is difficult to maintain the whole data flow when nodes share data, because there are no access points acting as network administrators. This paper proposes a security camera system that consists only of nodes sharing the captured pictures and is robust against data destruction. In broadcasting, the sender node cannot know whether packets have been received by neighboring nodes, because the communication is unidirectional. In our proposed method, the sender node therefore selects a receiver node from among its neighbors, and they communicate with each other, while the other neighboring nodes listen to the packets exchanged between the sender and the receiver. This method thus guarantees that more than one node receives the data in broadcasting. We construct the security camera system using wireless nodes based on the IEEE 802.15.4 specification and show its security performance. Finally, using a simulator, we show its efficiency in a large environment and conclude the paper.

  14. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot

    DEFF Research Database (Denmark)

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin

    2015-01-01

    correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking...... dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural...... mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online...

  15. A dense camera network for cropland (CropInsight) - developing high spatiotemporal resolution crop Leaf Area Index (LAI) maps through network images and novel satellite data

    Science.gov (United States)

    Kimm, H.; Guan, K.; Luo, Y.; Peng, J.; Mascaro, J.; Peng, B.

    2017-12-01

    Monitoring crop growth conditions is of primary interest for crop yield forecasting, food production assessment, and risk management by individual farmers and agribusiness. Despite its importance, there is limited access to field-level crop growth/condition information in the public domain. This scarcity of ground truth data also hampers the use of satellite remote sensing for crop monitoring due to the lack of validation. Here, we introduce a new camera network (CropInsight), designed for the US Corn Belt landscape, to monitor crop phenology, growth, and conditions. Specifically, this network currently includes 40 sites (20 corn and 20 soybean fields) across the southern half of Champaign County, IL (~800 km2). Its wide distribution and automatic operation enable the network to capture spatiotemporal variations of crop growth conditions continuously at the regional scale. At each site, low-maintenance, high-resolution RGB digital cameras are set up with a downward view from 4.5 m height to take continuous images. In this study, we will use these images and novel satellite data to construct a daily LAI map of Champaign County at 30 m spatial resolution. First, we will estimate LAI from the camera images and evaluate it using LAI data collected with an LAI-2200 (LI-COR, Lincoln, NE). Second, we will develop relationships between the camera-based LAI estimation and vegetation indices derived from a newly developed MODIS-Landsat fusion product (daily, 30 m resolution, RGB + NIR + SWIR bands) and Planet Labs' high-resolution satellite data (daily, 5 m, RGB). Finally, we will scale up the above relationships to generate a high spatiotemporal resolution crop LAI map for the whole of Champaign County. The proposed work has the potential to expand to other agro-ecosystems and to the broader US Corn Belt.

  16. Experiments in Neural-Network Control of a Free-Flying Space Robot

    Science.gov (United States)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.

  17. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    Science.gov (United States)

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill levels of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the open problem of tool detection and localization in RAS video understanding, using a strictly computer vision approach and recent advances in deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach is the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two-stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an average precision of 91% and a mean computation time of 0.1 s per test frame, indicate that our approach is superior to conventionally used methods for medical imaging, while also emphasizing the benefits of using an RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

  18. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    Science.gov (United States)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  19. Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.

    Science.gov (United States)

    Herrero, David; Martínez, Humberto

    2011-01-01

    This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
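
    A toy sketch of handling noisy RSSI ranges with fuzzy sets: triangular membership functions over "near/medium/far", defuzzified to a crisp distance by a centroid. The breakpoints, set centroids, and function names are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

# Triangular membership function over [a, b, c] (peak at b).
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_distance(rssi_dbm):
    # Degree of membership in each fuzzy range class.
    mu = np.array([tri(rssi_dbm, -60, -45, -30),   # near
                   tri(rssi_dbm, -75, -60, -45),   # medium
                   tri(rssi_dbm, -95, -80, -60)])  # far
    centroids = np.array([1.0, 4.0, 9.0])          # metres per fuzzy set
    return float((mu * centroids).sum() / (mu.sum() + 1e-9))  # centroid defuzz.

print(fuzzy_distance(-62.0))   # crisp distance estimate in metres
```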

  20. Closed loop interactions between spiking neural network and robotic simulators based on MUSIC and ROS

    Directory of Open Access Journals (Sweden)

    Philipp Weidel

    2016-08-01

    Full Text Available In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robot Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning.

  1. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.
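
    A minimal sketch of the ANN variant of this idea: learn 3D object position directly from stereo pixel coordinates plus joint encoder readings, with no camera calibration. The synthetic geometry, feature layout, and network size below are placeholders for real iCub logs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
pixels = rng.uniform(0, 320, size=(500, 4))     # (uL, vL, uR, vR) in both cameras
joints = rng.uniform(-0.5, 0.5, size=(500, 3))  # neck/torso encoders (rad)
X = np.hstack([pixels, joints])
xyz = np.column_stack([                          # toy ground-truth 3D positions
    0.001 * (pixels[:, 0] - pixels[:, 2]) + joints[:, 0],   # disparity-like term
    0.001 * pixels[:, 1] + joints[:, 1],
    0.3 + 0.1 * joints[:, 2]])

# One hidden layer mapping raw pixels + encoders directly to (x, y, z).
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=4000,
                   random_state=0).fit(X, xyz)
print(net.predict(X[:2]))   # 3D position estimates for two observations
```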

  2. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  3. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  4. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    Science.gov (United States)

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and to detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.
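
    A minimal sketch of the multilayer-perceptron baseline: classify the current gait phase from segment orientations and joint angular velocities. The features, labels, and network size are synthetic stand-ins, and the paper's best-performing model is NARX rather than this MLP.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 8))                        # orientations + angular velocities
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0)    # 4 toy gait phases

# Small MLP predicting one of four gait phases per time step.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000,
                    random_state=0).fit(X, y)
print(clf.predict(X[:5]))                            # predicted phase per sample
```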

  5. Real-time networked control of an industrial robot manipulator via discrete-time second-order sliding modes

    Science.gov (United States)

    Massimiliano Capisani, Luca; Facchinetti, Tullio; Ferrara, Antonella

    2010-08-01

    This article presents the networked control of a robotic anthropomorphic manipulator based on a second-order sliding mode technique, where the control objective is to track a desired trajectory for the manipulator. The adopted control scheme allows an easy and effective distribution of the control algorithm over two networked machines. While the predictability of real-time task execution is achieved by the Soft Hard Real-Time Kernel (S.Ha.R.K.) real-time operating system, the communication is established via a standard Ethernet network. The performance of the control system is evaluated under different experimental system configurations using a COMAU SMART3-S2 industrial robot, and the results are analysed to demonstrate the robustness of the proposed approach against possible network delays, packet losses and unmodelled effects.

  6. WE-DE-BRA-11: A Study of Motion Tracking Accuracy of Robotic Radiosurgery Using a Novel CCD Camera Based End-To-End Test System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L; M Yang, Y [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); Nelson, B [Logos Systems Intl, Scotts Valley, CA (United States)

    2016-06-15

    Purpose: A novel end-to-end test system using a CCD camera and a scintillator-based phantom (XRV-124, Logos Systems Int’l), capable of measuring the beam-by-beam delivery accuracy of robotic radiosurgery (CyberKnife), was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy of CyberKnife. Methods: A QA plan with anterior and lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior-inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4-second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03±0.00 mm, 0.10±0.04 mm, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed submillimeter tracking stability for both regular and irregular breathing patterns; however, tracking errors up to 3.5 mm were observed when a 15-degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery accuracy in phantom, with excellent correlation between target and breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.

  7. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    Science.gov (United States)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
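
    The final classification step described above reduces to a nearest-neighbour search under cosine similarity; a minimal sketch follows, with a random placeholder Gabor feature bank and the six-emotions-plus-neutral label set assumed from the abstract.

        # Cosine-similarity nearest-neighbour classification sketch.
        import numpy as np

        def cosine_nn(query, models, labels):
            M = models / np.linalg.norm(models, axis=1, keepdims=True)
            q = query / np.linalg.norm(query)
            return labels[int(np.argmax(M @ q))]   # highest cosine similarity

        labels = ["anger", "disgust", "fear", "happiness",
                  "sadness", "surprise", "neutral"]
        models = np.random.randn(7, 1024)   # placeholder Gabor feature bank
        print(cosine_nn(np.random.randn(1024), models, labels))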

  8. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    Science.gov (United States)

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
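
    For readers unfamiliar with echo state networks, the sketch below shows the core mechanism the paper relies on: a fixed random reservoir whose leaky states are collected and mapped to targets by a ridge-regression readout (the only trained part). All sizes are placeholders, and random data stands in for the word-sequence inputs and meaning codings.

        # Echo state network (fixed reservoir + trained linear readout) sketch.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res, n_out = 10, 200, 5
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

        def run_reservoir(U, leak=0.3):
            x, states = np.zeros(n_res), []
            for u in U:                                  # leaky-integrator update
                x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
                states.append(x.copy())
            return np.array(states)

        U = rng.standard_normal((500, n_in))     # stand-in word-sequence coding
        Y = rng.standard_normal((500, n_out))    # stand-in meaning targets
        S = run_reservoir(U)
        W_out = np.linalg.solve(S.T @ S + 1e-2 * np.eye(n_res), S.T @ Y)
        print(np.abs(S @ W_out - Y).mean())      # readout training error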

  9. Robotic assisted laparoscopic colectomy.

    LENUS (Irish Health Repository)

    Pandalai, S

    2010-06-01

    Robotic surgery has evolved over the last decade to compensate for limitations in human dexterity. It avoids the need for a trained assistant while decreasing errors such as perforations. The nature of the robotic assistance varies from voice-activated camera control to more elaborate telerobotic systems such as the Zeus and the Da Vinci, where the surgeon controls the robotic arms using a console. Herein, we report the first series of robotic-assisted colectomies in Ireland using a voice-activated camera control system.

  10. A study on the sensitivity of photogrammetric camera calibration and stitching

    CSIR Research Space (South Africa)

    De Villiers, J

    2014-11-01

    Full Text Available This paper presents a detailed simulation study of an automated robotic photogrammetric camera calibration system. The system performance was tested for sensitivity with regard to noise in the robot movement, camera mounting and image processing...

  11. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate joint-angle drift and prevent the occurrence of high joint velocities, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. These two subschemes are then reformulated as two general quadratic programs (QPs), which can be combined into one unified QP. A recurrent neural network (RNN) is thus presented to solve the unified QP problem effectively. Finally, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
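
    At each control step, the minimum-velocity-norm criterion with a task-space equality constraint is a small quadratic program; the sketch below solves it for a single arm directly through its KKT system. Note that the paper instead solves the unified dual-arm QP online with a recurrent neural network, and the Jacobian values here are placeholders.

        # Minimum-velocity-norm QP: min 0.5*||qdot||^2  s.t.  J qdot = xdot.
        import numpy as np

        def min_norm_qdot(J, xdot):
            m, n = J.shape
            K = np.block([[np.eye(n), J.T],
                          [J, np.zeros((m, m))]])   # KKT matrix
            rhs = np.concatenate([np.zeros(n), xdot])
            return np.linalg.solve(K, rhs)[:n]      # joint velocities

        J = np.random.randn(2, 3)                   # placeholder 3-link Jacobian
        print(min_norm_qdot(J, np.array([0.1, 0.0])))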

  12. Fuzzy Mobile-Robot Positioning in Intelligent Spaces Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    David Herrero

    2011-11-01

    Full Text Available This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using Wireless Sensor Networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
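
    A typical first stage of such a system converts signal strength to range with a log-distance path-loss model and grades the result with fuzzy memberships; the sketch below uses assumed model constants and a triangular membership purely for illustration, not the paper's actual rule base.

        # RSSI-to-range inversion plus a triangular fuzzy membership.
        def rssi_to_range(rssi, rssi0=-40.0, n=2.5):
            # rssi0: power at 1 m; n: path-loss exponent (assumed values).
            return 10 ** ((rssi0 - rssi) / (10 * n))

        def tri(x, a, b, c):
            # Triangular membership: 0 at a and c, 1 at b.
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        r = rssi_to_range(-58.0)
        print(round(r, 2), round(tri(r, 0.0, 2.0, 8.0), 2))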

  13. The development of advanced robotics technology in high radiation environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo

    1997-07-01

    For tele-operation using tele-presence in high-radiation environments, the following were developed: stereo-vision target tracking by the centroid method, vergence control of a stereo camera by the motion-vector method, a stereo observation system based on the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images, along with gesture image acquisition by computer vision and the construction of a virtual environment for remote work in nuclear power plants. For intelligent control and monitoring of tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, robot end-effector tracking algorithms based on the centroid method and on neural networks were developed for observation and survey in hazardous environments, and a 3-dimensional information acquisition algorithm based on structured light was developed. For radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of the electric components of the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. The advanced robot technology and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system to realize unmanned nozzle-dam installation/removal in the steam generators of nuclear power plants, eliminating the radiation exposure of workers in extremely hazardous, highly radioactive areas, enhancing task safety, and raising working efficiency. (author). 75 refs., 21 tabs., 15 figs.

  14. The development of advanced robotics technology in high radiation environment

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo.

    1997-07-01

    For tele-operation using tele-presence in high-radiation environments, the following were developed: stereo-vision target tracking by the centroid method, vergence control of a stereo camera by the motion-vector method, a stereo observation system based on the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images, along with gesture image acquisition by computer vision and the construction of a virtual environment for remote work in nuclear power plants. For intelligent control and monitoring of tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, robot end-effector tracking algorithms based on the centroid method and on neural networks were developed for observation and survey in hazardous environments, and a 3-dimensional information acquisition algorithm based on structured light was developed. For radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of the electric components of the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. The advanced robot technology and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system to realize unmanned nozzle-dam installation/removal in the steam generators of nuclear power plants, eliminating the radiation exposure of workers in extremely hazardous, highly radioactive areas, enhancing task safety, and raising working efficiency. (author). 75 refs., 21 tabs., 15 figs.

  15. Real-time camera-based face detection using a modified LAMSTAR neural network system

    Science.gov (United States)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
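
    Each SOM layer in such a system is trained with the classic Kohonen update, pulling the best-matching unit and, through a neighbourhood function, its lattice neighbours toward the input; a one-dimensional-lattice sketch with placeholder sizes follows (the LAMSTAR correlation links are not modeled here).

        # One Kohonen SOM update step (1-D lattice) sketch.
        import numpy as np

        def som_step(W, x, lr=0.1, sigma=1.0):
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            idx = np.arange(len(W))
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            return W + lr * h[:, None] * (x - W)            # pull toward x

        W = np.random.rand(25, 64)    # 25 units, 64-dim sub-window features
        W = som_step(W, np.random.rand(64))
        print(W.shape)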

  16. Muscle emulation with DC motor and neural networks for biped robots.

    Science.gov (United States)

    Serhan, Hayssam; Nasr, Chaiban G; Henaff, Patrick

    2010-08-01

    This paper shows how to use a DC motor and its PID controller to behave analogously to a muscle. A model of the muscle that has been learned by an NNARX (Neural Network Auto-Regressive eXogenous) structure is used. The PID parameters are tuned by an MLP network with a special indirect online learning algorithm. The calculation of the learning algorithm is performed based on a mathematical equation of the DC motor or on a neural network identification of the motor. For each of the two algorithms, the output of the muscle model is used as a reference for the DC motor control loop. The results show that we succeeded in forcing the physical system to behave in the same way as the muscle model within an acceptable margin of error. An implementation in the knees of a simulated biped robot is realized. Simulation compares articular trajectories with and without the muscle emulator and shows that with the muscle emulator, articular trajectories become closer to those of a human being and total power consumption is reduced.
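
    Stripped of the neural tuning, the inner loop is an ordinary discrete PID tracking the muscle-model output; the sketch below uses fixed illustrative gains and a first-order toy plant, whereas the paper adapts the gains online with an MLP.

        # Discrete PID loop sketch (fixed gains; the paper adapts them online).
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.i, self.prev_e = 0.0, 0.0

            def step(self, ref, y):
                e = ref - y
                self.i += e * self.dt
                d = (e - self.prev_e) / self.dt
                self.prev_e = e
                return self.kp * e + self.ki * self.i + self.kd * d

        pid, y = PID(8.0, 2.0, 0.1, 0.01), 0.0
        for _ in range(300):
            u = pid.step(1.0, y)      # reference = muscle-model output
            y += (-y + u) * 0.01      # toy motor plant: dy/dt = -y + u
        print(round(y, 3))            # settles near the reference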

  17. Video-based peer feedback through social networking for robotic surgery simulation: a multicenter randomized controlled trial.

    Science.gov (United States)

    Carter, Stacey C; Chiang, Alexander; Shah, Galaxy; Kwan, Lorna; Montgomery, Jeffrey S; Karam, Amer; Tarnay, Christopher; Guru, Khurshid A; Hu, Jim C

    2015-05-01

    To examine the feasibility and outcomes of video-based peer feedback through social networking to facilitate robotic surgical skill acquisition. The acquisition of surgical skills may be challenging for novel techniques and/or those with prolonged learning curves. Randomized controlled trial involving 41 resident physicians performing the Tubes (Da Vinci; Intuitive Surgical, Sunnyvale, CA) simulator exercise with versus without peer feedback of video-recorded performance through a social networking Web page. Data collected included simulator exercise score, time to completion, and comfort and satisfaction with robotic surgery simulation. There were no baseline differences between the intervention group (n = 20) and controls (n = 21). The intervention group showed improvement in mean scores from session 1 to sessions 2 and 3 (60.7 vs 75.5). Feedback subjects were more comfortable with robotic surgery than controls (90% vs 62%, P = 0.021) and expressed greater satisfaction with the learning experience (100% vs 67%, P = 0.014). Of the intervention subjects, 85% found that peer feedback was useful and 100% found it effective. Video-based peer feedback through social networking appears to be an effective paradigm for surgical education and accelerates the robotic surgery learning curve during simulation.

  18. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
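
    To make the selection idea concrete, the toy rules below weigh each camera's detection score against an ambient-light degree and pick the candidate backed by the stronger rule; the membership shapes and rule base are invented for illustration and are not the paper's FIS.

        # Toy fuzzy rules for choosing the visible-light or FIR candidate.
        def fuzzy_select(vis_score, fir_score, daylight):
            lo = lambda x: max(0.0, 1.0 - x)          # membership "low"
            hi = lambda x: min(1.0, max(0.0, x))      # membership "high"
            w_vis = min(hi(vis_score), hi(daylight))  # rule: bright scene
            w_fir = min(hi(fir_score), lo(daylight))  # rule: dark scene
            return "visible" if w_vis >= w_fir else "FIR"

        print(fuzzy_select(vis_score=0.8, fir_score=0.6, daylight=0.2))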

  19. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery.

    Directory of Open Access Journals (Sweden)

    George Garas

    Full Text Available Existing surgical innovation frameworks suffer from a unifying limitation, their qualitative nature. A rigorous approach to measuring surgical innovation is needed that extends beyond detecting simply publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described. This illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel and quantitative network-based framework that will permit modeling and visualizing innovation diffusion cascades in surgery and measuring virality and value of innovations. Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974-2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real world big data (National Inpatient Sample-NIS®). Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) were found to correlate closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Network analysis offers unique new opportunities for understanding, modeling and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties. The novel surgical innovation metrics

  20. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery.

    Science.gov (United States)

    Garas, George; Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

    Existing surgical innovation frameworks suffer from a unifying limitation, their qualitative nature. A rigorous approach to measuring surgical innovation is needed that extends beyond detecting simply publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described. This illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel and quantitative network-based framework that will permit modeling and visualizing innovation diffusion cascades in surgery and measuring virality and value of innovations. Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974-2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real world big data (National Inpatient Sample-NIS®). Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) were found to correlate closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), 0.624 (p = 0.05), respectively) which in turn matches the ranking based on real world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Network analysis offers unique new opportunities for understanding, modeling and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties. The novel surgical innovation metrics developed may

  1. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    Science.gov (United States)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and is equipped with two arms, two wheels and two claws. It is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed in the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.

  2. An Improved Recurrent Neural Network for Complex-Valued Systems of Linear Equation and Its Application to Robotic Motion Tracking.

    Science.gov (United States)

    Ding, Lei; Xiao, Lin; Liao, Bolin; Lu, Rongbo; Peng, Hua

    2017-01-01

    To obtain the online solution of complex-valued systems of linear equations in the complex domain with higher precision and a higher convergence rate, a new neural network based on the Zhang neural network (ZNN) is investigated in this paper. First, this new neural network for complex-valued systems of linear equations is proposed and theoretically proved to be convergent within finite time. Then, illustrative results show that the new neural network model has higher precision and a higher convergence rate, as compared with the gradient neural network (GNN) model and the ZNN model. Finally, the application of the proposed method to robot control through the complex-valued systems of linear equations is realized, and the simulation results verify the effectiveness and superiority of the new neural network for complex-valued systems of linear equations.
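
    The defining idea of a ZNN is to impose exponentially decaying dynamics on the error e = Ax − b; an Euler-discretized sketch for a constant complex system is shown below, with the gain, step size, and test matrix chosen only for illustration (the paper's finite-time variant uses a different activation).

        # Zhang-dynamics sketch: A x' = -gamma (A x - b), so e' = -gamma e.
        import numpy as np

        def znn_solve(A, b, gamma=50.0, dt=1e-3, steps=2000):
            x = np.zeros(b.shape, dtype=complex)
            Ainv = np.linalg.inv(A)
            for _ in range(steps):
                x += dt * (Ainv @ (-gamma * (A @ x - b)))  # Euler step
            return x

        A = np.array([[2 + 1j, 1], [0, 1 - 2j]])
        b = np.array([1 + 0j, 2j])
        print(np.allclose(znn_solve(A, b), np.linalg.solve(A, b), atol=1e-4))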

  3. Learning for intelligent mobile robots

    Science.gov (United States)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The creative control process is used that is "beyond the adaptive critic." A

  4. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  5. Robotized transcranial magnetic stimulation

    CERN Document Server

    Richter, Lars

    2014-01-01

    Presents new, cutting-edge algorithms for robot/camera calibration, sensor fusion and sensor calibration. Explores the main challenges for accurate coil positioning, such as head motion, and outlines how active robotic motion compensation can outperform hand-held solutions. Analyzes how a robotized system in medicine can alleviate concerns with a patient's safety, and presents a novel fault-tolerant algorithm (FTA) sensor for system safety.

  6. A design philosophy for multi-layer neural networks with applications to robot control

    Science.gov (United States)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of a pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior, which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mappings from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognition in such areas as air defense systems, e.g., target tracking and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  7. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Husan Vokhidov

    2016-12-01

    Full Text Available Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by a visible light camera sensor. Experimental results with six databases (Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset) show that our method outperforms conventional methods.

  8. Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors.

    Science.gov (United States)

    Lee, Kwan Woo; Yoon, Hyo Sik; Song, Jong Min; Park, Kang Ryoung

    2018-03-23

    Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
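
    As a structural illustration only, a minimal two-class Keras CNN over stacked NIR and thermal face channels is sketched below; the layer sizes, input resolution, and two-channel stacking are assumptions, not the architecture reported in the paper.

        # Minimal two-class CNN sketch (aggressive vs. smooth driving).
        import tensorflow as tf

        def face_cnn(input_shape=(224, 224, 2)):   # assumed NIR+thermal stack
            m = tf.keras.Sequential([
                tf.keras.layers.Conv2D(16, 3, activation="relu",
                                       input_shape=input_shape),
                tf.keras.layers.MaxPooling2D(),
                tf.keras.layers.Conv2D(32, 3, activation="relu"),
                tf.keras.layers.GlobalAveragePooling2D(),
                tf.keras.layers.Dense(2, activation="softmax"),
            ])
            m.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy")
            return m

        print(face_cnn().count_params())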

  9. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network.

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-12-16

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by a visible light camera sensor. Experimental results with six databases (Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset) show that our method outperforms conventional methods.

  10. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-01-01

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by a visible light camera sensor. Experimental results with six databases (Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset) show that our method outperforms conventional methods. PMID:27999301

  11. Monitoring landscape-level distribution and migration phenology of raptors using a volunteer camera-trap network

    Science.gov (United States)

    Jachowski, David S.; Katzner, Todd; Rodrigue, Jane L.; Ford, W. Mark

    2015-01-01

    Conservation of animal migratory movements is among the most important issues in wildlife management. To address this need for landscape-scale monitoring of raptor populations, we developed a novel, baited photographic observation network termed the “Appalachian Eagle Monitoring Program” (AEMP). During winter months of 2008–2012, we partnered with professional and citizen scientists in 11 states in the United States to collect approximately 2.5 million images. To our knowledge, this represents the largest such camera-trap effort to date. Analyses of data collected in 2011 and 2012 revealed complex, often species-specific, spatial and temporal patterns in winter raptor movement behavior as well as spatial and temporal resource partitioning between raptor species. Although programmatic advances in data analysis and involvement are needed, the continued growth of the program has the potential to provide a long-term, cost-effective, range-wide monitoring tool for avian and terrestrial scavengers during the winter season. Perhaps most importantly, by relying heavily on citizen scientists, AEMP has the potential to improve long-term interest and support for raptor conservation and serve as a model for raptor conservation programs in other portions of the world.

  12. The extra-atmospheric masses of small meteoric fireballs from the Prairie and the Canadian camera networks.

    Science.gov (United States)

    Popelenskaya, N.

    2007-08-01

    Existing methods for determining the extra-atmospheric masses of small meteoric bodies from observations of their motion in the atmosphere involve a certain arbitrariness, and vigorous attempts to reconcile the divergent results of the various approaches often lead to physically incorrect conclusions. The way out lies in the patient accumulation of estimates and calculations that gradually eliminate the uncertainties. The equations of meteor physics include two dimensionless parameters: an ablation factor and a braking (drag) factor. This work processes observations of small meteors from the Prairie and Canadian networks, finding the values of the two parameters by the method of least squares. The heights at which meteors extinguish, obtained from the conditions of complete destruction or final deceleration using the fitted parameter values, are also considered. For the prevailing number of the meteors considered, deceleration is insignificant, and the calculated extinction heights confirm the suitability of the approximations used here for describing the motion of small meteors. Extra-atmospheric masses are then computed from the braking factor for meteoric bodies of spherical form with the density of ice or of stone. The results confirm the discrepancy between photometric masses and the entry masses obtained from the observed deceleration: in most cases the derived masses are substantially smaller than the photometric masses. Processing of the Prairie and Canadian camera network observations of small meteors has thus shown that the so-called photometric mass does not match the entry mass determined from the observed deceleration, and accepting the photometric value as the mass governing a body's deceleration leads to clearly underestimated densities for the meteoric body material. Further research on refining the interpretation of the observations

  13. Development of a platform to combine sensor networks and home robots to improve fall detection in the home environment.

    Science.gov (United States)

    Della Toffola, Luca; Patel, Shyamal; Chen, Bor-rong; Ozsecen, Yalgin M; Puiatti, Alessandro; Bonato, Paolo

    2011-01-01

    Over the last decade, significant progress has been made in the development of wearable sensor systems for continuous health monitoring in home and community settings. One of the main areas of application for these wearable sensor systems is in detecting emergency events such as falls. Wearable sensors like accelerometers are increasingly being used to monitor the daily activities of individuals at risk of falls, detect emergency events and send alerts to caregivers. However, such systems tend to have a high rate of false alarms, which leads to low compliance levels. Home robots can enable caregivers to quickly make an assessment and intervene if an emergency event is detected. This provides an additional layer for screening out false positives, which can lead to improved compliance. In this paper, we present preliminary work on the development of a fall detection system based on a combination of sensor networks and home robots. The sensor network architecture comprises body-worn sensors and ambient sensors distributed in the environment. We present the software architecture and the conceptual design of the home robotic platform. We also perform a preliminary characterization of the sensor network in terms of latencies and battery lifetime.

  14. Fused Smart Sensor Network for Multi-Axis Forward Kinematics Estimation in Industrial Robots

    OpenAIRE

    Rodriguez-Donate, Carlos; Osornio-Rios, Roque Alfredo; Rivera-Guillen, Jesus Rooney; Romero-Troncoso, Rene de Jesus

    2011-01-01

    Flexible manipulator robots have wide industrial application. Robot performance requires adequately sensing its position and orientation, known as forward kinematics. Commercially available motion controllers use high-resolution optical encoders to sense the position of each joint, which cannot detect some mechanical deformations that decrease the accuracy of the robot position and orientation. To overcome those problems, several sensor fusion methods have been proposed, but at the expense of h...

  15. Mobile robot nonlinear feedback control based on Elman neural network observer

    Directory of Open Access Journals (Sweden)

    Khaled Al-Mutib

    2015-12-01

    Full Text Available This article presents a new approach to controlling a wheeled mobile robot without velocity measurement. The controller developed is based on the kinematic model as well as the dynamics model to take into account the dynamic parameters. These parameters, related to the dynamic equations, are identified using a proposed methodology. Input–output feedback linearization is considered, with a slight modification in the mathematical expressions, to implement the dynamic controller and analyze the nonlinear internal behavior. The developed controllers require sensors to obtain the states needed for the closed-loop system. However, some states may not be available due to the absence of sensors because of cost, weight limitations, reliability, error induction, failure, and so on. In particular, for velocity measurements, the required accuracy may not be achieved in practical applications due to significant errors induced by stochastic or cyclical noise. In this article, an Elman neural network is proposed to work as an observer to estimate the velocity needed to complete the full state required for closed-loop control and to account for all the disturbances and model parameter uncertainties. Different simulations are carried out to demonstrate the feasibility of the approach in tracking different reference trajectories in comparison with other paradigms.
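
    The observer's core is an Elman recurrent layer, i.e. a hidden layer fed back through a context copy of its own previous state; a forward-pass sketch with random placeholder weights is shown below (the real observer is trained, and the input/output sizes are assumptions).

        # Elman recurrent cell sketch used as a velocity observer.
        import numpy as np

        class ElmanCell:
            def __init__(self, n_in, n_hid, n_out, seed=1):
                rng = np.random.default_rng(seed)
                self.Wx = 0.1 * rng.standard_normal((n_hid, n_in))
                self.Wh = 0.1 * rng.standard_normal((n_hid, n_hid))
                self.Wo = 0.1 * rng.standard_normal((n_out, n_hid))
                self.h = np.zeros(n_hid)      # context (previous hidden state)

            def step(self, u):
                self.h = np.tanh(self.Wx @ u + self.Wh @ self.h)
                return self.Wo @ self.h       # estimated wheel velocities

        obs = ElmanCell(n_in=4, n_hid=16, n_out=2)
        print(obs.step(np.array([0.1, -0.2, 0.05, 0.0])))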

  16. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications.

    Science.gov (United States)

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Nonomura, Yutaka; Muroyama, Masanori

    2017-08-28

    Robot tactile sensation can enhance human-robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as "sensor platform LSI") as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated.

  17. Improved Deep Belief Networks (IDBN) Dynamic Model-Based Detection and Mitigation for Targeted Attacks on Heavy-Duty Robots

    Directory of Open Access Journals (Sweden)

    Lianpeng Li

    2018-04-01

    Full Text Available In recent years, robots, especially heavy-duty robots, have become the hardest-hit targets of attacks. These attacks come from both the cyber domain and the physical domain. In order to improve the security of heavy-duty robots, this paper proposes a detection and mitigation mechanism based on improved deep belief networks (IDBN) and a dynamic model. The detection mechanism consists of two parts: (1) IDBN security checks, which can detect targeted attacks from the cyber domain; (2) dynamic-model-based security detection, used to detect targeted attacks which could possibly lead to physical-domain damage. The mitigation mechanism is built on the detection mechanism and can mitigate transient and discontinuous attacks. Moreover, a test platform was established to carry out a performance evaluation of the proposed mechanism. The results show that the detection accuracy of IDBN for cyber-domain attacks reaches 96.2%, and the detection accuracy for attacks on physical-domain control commands reaches 94%. The performance evaluation test verified the reliability and high efficiency of the proposed detection and mitigation mechanism for heavy-duty robots.

  18. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    Science.gov (United States)

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  19. Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery

    Science.gov (United States)

    Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia

    2016-03-01

    The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose different constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches related to force estimation.
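
    A common instance of the pre-processing step discussed above is PCA; the sketch below keeps the components explaining 95% of the variance before the reduced features would be fed to the recurrent network (the feature matrix and sizes are placeholders, and the paper does not necessarily use PCA specifically).

        # PCA dimensionality reduction ahead of the deep network.
        import numpy as np
        from sklearn.decomposition import PCA

        X = np.random.randn(5000, 240)      # placeholder per-frame features
        pca = PCA(n_components=0.95)        # keep 95% of the variance
        Xr = pca.fit_transform(X)
        print(X.shape[1], "->", Xr.shape[1], "features for the LSTM-RNN")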

  20. Structured Tracking for Safety, Security, and Privacy: Algorithms for Fusing Noisy Estimates from Sensor, Robot, and Camera Networks

    Science.gov (United States)

    2009-07-23

    classification. Typical region-based approaches explore applying features like Haar wavelets, which generate statistics about regions of pixels [...] to evaluate performance. We experimented in our lab, where we could control lighting conditions and could explicitly set up pathological examples. We then

  1. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide-field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12 mag range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17 mag range from the all-sky APASS catalog. Such measurements provide good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of target against standard fields are required, monitoring measurements can be used to select truly photometric periods, when accurate calibrations can be automatically scheduled and performed.
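
    The zero-point measurement reduces to comparing instrumental magnitudes of matched calibrators with their catalog values; a minimal sketch with made-up numbers follows, using a robust median as a stand-in for whatever fit the pipeline actually applies.

        # Per-image photometric zero point from matched calibrators.
        import numpy as np

        def zero_point(m_catalog, m_instrumental):
            # Median of (catalog - instrumental) resists outliers.
            return np.median(np.asarray(m_catalog) - np.asarray(m_instrumental))

        m_cat = [11.20, 12.05, 10.87, 13.40]    # e.g. catalog magnitudes
        m_inst = [-9.81, -8.93, -10.12, -7.55]  # -2.5*log10(counts/s)
        zp = zero_point(m_cat, m_inst)
        print(round(zp, 2))                     # calibrated m = m_inst + zp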

  2. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    Science.gov (United States)

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  3. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions

  4. Autonomous military robotics

    CERN Document Server

    Nath, Vishnu

    2014-01-01

    This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the object

  5. Technology of disaster response robot and issues

    International Nuclear Information System (INIS)

    Tadokoro, Satoshi

    2013-01-01

    The needs, functional structure, and abilities of disaster response robots are described. Robots are classified by locomotion mode, such as Unmanned Ground Vehicle (UGV), legged robots, exoskeleton, Unmanned Aerial Vehicle (UAV), wall-climbing robots, and robots for narrow spaces. Quince, a disaster response robot, collected the first information inside the buildings of the Fukushima Daiichi Nuclear Power Station. The functions of rescue robots and the technical problems they face under disaster conditions, the shapes and characteristics of robots and their TRL, PackBot, Pelican, Quince, the scope camera, and the three-dimensional map made by Quince are illustrated. (S.Y.)

  6. Contextual Student Learning through Authentic Asteroid Research Projects using a Robotic Telescope Network

    Science.gov (United States)

    Hoette, Vivian L.; Puckett, Andrew W.; Linder, Tyler R.; Heatherly, Sue Ann; Rector, Travis A.; Haislip, Joshua B.; Meredith, Kate; Caughey, Austin L.; Brown, Johnny E.; McCarty, Cameron B.; Whitmore, Kevin T.

    2015-11-01

    Skynet is a worldwide robotic telescope network operated by the University of North Carolina at Chapel Hill with active observing sites on 3 continents. The queue-based observation request system is simple enough to be used by middle school students, but powerful enough to supply data for research scientists. The Skynet Junior Scholars program, funded by the NSF, has teamed up with professional astronomers to engage students from middle school to undergraduates in authentic research projects, from target selection through image analysis and publication of results. Asteroid research is a particularly fruitful area for youth collaboration that reinforces STEM education standards and can allow students to make real contributions to scientific knowledge, e.g., orbit refinement through astrometric submissions to the Minor Planet Center. We have created a set of projects for youth to: 1. Image an asteroid, make a movie, and post it to a gallery; 2. Measure the asteroid’s apparent motion using the Afterglow online image processor; and 3. Image asteroids from two or more telescopes simultaneously to demonstrate parallax. The apparent motion and parallax projects allow students to estimate the distance to their asteroid, as if they were the discoverer of a brand new object in the solar system. Older students may take on advanced projects, such as analyzing uncertainties in asteroid orbital parameters; studying impact probabilities of known objects; observing time-sensitive targets such as Near Earth Asteroids; and even discovering brand new objects in the solar system.Images are acquired from among seven Skynet telescopes in North Carolina, California, Wisconsin, Canada, Australia, and Chile, as well as collaborating observatories such as WestRock in Columbus, Georgia; Stone Edge in El Verano, California; and Astronomical Research Institute in Westfield, Illinois.
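
    As a back-of-the-envelope illustration of the parallax project above, the distance estimate reduces to dividing the telescope baseline by the measured angular shift; a minimal sketch in Python, where the function name and example numbers are illustrative rather than part of the curriculum materials:

        import math

        def parallax_distance_km(baseline_km, shift_arcsec):
            """Estimate target distance from simultaneous two-site imaging.

            A target at distance d observed from two sites separated by a
            baseline b appears shifted by an angle p against the distant
            background stars, so d ~ b / tan(p) for small angles.
            """
            shift_rad = math.radians(shift_arcsec / 3600.0)
            return baseline_km / math.tan(shift_rad)

        # e.g. ~4000 km between sites, 7 arcsec apparent shift
        print(f"{parallax_distance_km(4000, 7.0):,.0f} km")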

  7. Orbit Refinement of Asteroids and Comets Using a Robotic Telescope Network

    Science.gov (United States)

    Lantz Caughey, Austin; Brown, Johnny; Puckett, Andrew W.; Hoette, Vivian L.; Johnson, Michael; McCarty, Cameron B.; Whitmore, Kevin; UNC-Chapel Hill SKYNET Team

    2016-01-01

    We report on a multi-semester project to refine the orbits of asteroids and comets in our Solar System. One of the newest fields of research for undergraduate Astrophysics students at Columbus State University is that of asteroid astrometry. By measuring the positions of an asteroid in a set of images, we can reduce the overall uncertainty in the accepted orbital parameters of that object. These measurements, using our WestRock Observatory (WRO) and several other telescopes around the world, are being published through the Minor Planet Center (MPC) and benefit the global community. Three different methods are used to obtain these observations. First, we use our own 24-inch telescope at WRO, located at CSU's Coca-Cola Space Science Center in downtown Columbus, Georgia. Second, we have access to data from the 20-inch telescope at Stone Edge Observatory in El Verano, California. Finally, we may request images remotely using Skynet, an online worldwide network of robotic telescopes. Our primary and long-time collaborator on Skynet has been the "41-inch" reflecting telescope at Yerkes Observatory in Williams Bay, Wisconsin. Thus far, we have used these various telescopes to refine the orbits of more than 15 asteroids and comets. We have also confirmed the resulting reduction in orbit-model uncertainties using Monte Carlo simulations and orbit visualizations, using Find_Orb and OrbitMaster software, respectively. Before any observatory site can be used for official orbit refinement projects, it must first become a trusted source of astrometry data for the MPC. We have therefore obtained Observatory Codes not only for our own WestRock Observatory (W22), but also for 3 Skynet telescopes that we may use in the future: Dark Sky Observatory in Boone, North Carolina (W38); Hume Observatory in Santa Rosa, California (U54); and Athabasca University Geophysical Observatory in Athabasca, Alberta, Canada (U96).
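
    Find_Orb and OrbitMaster are not scripted here; the sketch below only illustrates the Monte Carlo idea behind the uncertainty check, assuming Gaussian astrometric noise and substituting a simple linear-motion fit for a full orbit model (all names and numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def fitted_rate_spread(n_obs, sigma_arcsec=0.3, n_trials=2000):
            """Monte Carlo spread of a fitted motion rate from n_obs
            astrometric points with Gaussian noise (a toy stand-in for
            full orbit fitting)."""
            t = np.linspace(0, 10, n_obs)        # observation epochs, days
            true_rate = 25.0                     # arcsec/day
            rates = []
            for _ in range(n_trials):
                ra = true_rate * t + rng.normal(0, sigma_arcsec, n_obs)
                rates.append(np.polyfit(t, ra, 1)[0])
            return np.std(rates)

        # More observations -> smaller spread in the fitted parameter
        print(fitted_rate_spread(5), fitted_rate_spread(25))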

  8. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    Science.gov (United States)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    Robust robot navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum and art-gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot, imitating the supervisor's motions, to reach the goal location safely and position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, using a low-cost camera to track the target picture and a laser range finder for safe navigation. Results show that the neural controller trained with the Conjugate Gradient Backpropagation algorithm gives a robust response to guide the mobile robot accurately to the goal position.
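
    The paper's network is not reproduced here; the following toy sketch only shows the general idea of training a small feed-forward controller to imitate recorded supervisor commands with SciPy's conjugate-gradient optimizer (the features, targets, and layer sizes are synthetic placeholders):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))                # e.g. camera/laser features
        Y = np.tanh(X @ np.array([0.5, -1.0, 0.3]))  # recorded supervisor commands

        def unpack(p):
            W1 = p[:15].reshape(3, 5); b1 = p[15:20]
            W2 = p[20:25]; b2 = p[25]
            return W1, b1, W2, b2

        def loss(p):
            W1, b1, W2, b2 = unpack(p)
            h = np.tanh(X @ W1 + b1)      # one hidden layer of 5 units
            pred = h @ W2 + b2
            return np.mean((pred - Y) ** 2)

        res = minimize(loss, rng.normal(scale=0.1, size=26), method="CG")
        print("imitation MSE:", res.fun)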

  9. Consensus Formation Control for a Class of Networked Multiple Mobile Robot Systems

    Directory of Open Access Journals (Sweden)

    Long Sheng

    2012-01-01

    for investigating sufficient conditions for linear control gain design for the system with constant time delays. Simulation results as well as experimental studies on Pioneer 3-series mobile robots are shown to verify the effectiveness of the proposed approach.
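
    The paper's delay-dependent gain conditions are not reproduced here; the sketch below shows the standard consensus-based formation law that such conditions are designed to stabilize, in the delay-free case with a constant gain (the graph, offsets, and gain are illustrative):

        import numpy as np

        # Ring communication graph for 4 robots (adjacency matrix)
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], float)

        d = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)  # square offsets
        x = np.random.rand(4, 2) * 5                           # initial positions
        k, dt = 0.5, 0.05

        for _ in range(600):
            u = np.zeros_like(x)
            for i in range(4):
                for j in range(4):
                    if A[i, j]:
                        # steer the formation error toward consensus
                        u[i] += (x[j] - d[j]) - (x[i] - d[i])
            x += dt * k * u

        print(np.round(x - x[0], 2))  # relative positions approach d - d[0]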

  10. Wireless robot teleoperation via internet using IPv6 over a bluetooth personal area network

    Directory of Open Access Journals (Sweden)

    Carlos Araque Rodríguez

    2010-01-01

    Full Text Available This article presents the design, construction, and testing of a system that allows the manipulation and visualization of the Microbot Teachmover robot using a Bluetooth wireless connection with an IPv6 address, offering the possibility of operating the robot from different scenarios: from a mobile device in the same piconet as the robot; from a computer in the same piconet as the robot; and from a computer connected to the Internet with an IPv6 address.
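
    A minimal sketch of the IPv6 transport such a teleoperation system needs, using Python's standard socket module with AF_INET6; the address, port, and the Teachmover-style command frame are assumptions for illustration, not the article's actual protocol:

        import socket

        # Send one movement command to the robot controller over IPv6.
        # Address, port, and command encoding are illustrative.
        ROBOT_ADDR = ("2001:db8::10", 5555)

        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.connect(ROBOT_ADDR)
            s.sendall(b"@STEP 1,0,0,120,0,0,0\r")  # hypothetical command frame
            reply = s.recv(64)
            print(reply)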

  11. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  12. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thicknesses and positions of the collimators are derived mathematically. (U.K.)

  14. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing the study of very fast effects.

  15. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
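
    The camera_calibration package itself is not reproduced here; the sketch below shows the underlying OpenCV intrinsic calibration from checkerboard images that the package wraps (the pattern size and image path are assumptions). Re-running it on different image sets is one way to observe the parameter variance the thesis investigates:

        import glob
        import cv2
        import numpy as np

        PATTERN = (9, 6)                  # inner corners of the checkerboard
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_pts, img_pts = [], []
        for path in glob.glob("calib/*.png"):       # illustrative path
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            ok, corners = cv2.findChessboardCorners(gray, PATTERN)
            if ok:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
        print("reprojection RMS:", rms)
        print("camera matrix:\n", K)  # compare K across repeated runs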

  16. Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products

    Science.gov (United States)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.

    2017-12-01

    The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling for more than two decades. During this time, the AERONET AOD database utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near-real-time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter, and for computing the radiation field in the UV (e.g., 380 nm) and the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3 and NO2 absorption, to be consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow-covered surfaces. The V3 inversion products use the same data-quality assurance criteria as V2 inversions (Holben et al., 2006). The entire AERONET V3

  17. Robot-assisted general surgery.

    Science.gov (United States)

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  18. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model under a no-frame-independency assumption. The experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  19. An adaptive PID like controller using mix locally recurrent neural network for robotic manipulator with variable payload.

    Science.gov (United States)

    Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P

    2016-05-01

    Being complex, non-linear, and coupled, a robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes, which act as proportional, integral and derivative nodes. The gains of the mix locally recurrent neural network based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assumed randomly. A sequential learning based least squares algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of a two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using the Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA-optimized NNPID (OPTNNPID) controller and a CSA-optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
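
    The MLRNNPID controller is not reproduced here; the sketch below shows the discrete PID law that its three hidden nodes mimic, with the gains exposed so that an outer learning rule (such as the paper's sequential least-squares adaptation) could update them online; the plant model is a toy placeholder:

        class AdaptivePID:
            """Discrete PID whose gains can be updated online, e.g. by an
            outer learning rule (a toy stand-in for the MLRNNPID idea)."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, err):
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                # proportional, integral and derivative "nodes"
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        pid = AdaptivePID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
        x = 0.0
        for _ in range(500):
            u = pid.step(1.0 - x)   # track a unit setpoint
            x += 0.01 * u           # toy first-order joint model
        print(round(x, 3))          # approaches 1.0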

  20. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  1. Design and Implementation of Fire Extinguisher Robot with Robotic Arm

    Directory of Open Access Journals (Sweden)

    Memon Abdul Waris

    2018-01-01

    Full Text Available A robot is a device which performs human tasks or behaves like a human being. It needs expert skills and complex programming to design. To design a fire-fighter robot, many sensors and motors were used. The user first sends the robot to the affected area to get a live image of the field from a mobile camera, streamed over Wi-Fi to a laptop through an IP-camera application. If any signs of fire show in the image, the user directs the robot in that particular direction for confirmation. A fire sensor and a temperature sensor detect and measure the readings, and after confirmation the robot sprinkles water on the affected field. During the extinguishing process, if any obstacle comes between the prototype and the affected area, the ultrasonic sensor detects the obstacle, and in response the robotic arm picks and places the obstacle in another location to clear the path. Meanwhile, if any poisonous gas is present, the gas sensor detects it and indicates this by sounding an alarm.

  2. Exploratorium: Robots.

    Science.gov (United States)

    Brand, Judith, Ed.

    2002-01-01

    This issue of Exploratorium Magazine focuses on the topic robotics. It explains how to make a vibrating robotic bug and features articles on robots. Contents include: (1) "Where Robot Mice and Robot Men Run Round in Robot Towns" (Ray Bradbury); (2) "Robots at Work" (Jake Widman); (3) "Make a Vibrating Robotic Bug" (Modesto Tamez); (4) "The Robot…

  3. Surface temperature monitoring by integrating satellite data and ground thermal camera network on Solfatara Crater in Campi Flegrei volcanic area (Italy)

    Science.gov (United States)

    Buongiorno, M. F.; Musacchio, M.; Silvestri, M.; Vilardo, G.; Sansivero, F.; caPUTO, T.; bellucci Sessa, E.; Pieri, D. C.

    2017-12-01

    Current satellite missions providing imagery in the TIR region at high spatial resolution offer the possibility to estimate surface temperature in volcanic areas, contributing to the understanding of ongoing phenomena and to mitigating volcanic risk where populations are exposed. The Campi Flegrei volcanic area (Italy) is part of the Neapolitan volcanic district and is monitored by INGV ground networks, including thermal cameras. TIRS on LANDSAT and ASTER on NASA-TERRA provide thermal IR channels to monitor the evolution of surface temperatures over the Campi Flegrei area. The spatial resolution of the TIR data is 100 m for LANDSAT8 and 90 m for ASTER; the temporal resolution is 16 days for both satellites. The TIRNet network has been developed by INGV for long-term volcanic surveillance of the Flegrei Fields through the acquisition of thermal infrared images. The system currently comprises 5 permanent stations equipped with FLIR A645SC thermal cameras with a 640x480-resolution IR sensor. To improve the systematic use of satellite data in the monitoring procedures of volcanic observatories, a suitable integration and validation strategy is needed, also considering that current satellite missions do not provide TIR data with optimal characteristics for observing small thermal anomalies that may indicate changes in volcanic activity. The presented procedure has been applied to the analysis of Solfatara Crater and is based on 2 different steps: 1) parallel processing chains to produce ground temperature data from both satellite and ground cameras; 2) data integration and comparison. The ground camera images generally correspond to views of portions of the crater slopes characterized by significant thermal anomalies due to fumarole fields. In order to compare the satellite and ground cameras, it has been necessary to take the observation geometries into account. All thermal images of the TIRNet have been georeferenced to the UTM WGS84 system, and a regular grid of 30x30 meters has been

  4. Perspectives of construction robots

    Science.gov (United States)

    Stepanov, M. A.; Gridchin, A. M.

    2018-03-01

    This article is an overview of construction robot features, based on formulating a list of requirements for different types of construction robots in relation to different types of construction work. It describes a variety of construction works and ways to design new robots or to adapt existing robot designs for a construction process. It also shows the prospects of AI-controlled machines and the implementation of automated control systems and networks on construction sites. Finally, different ways to develop and improve the construction process through wide robotization, the creation of data communication networks and, in perspective, the establishment of a fully AI-controlled construction complex are formulated, including the ecological aspect.

  5. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of a robotic manipulator are controlled with the help of visual feedback by a new method that reduces position errors by an order of magnitude. The robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of the workspace and two mounted on the wrist of an auxiliary robot arm. The stereoscopic cameras on the arm give close-up views of the object and end effector. The cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  6. Teleautonomous Control on Rescue Robot Prototype

    Directory of Open Access Journals (Sweden)

    Son Kuswadi

    2012-12-01

    Full Text Available Robot applications in disaster areas can help responder teams save victims. In order to complete its task, a robot must have a flexible movement mechanism so it can pass through cluttered areas. Passive linkages can be used on the robot chassis to give the robot this flexibility. In physical experiments, the robot succeeded in moving through gravel and over a 5 cm obstacle. A rescue robot also has specialized control needs: it must be controllable remotely, and it must also be able to move autonomously. The teleautonomous control method is a combination of those two methods. It can be concluded from the experiments that, in teleoperation mode, the operator must get used to seeing the environment through the robot's camera, while in autonomous mode the robot succeeded in avoiding obstacles and searching for the target based on sensor readings and the controller program. In teleautonomous mode, the robot can change control mode, using Bluetooth communication for data transfer, so robot control becomes more flexible.

  7. Development Of A Mobile Robot As A Test Bed For Tele-Presentation

    Directory of Open Access Journals (Sweden)

    Diogenes Armando D. Pascua

    2016-01-01

    Full Text Available In this paper a human-sized tracked-wheel robot with a large payload capacity for tele-presentation is presented. The robot is equipped with different sensors for obstacle avoidance and localization. A high-definition web camera installed atop a pan-and-tilt assembly provides remote-environment feedback to users. An LCD monitor provides the visual display of the operator in the remote environment using the standard Skype teleconferencing software. Remote control is done via the internet through the free TeamViewer VNC remote-desktop software. Moreover, this paper presents the design details, fabrication and evaluation of the individual components. Core mobile-robot movement and navigational controls were developed and tested. The effectiveness of the mobile robot as a test bed for tele-presentation was evaluated and analyzed by way of its real-time response and the time-delay effects of the network.


  9. Supervised Autonomy for Exploration and Mobile Manipulation in Rough Terrain with a Centaur-like Robot

    Directory of Open Access Journals (Sweden)

    Max Schwarz

    2016-10-01

    Full Text Available Planetary exploration scenarios illustrate the need for autonomous robots that are capable of operating in unknown environments without direct human interaction. At the DARPA Robotics Challenge, we demonstrated that our Centaur-like mobile manipulation robot Momaro can solve complex tasks when teleoperated. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for Momaro. Our robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. Based on the generated height map, we assess drivability, plan navigation paths, and execute them using the omnidirectional drive. Using its four legs, the robot adapts to the slope of the terrain. Momaro perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, on-the-fly reconfiguration, and teleoperation, we developed a ground station with suitable operator interfaces. To handle network communication interruptions and latencies between robot and ground station, we implemented a robust network layer for the ROS middleware. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015. We also discuss the lessons learned from this demonstration.
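
    Momaro's actual planner is not reproduced here; the sketch below illustrates one simple way to assess drivability from a height map, as described above, by thresholding the local slope (cell size and threshold are illustrative):

        import numpy as np

        def drivable_mask(height_map, cell_size_m, max_slope_deg=20.0):
            """Mark cells whose local terrain slope is below a threshold.

            height_map: 2-D array of terrain heights (metres).
            """
            gy, gx = np.gradient(height_map, cell_size_m)
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))
            return slope <= max_slope_deg

        terrain = np.zeros((50, 50))
        terrain[20:30, 20:30] = 1.0      # a sharp 1 m step in flat ground
        mask = drivable_mask(terrain, cell_size_m=0.1)
        print(mask.mean())               # fraction of drivable cells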

  10. Deep learning with convolutional neural networks: a resource for the control of robotic prosthetic hands via electromyography

    Directory of Open Access Journals (Sweden)

    Manfredo Atzori

    2016-09-01

    Full Text Available Motivation: Natural control methods based on surface electromyography and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are at best capable of offering natural control for only a few movements. Objective: In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its capabilities for the natural control of robotic hands via surface electromyography by providing a baseline on a large number of intact and amputated subjects. Methods: We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 hand-amputated subjects. The simple architecture of the neural network allowed us to run several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. Results: The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. Significance: The results show that convolutional neural networks with a very simple architecture can produce accuracy comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of surface electromyography data. Finally, the results suggest that deeper and more complex networks may increase dexterous control robustness, thus contributing to bridging the gap between the market and scientific research
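
    A minimal sketch in the spirit of the simple architecture the paper describes, written here in PyTorch (our choice of framework, not necessarily the paper's); the channel count, window length, and number of classes are illustrative, not the paper's exact configuration:

        import torch
        import torch.nn as nn

        # Minimal 1-D CNN for windowed sEMG classification:
        # 10 electrode channels, 200-sample windows, 50 movement classes.
        model = nn.Sequential(
            nn.Conv1d(10, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 50),
        )

        x = torch.randn(8, 10, 200)   # a batch of sEMG windows
        logits = model(x)
        print(logits.shape)           # torch.Size([8, 50])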

  11. Robot Actors, Robot Dramaturgies

    DEFF Research Database (Denmark)

    Jochum, Elizabeth

    This paper considers the use of tele-operated robots in live performance. Robots and performance have long been linked, from the working androids and automata staged in popular exhibitions during the nineteenth century and the robots featured at Cybernetic Serendipity (1968) and the World Expo...

  12. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. Given image sequences of a real-world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computational trouble spots beforehand and designing reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
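
    The key invariance the method rests on is easy to check numerically: the visual angle between two projection rays is unchanged by any camera rotation. A short sketch, where the random rotation is built from a QR decomposition:

        import numpy as np

        def angle(u, v):
            u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
            return np.arccos(np.clip(u @ v, -1.0, 1.0))

        rng = np.random.default_rng(3)
        r1, r2 = rng.normal(size=3), rng.normal(size=3)  # two projection rays

        # Random rotation matrix via QR decomposition
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(Q) < 0:
            Q[:, 0] *= -1  # ensure a proper rotation (det = +1)

        print(angle(r1, r2), angle(Q @ r1, Q @ r2))  # identical values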

  13. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments

    Directory of Open Access Journals (Sweden)

    Jaime Duque Domingo

    2017-10-01

    Full Text Available This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smart-phones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.

  14. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple 1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images; a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs; an algorithm to recover a generic motion between two 1-d views that does not require a third view; and a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  15. Intelligent robotic tracker

    Science.gov (United States)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but can also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section, where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is described, with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year, and more realistic demonstrations being planned are discussed.

  16. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  17. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510


  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  20. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera, and in particular to an apparatus for determining the position coordinates of a light-pulse-emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising: at least three photomultipliers positioned to receive light emitted by the anode screen on their photocathodes; circuit means for processing the output voltages of the photomultipliers to derive voltages representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum of the output voltages of the photomultipliers, for gating the output of the processing circuit when the amplitude of that sum lies in a predetermined range; and means for compensating the distortion introduced in the image on the anode screen
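
    A minimal sketch of the computation such circuitry performs: the event position as the pulse-height-weighted centroid of the photomultiplier positions, gated by a window on the summed signal (tube layout and window limits are illustrative):

        import numpy as np

        pmt_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.87]])  # 3 PMT centres

        def event_position(signals, lo=0.8, hi=1.2):
            """Weighted-centroid position, gated on total pulse height."""
            signals = np.asarray(signals, float)
            total = signals.sum()
            if not (lo <= total <= hi):   # energy window (discriminator)
                return None
            return (signals[:, None] * pmt_xy).sum(axis=0) / total

        print(event_position([0.2, 0.3, 0.5]))  # -> position near PMT 3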

  1. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  2. A new approach to investigate an eruptive paroxysmal sequence using camera and strainmeter networks: Lessons from the 3-5 December 2015 activity at Etna volcano

    Science.gov (United States)

    Bonaccorso, A.; Calvari, S.

    2017-10-01

    Explosive sequences are quite common at basaltic and andesitic volcanoes worldwide. Studies aimed at short-term forecasting are usually based on seismic and ground deformation measurements, which can be used to constrain the source region and quantify the magma volume involved in the eruptive process. However, for single episodes of explosive sequences, integrations of camera remote sensing and geophysical data are scant in the literature, and the total volume of pyroclastic products is not determined. In this study, we calculate eruption parameters for four powerful lava fountains occurring at the main and oldest Mt. Etna summit crater, Voragine, between 3 and 5 December 2015. These episodes produced impressive eruptive columns and plume clouds, causing lapilli and ash fallout more than 100 km away. We analyse these paroxysmal events by integrating the images recorded by a network of monitoring cameras and the signals from three high-precision borehole strainmeters. From the camera images we calculated the total erupted volume of fluids (gas plus pyroclastics), inferring amounts from 1.9 × 10⁹ m³ (first event) to 0.86 × 10⁹ m³ (third event). Strain changes recorded during the first and most powerful event were used to constrain the depth of the source. The ratios of strain changes recorded at two stations during the four lava fountains were used to constrain the pyroclastic fraction of each eruptive event. The results revealed that the explosive sequence was characterized by a decreasing trend of erupted pyroclastics with time, going from 41% (first event) to 13% (fourth event) of the total erupted pyroclastic volume. Moreover, the fluid/pyroclastic volume ratio decreased markedly in the fourth and last event. To the best of our knowledge, this is the first time ever that erupted volumes of both fluids and pyroclastics have been estimated for an explosive sequence from a monitoring system using permanent cameras and high-precision strainmeters. During future

  3. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the current state of engineering is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU)

  4. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), the presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape in such conditions, enabling applications such as robotic inspection and assembly systems.
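
    For reference, the basic relation behind the time-of-flight cameras mentioned above is d = c·t/2 for a round-trip time t; a one-line sketch with an illustrative 20 ns pulse:

        C = 299_792_458.0               # speed of light, m/s

        def tof_distance_m(round_trip_s):
            """Distance from a time-of-flight measurement: d = c * t / 2."""
            return C * round_trip_s / 2.0

        print(tof_distance_m(20e-9))    # 20 ns round trip -> ~3.0 m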

  5. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by using intelligent systems. One can say that intelligent systems are to cell units what intelligent robots are to body components; the two technologies have progressed in synchrony. Leveraging robotics and intelligent systems, applications span boundlessly from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, and logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners methods to advance intelligent systems and to apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  6. Intelligent Control of Welding Gun Pose for Pipeline Welding Robot Based on Improved Radial Basis Function Network and Expert System

    Directory of Open Access Journals (Sweden)

    Jingwen Tian

    2013-02-01

    Full Text Available Since the control system of the welding gun pose in whole-position welding is complicated and nonlinear, an intelligent control system of welding gun pose for a pipeline welding robot based on an improved radial basis function neural network (IRBFNN) and expert system (ES) is presented in this paper. The structure of the IRBFNN is constructed and an improved genetic algorithm is adopted to optimize the network structure. This control system makes full use of the characteristics of the IRBFNN and the ES. The ADXRS300 micro-mechanical gyro is used as the welding gun position sensor in this system. When the welding gun position is obtained, an appropriate pitch angle can be obtained through expert knowledge and the numeric reasoning capacity of the IRBFNN. An ARM is used as the controller to drive the welding gun pitch-angle step motor in order to adjust the pitch angle of the welding gun in real time. The experimental results show that the intelligent control system of the welding gun pose using the IRBFNN and expert system is feasible and enhances the welding quality. This system has wide prospects for application.
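
    The paper's improved RBF network and expert system are not reproduced here; the sketch below shows the plain Gaussian RBF layer that such a network builds on (centres, widths, and weights are illustrative):

        import numpy as np

        def rbf_forward(x, centres, widths, weights, bias):
            """y = w . exp(-||x - c_i||^2 / (2 s_i^2)) + b"""
            d2 = ((centres - x) ** 2).sum(axis=1)
            phi = np.exp(-d2 / (2.0 * widths ** 2))
            return phi @ weights + bias

        centres = np.array([[0.0], [0.5], [1.0]])  # illustrative 1-D centres
        widths = np.array([0.3, 0.3, 0.3])
        weights = np.array([0.2, -0.1, 0.4])
        print(rbf_forward(np.array([0.4]), centres, widths, weights, 0.05))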


  8. Using strategic movement to calibrate a neural compass: a spiking network for tracking head direction in rats and robots.

    Science.gov (United States)

    Stratton, Peter; Milford, Michael; Wyeth, Gordon; Wiles, Janet

    2011-01-01

    The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements are able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that 'grounding' of modelled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.

  9. Test and Evaluation of a Prototyped Sensor-Camera Network for Persistent Intelligence, Surveillance, and Reconnaissance in Support of Tactical Coalition Networking Environments

    Science.gov (United States)

    2006-06-01

    networks is home automation. Wireless sensor networks can be employed in a home environment similar to the ways they are deployed in environmental...and industrial settings. Home automation provides increased control of home appliances and security. Climate control and security systems are the most common types of home automation applications. However, as technology has advanced, new applications are emerging. For example

  10. Novel robotic systems and future directions

    Directory of Open Access Journals (Sweden)

    Ki Don Chang

    2018-01-01

    Full Text Available Robot-assistance is increasingly used in surgical practice. We performed a nonsystematic literature review using PubMed/MEDLINE and Google for robotic surgical systems and compiled information on their current status. We also used this information to predict the future direction of robotic systems, based on the various systems currently being developed. Currently, various modifications are being made to the consoles, robotic arms, cameras, handles and instruments, and other specific functions (haptic feedback and eye tracking) that make up robotic surgery systems. In addition, research on automated surgery is actively being carried out. The development of future robots will be directed at decreasing the number of incisions and improving precision. With the advent of artificial intelligence, a more practical form of robotic surgery system can be introduced and will ultimately lead to the development of automated robotic surgery systems.

  11. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  12. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple-radioisotope camera on one hand, and between the tube configuration and the scintillator plate on the other. For this purpose, the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means, the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG)

  13. An industrial robot singular trajectories planning based on graphs and neural networks

    Science.gov (United States)

    Łęgowski, Adrian; Niezabitowski, Michał

    2016-06-01

    Singular trajectories are rarely used because of issues during realization. A method of planning trajectories for a given set of points in task space, using graphs and neural networks, is presented. At every desired point the inverse kinematics problem is solved in order to derive all possible solutions. A graph of solutions is built, and the shortest path through it is determined to define the required nodes in joint space. Neural networks are used to define the path between these nodes.
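
    As a rough illustration of the graph stage (a sketch, not the authors' implementation, and with made-up joint data), the following Python fragment builds the layered graph of inverse-kinematics solutions and extracts the joint-space path with minimum total joint motion; the neural-network interpolation between the chosen nodes is omitted:

        import math

        def joint_distance(q1, q2):
            # Euclidean distance in joint space between two configurations.
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(q1, q2)))

        def shortest_ik_path(layers):
            # layers[i] is the list of IK solutions (joint vectors) for waypoint i;
            # dynamic programming over the layered graph finds the cheapest chain.
            costs = [0.0] * len(layers[0])
            back = [[None] * len(layer) for layer in layers]
            for i in range(1, len(layers)):
                new_costs = []
                for j, q in enumerate(layers[i]):
                    best_k = min(range(len(layers[i - 1])),
                                 key=lambda k: costs[k] + joint_distance(layers[i - 1][k], q))
                    new_costs.append(costs[best_k] + joint_distance(layers[i - 1][best_k], q))
                    back[i][j] = best_k
                costs = new_costs
            j = min(range(len(costs)), key=costs.__getitem__)   # cheapest end node
            path = [j]
            for i in range(len(layers) - 1, 0, -1):             # walk back-pointers
                j = back[i][j]
                path.append(j)
            path.reverse()
            return [layers[i][j] for i, j in enumerate(path)]

        # Two waypoints, each with two candidate IK solutions (joint angles in rad):
        layers = [[(0.0, 0.1), (3.1, -0.1)], [(0.2, 0.3), (2.9, 0.2)]]
        print(shortest_ik_path(layers))   # -> [(0.0, 0.1), (0.2, 0.3)]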

  14. The development of advanced robotic technology -The development of advanced robotics for the nuclear industry-

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woong Ki; Park, Soon Yong; Kim, Seung Ho; Kim, Chang Hoi; Hwang, Suk Yeoung; Kim, Byung Soo; Lee, Young Kwang

    1994-07-01

    In this year (the second year of this project), research and development have been carried out to establish the essential key technologies applied to robot systems for the nuclear industry. In the area of robot vision, in order to construct the stereo vision system necessary for tele-operation, a stereo image acquisition camera module and a stereo image displayer have been developed, along with stereo matching and storing programs to analyse stereo images. According to the results of the tele-operation experiment, operation efficiency was enhanced by about 20% by using the stereo vision system. In the area of object recognition, a tele-operated robot system has been constructed to evaluate the performance of the stereo vision system and to develop the vision algorithm to automate the nozzle dam operation. A nuclear fuel rod character recognition system has been developed using a neural network; in the performance evaluation of the recognition system, a 99% recognition rate was achieved. In the area of sensing and intelligent control, temperature distribution has been measured using analysis of the thermal image histogram, an inspection algorithm has been developed to determine whether the state is normal or abnormal, and a fuzzy controller has been developed to control the compact mobile robot designed to move on a block-type path. (Author)

  15. Robot engineering

    International Nuclear Information System (INIS)

    Jung, Seul

    2006-02-01

    This book deals with robot engineering, covering the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential merits and vectors, applications of matrices, analysis of basic vectors, the Denavit-Hartenberg notation, robot kinematics (forward kinematics, inverse kinematics, MATLAB program examples, and motion kinematics), robot kinetics (moment of inertia, centrifugal and Coriolis forces, and the Euler-Lagrange equation), a course plan, and SIMULINK position control of robots.

  16. Robot engineering

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Seul

    2006-02-15

    This book deals with robot engineering, covering the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential merits and vectors, applications of matrices, analysis of basic vectors, the Denavit-Hartenberg notation, robot kinematics (forward kinematics, inverse kinematics, MATLAB program examples, and motion kinematics), robot kinetics (moment of inertia, centrifugal and Coriolis forces, and the Euler-Lagrange equation), a course plan, and SIMULINK position control of robots.

  17. Construction of multi-agent mobile robots control system in the problem of persecution with using a modified reinforcement learning method based on neural networks

    Science.gov (United States)

    Patkin, M. L.; Rogachev, G. N.

    2018-02-01

    A method for constructing a multi-agent control system for mobile robots, based on reinforcement learning with deep neural networks, is considered. The control system is synthesized using a modified Actor-Critic method in which the Actor module is divided into an Action Actor and a Communication Actor, in order to simultaneously control the mobile robots and communicate with partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The functions of the Actors and the Critic are approximated by deep neural networks: the Critic's value function is trained using the TD-error method, and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and a computer simulation of the method was carried out for the control problem of two robots pursuing two goals.
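
    A minimal PyTorch sketch of the split-actor idea (illustrative only; layer sizes and names are assumptions, and the DDPG/TD training loop is omitted): one head emits the robot action, the other emits the real-valued message that partners append to their observation vectors.

        import torch
        import torch.nn as nn

        class SplitActor(nn.Module):
            def __init__(self, obs_dim=16, msg_dim=4, act_dim=2):
                super().__init__()
                # Shared trunk over the observation plus received partner messages.
                self.trunk = nn.Sequential(nn.Linear(obs_dim + msg_dim, 64), nn.ReLU())
                self.action_head = nn.Linear(64, act_dim)   # "Action Actor"
                self.comm_head = nn.Linear(64, msg_dim)     # "Communication Actor"

            def forward(self, obs, incoming_msg):
                h = self.trunk(torch.cat([obs, incoming_msg], dim=-1))
                action = torch.tanh(self.action_head(h))    # bounded control signal
                outgoing_msg = self.comm_head(h)             # message sent to partners
                return action, outgoing_msg

        actor = SplitActor()
        obs = torch.zeros(1, 16)
        msg = torch.zeros(1, 4)
        action, outgoing = actor(obs, msg)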

  18. The Northwest Indiana Robotic Telescope

    Science.gov (United States)

    Slavin, Shawn D.; Rengstorf, A. W.; Aros, J. C.; Segally, W. B.

    2011-01-01

    The Northwest Indiana Robotic (NIRo) Telescope is a remote, automated observing facility recently built by Purdue University Calumet (PUC) at a site in Lowell, IN, approximately 30 miles from the PUC campus. The recently dedicated observatory will be used for broadband and narrowband optical observations by PUC students and faculty, as well as pre-college students through the implementation of standards-based, middle-school modules developed by PUC astronomers and education faculty. The NIRo observatory and its web portal are the central technical elements of a project to improve astronomy education at Purdue Calumet and, more broadly, to improve science education in middle schools of the surrounding region. The NIRo Telescope is a 0.5-meter (20-inch) Ritchey-Chrétien design on a Paramount ME robotic mount, featuring a seven-position filter wheel (UBVRI, Hα, Clear), Peltier (thermoelectrically) cooled CCD camera with 3056 x 3056, square, 12 μm pixels, and off-axis guiding. It provides a coma-free imaging field of 0.5 degrees square, with a plate scale of 0.6 arcseconds per pixel. The observatory has a wireless internet connection, local weather station which publishes data to an internet weather site, and a suite of CCTV security cameras on an IP-based, networked video server. Control of power to every piece of instrumentation is maintained via internet-accessible power distribution units. The telescope can be controlled on-site, or off-site in an attended fashion via an internet connection, but will be used primarily in an unattended mode of automated observation, where queued observations will be scheduled daily from a database of requests. Completed observational data from queued operation will be stored on a campus-based server, which also runs the web portal and observation database. Partial support for this work was provided by the National Science Foundation's Course, Curriculum, and Laboratory Improvement (CCLI) program under Award No. 0736592.

  19. A Scalable Neuro-inspired Robot Controller Integrating a Machine Learning Algorithm and a Spiking Cerebellar-like Network

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop

    2017-01-01

    Combining the Fable robot, a modular robot, with a neuroinspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. However, the cerebellar-like sub-circuit integrates LWPR input...

  20. Hand-Eye Calibration and Inverse Kinematics of Robot Arm using Neural Network

    DEFF Research Database (Denmark)

    Wu, Haiyan; Tizzano, Walter; Andersen, Thomas Timm

    2013-01-01

    Traditional technologies for solving hand-eye calibration and inverse kinematics are cumbersome and time consuming due to the high nonlinearity in the models. An alternative to the traditional approaches is the artificial neural network, inspired by the remarkable abilities of animals in different...

  1. Multiangle Imaging Spectroradiometer (MISR) Global Aerosol Optical Depth Validation Based on 2 Years of Coincident Aerosol Robotic Network (AERONET) Observations

    Science.gov (United States)

    Kahn, Ralph A.; Gaitley, Barbara J.; Martonchik, John V.; Diner, David J.; Crean, Kathleen A.; Holben, Brent

    2005-01-01

    Performance of the Multiangle Imaging Spectroradiometer (MISR) early postlaunch aerosol optical thickness (AOT) retrieval algorithm is assessed quantitatively over land and ocean by comparison with a 2-year measurement record of globally distributed AERONET Sun photometers. There are sufficient coincident observations to stratify the data set by season and expected aerosol type. In addition to reporting uncertainty envelopes, we identify trends and outliers, and investigate their likely causes, with the aim of refining algorithm performance. Overall, about 2/3 of the MISR-retrieved AOT values fall within [0.05 or 20% x AOT] of the Aerosol Robotic Network (AERONET) values, and more than a third are within [0.03 or 10% x AOT]. Correlation coefficients are highest for maritime stations (approximately 0.9) and lowest for dusty sites (above approximately 0.7). Retrieved spectral slopes closely match Sun photometer values for biomass burning and continental aerosol types. Detailed comparisons suggest that adding more absorbing spherical particles, more realistic dust analogs, and a richer selection of multimodal aerosol mixtures to the algorithm climatology would reduce the remaining discrepancies for MISR retrievals over land; in addition, refining instrument low-light-level calibration could reduce or eliminate a small but systematic offset in maritime AOT values. On the basis of cases for which current particle models are representative, a second-generation MISR aerosol retrieval algorithm incorporating these improvements could provide AOT accuracy unprecedented for a spaceborne technique.
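
    A small sketch of the comparison criterion in Python, assuming the bracket notation means "within the larger of 0.05 and 20% of the AERONET AOT" (the matchup values below are made up):

        def within_envelope(misr_aot, aeronet_aot, abs_tol=0.05, rel_tol=0.20):
            # True when the MISR retrieval falls inside the agreement envelope.
            return abs(misr_aot - aeronet_aot) <= max(abs_tol, rel_tol * aeronet_aot)

        # Fraction of coincident retrievals meeting the criterion:
        pairs = [(0.12, 0.10), (0.45, 0.52), (0.08, 0.20)]  # (MISR, AERONET), made up
        frac = sum(within_envelope(m, a) for m, a in pairs) / len(pairs)
        print(f"{frac:.2f} of matchups within [0.05 or 20% x AOT]")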

  2. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of a gesture recognition module and a robot behaviour control module. For the gesture recognition module, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain a neutral hand pose before the training and testing of the CNN. For the robot behaviour control module, the robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the gesture recognition module can easily be used in the XABSL programming platform to implement robot pick-and-place tasks. An experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, on the XABSL programming platform, the experiment showed that a cube-stacking task was easily programmed by human demonstration.

  3. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced when the first AESOP (voice-controlled camera holder) prototype robot was used clinically in 1993 and then marketed as the first FDA-cleared surgical robot in 1994. Since then many robot prototypes, such as the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to add to the functions of the robot and try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  4. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing, and is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques, and communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development; furthermore, it interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing, such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
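
    A simplified Python sketch of the time-of-travel idea (the real computation runs on the FPGA; the pixel pitch, time units and horizontal-neighbour choice here are illustrative assumptions): each motion event carries a pixel address and a timestamp, and the speed between adjacent pixels is the pixel pitch divided by the inter-event time.

        PIXEL_PITCH_UM = 10.0   # assumed pixel spacing in micrometres

        last_event_time = {}    # (x, y) -> timestamp of the last edge event, in us

        def on_event(x, y, t_us):
            # Estimate velocity from the horizontally adjacent pixel's last event.
            prev = last_event_time.get((x - 1, y))
            last_event_time[(x, y)] = t_us
            if prev is None or t_us <= prev:
                return None
            return PIXEL_PITCH_UM / (t_us - prev)   # um per microsecond

        print(on_event(4, 3, 900))    # first event on this row -> None
        print(on_event(5, 3, 1000))   # 10 um / 100 us = 0.1 um/us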

  5. Convolutional Neural Network-Based Classification of Driver’s Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors

    Directory of Open Access Journals (Sweden)

    Kwan Woo Lee

    2018-03-01

    Full Text Available Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can become detached from the driver's body, it is difficult to rely on these bio-signals to determine emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, when driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving, using input images of the driver's face obtained with near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.

  6. Innovation in robotic surgery: the Indian scenario.

    Science.gov (United States)

    Deshpande, Suresh V

    2015-01-01

    Robotics is the science. In scientific words, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering: a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - in laparoscopy to control the telescope-camera unit electromechanically, and then, with a computer interface, using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution to the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  7. Innovation in Robotic Surgery: The Indian Scenario

    Directory of Open Access Journals (Sweden)

    Suresh V Deshpande

    2015-01-01

    Full Text Available Robotics is the science. In scientific words, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering: a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - in laparoscopy to control the telescope-camera unit electromechanically, and then, with a computer interface, using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution to the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  8. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors.

    Science.gov (United States)

    Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung

    2018-05-10

    The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles, and the prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environments makes iris segmentation challenging because of the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues arising in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets from visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.
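
    A schematic PyTorch dense block (an illustration of the dense-connectivity pattern, not the authors' IrisDenseNet architecture; channel counts are arbitrary): each layer receives the concatenation of all previous feature maps, which is what provides the improved information and gradient flow the paper relies on.

        import torch
        import torch.nn as nn

        class DenseBlock(nn.Module):
            def __init__(self, in_channels, growth_rate=12, num_layers=4):
                super().__init__()
                self.layers = nn.ModuleList()
                channels = in_channels
                for _ in range(num_layers):
                    self.layers.append(nn.Sequential(
                        nn.BatchNorm2d(channels), nn.ReLU(),
                        nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1)))
                    channels += growth_rate   # each layer sees all earlier maps

            def forward(self, x):
                features = [x]
                for layer in self.layers:
                    features.append(layer(torch.cat(features, dim=1)))
                return torch.cat(features, dim=1)

        out = DenseBlock(16)(torch.zeros(1, 16, 64, 64))  # -> shape (1, 64, 64, 64)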

  9. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Muhammad Arsalan

    2018-05-01

    Full Text Available The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles, and the prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environments makes iris segmentation challenging because of the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues arising in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets from visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.

  10. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors, in that the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors may thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals caused by those varied properties. Each light conductor has associated with it two shutters that are independently adjustable relative to each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  11. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera makes pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers, and computer circuits that derive, at the photomultiplier outputs, an analytical function dependent on the position of the scintillations in the crystal at the time. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and to each series group belongs a reference axis running perpendicular to it in the crystal plane. The computer circuits are each assigned to one reference axis. For each series of a series group assigned to a given reference axis, the computer circuit has an adder to produce a scintillation-dependent series signal, from which the projection of the scintillation onto this reference axis is calculated. For this, series signals are used that originate from the two neighbouring photomultiplier series of the group between which the scintillation must have appeared; these are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de

  12. [Laparoscopic colorectal surgery - SILS, robots, and NOTES].

    NARCIS (Netherlands)

    D'Hoore, André; Wolthuis, Albert M.; Mizrahi, Hagar; Parker, Mike; Bemelman, Willem A.; Wara, Pål

    2011-01-01

    Single-incision laparoscopic resection of the colon is feasible, but so far evidence of benefit compared to the standard laparoscopic technique is lacking. Apart from robot-controlled cameras, there is only one robot system on the market capable of performing laparoscopic surgery. The da Vinci

  13. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Full Text Available Applications such as robots employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Derivative) controller. Here, we use two differential wheel drive robots: one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to follow the master robot. For the master robot, a Bluetooth module receives the user's command from an Android application, which is processed by the master robot's controller and used to move the robot. The follower robot receives the image from the Kinect sensor mounted on it and recognizes the master robot. The follower robot identifies the x, y positions by employing the camera and the depth by using the Kinect depth sensor. By identifying the x, y, and z locations of the master robot, the follower robot finds the angle and distance between the master and follower robot, which is given as the error term of a PID controller. Using this, the follower robot follows the master robot. A PID controller is based on feedback and tries to minimize the error. Experiments were conducted on two indigenously developed robots: one depicting a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
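
    A minimal sketch of the follower's control loop in Python (the gains, the 0.05 s step, and the 1 m following distance are made-up values; the Kinect is assumed to report the master's (x, y, z) position in the follower's frame):

        import math

        class PID:
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error, dt):
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        angle_pid = PID(1.2, 0.0, 0.1)       # steer toward the master
        distance_pid = PID(0.8, 0.05, 0.05)  # keep a fixed following distance

        def follow_step(x, y, z, dt=0.05, target_distance=1.0):
            # Angle and distance errors derived from the master's position.
            angle_error = math.atan2(x, z)                 # lateral offset vs. depth
            distance_error = math.hypot(x, z) - target_distance
            turn = angle_pid.step(angle_error, dt)
            forward = distance_pid.step(distance_error, dt)
            # Differential-drive mixing: left and right wheel speeds.
            return forward + turn, forward - turn

        print(follow_step(0.2, 0.0, 1.5))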

  14. Robotic assisted andrological surgery

    Science.gov (United States)

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous loupe or magnification technique. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution through the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, plastics and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, better surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  15. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the speed of execution of the robot. In an actual robot color vision system, processing should happen in real time. We guessed that this problem might be solved using the bicolor analysis technique, and we have been testing a system which we hope will give us insight into the properties of bicolor vision. The experiment used the red channel of a color CCD camera and an image from a monochromatic camera to duplicate McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board, and the red image is then added to the red bank. Conversely, pure red, green and blue color components are obtained from the original bicolor images in the novel color system after a scaling factor is applied to each RGB image. Our search for a bicolor robot vision system was entirely successful.
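
    The mixing step described above is easy to state with NumPy (a sketch with random stand-ins for the two camera signals):

        import numpy as np

        mono = np.random.rand(480, 640)   # stand-in for the monochromatic camera
        red = np.random.rand(480, 640)    # stand-in for the CCD's red channel

        # Copy the mono image into all three banks, then add red to the red bank.
        bicolor = np.stack([mono, mono, mono], axis=-1)
        bicolor[..., 0] += red

        # A pure red component can later be separated again from the bicolor image:
        red_again = bicolor[..., 0] - bicolor[..., 1]   # red bank minus mono copy
        assert np.allclose(red_again, red)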

  16. Global TIE Observatories: Real Time Observational Astronomy Through a Robotic Telescope Network

    Science.gov (United States)

    Clark, G.; Mayo, L. A.

    2001-12-01

    Astronomy in grades K-12 is traditionally taught (if at all) using textbooks and a few simple hands-on activities. Teachers are generally not trained in observational astronomy techniques and are unfamiliar with the most basic astronomical concepts. In addition, most students will never have looked through the eyepiece of a telescope by the time they graduate from high school. The problem becomes even more challenging in inner cities, remote rural areas and low socioeconomic communities, where educational emphasis on topics in astronomy, as well as access to observing facilities, is limited or non-existent. Access to most optical telescope facilities is limited to monthly observing nights that cater to a small percentage of the general public living near the observatory. Even here, the observing experience is a one-time event detached from the process of scientific enquiry and sustained educational application. Additionally, a number of large, "research grade" observatory facilities are largely unused, partially due to the slow creep of light pollution around the facilities as well as the development of newer, more capable telescopes. Though cutting-edge science is often no longer possible at these sites, numerous real research opportunities in astronomy remain for these facilities as educational tools. The possibility now exists to establish a network of research-grade telescopes, no longer useful to the professional astronomical community, that can be made accessible to classrooms, after-school programs, and community-based programs all across the country through existing IT technologies and applications. These telescopes could provide unparalleled research and educational opportunities for a broad spectrum of students and turn underutilized observatory facilities into valuable, state-of-the-art teaching centers. The NASA-sponsored Telescopes In Education project has been wildly successful in engaging the K-12 education community in real-time, hands-on, interactive astronomy

  17. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W7-X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous as well as triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit-sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and computationally complex; we plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates the advantages of CMOS sensor chip technology with fast network connections. EDICAM is built up from three modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small, compact size and robust operation even in harmful environments; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs the image processing algorithms that generate trigger signals; and a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described.
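
    The quoted data rate is easy to verify with a back-of-envelope check (the 1.43 figure evidently uses binary terabytes):

        bits_per_second = 1280 * 1024 * 12 * 444           # pixels x bit depth x fps
        bytes_per_half_hour = bits_per_second / 8 * 1800   # 30 minutes of frames
        print(f"{bytes_per_half_hour / 1024**4:.2f} TiB")  # -> 1.43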

  18. Evolutionary robotics

    Indian Academy of Sciences (India)

    In evolutionary robotics, a suitable robot control system is developed automatically through evolution due to the interactions between the robot and its environment. It is a complicated task, as the robot and the environment constitute a highly dynamical system. Several methods have been tried by various investigators to ...

  19. Robot Aesthetics

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance Jonathan

    This paper considers art-based research practice in robotics through a discussion of our course and relevant research projects in autonomous art. The undergraduate course integrates basic concepts of computer science, robotic art, live performance and aesthetic theory. Through practice ... in robotics research (such as aesthetics, culture and perception), we believe robot aesthetics is an important area for research in contemporary aesthetics....

  20. Filigree Robotics

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Clausen Nørgaard, Esben

    2016-01-01

    Filigree Robotics experiments with the combination of traditional ceramic craft with robotic fabrication in order to generate a new narrative of fine three-dimensional ceramic ornament for architecture.

  1. Construction Tele-Robotics System with AR Presentation

    International Nuclear Information System (INIS)

    Ootsubo, K; Kawamura, T; Yamada, H

    2013-01-01

    A tele-robotics system using bilateral control is an effective tool for tasks in disaster scenes and in extreme environments. Conventional systems are equipped with a few color video cameras that capture views of the task field, and their video images are sent to the operator via a network; usually, the images are captured only from fixed angles, so the operator cannot intuitively obtain a 3D sense of the task field. In our previous study, we proposed a construction tele-robotics system based on VR presentation, in which the operator intuits the geometrical state of the robot presented as CG, but information about the surrounding environment is not included as it is in a video image. We therefore expected that task efficiency could be improved by appending the CG image to the video image. In this study, we developed a new presentation system based on augmented reality (AR), in which the CG image, representing 3D geometric information for the task, is overlaid on the video image. We confirmed the effectiveness of the system experimentally, and additionally verified its usefulness in reducing the effect of the communication delay associated with a tele-robotics system.

  2. Integration of Robotic Resources into FORCEnet

    National Research Council Canada - National Science Library

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-01-01

    The Networked Intelligence Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances...

  3. Epidemic Synchronization in Robotic Swarms

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Nielsen, Jens Frederik Dalsgaard; Ngo, Trung Dung

    2009-01-01

    Clock synchronization in swarms of networked mobile robots is studied in a probabilistic, epidemic framework. In this setting, communication and synchronization are considered to be a randomized process, taking place at unplanned instants of geographical rendezvous between robots. In combination ... as an infinite-dimensional optimal control problem. Illustrative numerical examples are given and commented on.

  4. Epidemic Synchronization in Robotic Swarms

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Nielsen, Jens Frederik Dalsgaard; Ngo, Trung Dung

    2009-01-01

    Clock synchronization in swarms of networked mobile robots is studied in a probabilistic, epidemic framework. In this setting, communication and synchronization are considered to be a randomized process, taking place at unplanned instants of geographical rendezvous between robots. In combination wit...

  5. Determination of Monthly Aerosol Types in Manila Observatory and Notre Dame of Marbel University from Aerosol Robotic Network (AERONET) measurements.

    Science.gov (United States)

    Ong, H. J. J.; Lagrosas, N.; Uy, S. N.; Gacal, G. F. B.; Dorado, S.; Tobias, V., Jr.; Holben, B. N.

    2016-12-01

    This study aims to identify aerosol types at Manila Observatory (MO) and Notre Dame of Marbel University (NDMU) using Aerosol Robotic Network (AERONET) Level 2.0 inversion data with five-dimensional specified clustering and Mahalanobis classification. The parameters used are the 440-870 nm extinction Angström exponent (EAE), 440 nm single scattering albedo (SSA), 440-870 nm absorption Angström exponent (AAE), and the 440 nm real and imaginary refractive indices. Specified clustering makes use of AERONET data from 7 sites to define 7 aerosol classes: mineral dust (MD), polluted dust (PD), urban industrial (UI), urban industrial developing (UID), biomass burning white smoke (BBW), biomass burning dark smoke (BBD), and marine aerosols. This is similar to the classes used by Russell et al. (2014). A data point is classified into a class based on the closest 5-dimensional Mahalanobis distance (Russell et al., 2014; Hamill et al., 2016). This method is applied to all 173 MO data points from January 2009 to June 2015 and to all 24 NDMU data points from December 2009 to July 2015 to look at monthly and seasonal variations of aerosol types. The MO and NDMU aerosols are predominantly PD (~77%) and PD & UID (~75%), respectively (Figs. 1a-b); PD is predominant from February to May at MO and from February to March at NDMU. PD results from less strict emission and environmental regulations (Cattrall, 2005). Average SSA values at MO are comparable to the mean SSA for PD (~0.89). This can be attributed to the presence of highly absorbing aerosol types, e.g., carbon, which is a product of transportation emissions. The second most dominant aerosol type at MO is UID (~15%); at NDMU it is BBW (~25%). In Manila, the main sources of PD and UID (fine particles) are generally vehicular combustion (Oanh et al., 2006). The detection of BBW at MO from April to May can be attributed to the fires that are common in these dry months. At NDMU, the BBW source is biomass burning (smoldering). In this
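
    The classification step reduces to a nearest-class rule in the 5-D (EAE, SSA, AAE, n_real, n_imag) space. A schematic NumPy version follows; the class means and covariances are placeholders, not the AERONET-derived values used in the study:

        import numpy as np

        classes = {
            "PD":  (np.array([1.0, 0.89, 1.5, 1.50, 0.010]), np.eye(5) * 0.02),
            "UID": (np.array([1.6, 0.90, 1.2, 1.45, 0.012]), np.eye(5) * 0.02),
            "BBW": (np.array([1.8, 0.93, 1.4, 1.52, 0.008]), np.eye(5) * 0.02),
        }

        def classify(point):
            # Assign the point to the class with the smallest Mahalanobis distance.
            best, best_d = None, np.inf
            for name, (mean, cov) in classes.items():
                diff = point - mean
                d2 = diff @ np.linalg.inv(cov) @ diff   # squared Mahalanobis distance
                if d2 < best_d:
                    best, best_d = name, d2
            return best

        print(classify(np.array([1.1, 0.88, 1.5, 1.49, 0.011])))  # -> "PD"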

  6. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras and fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data is acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted; the image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
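
    As a simplified stand-in for the proposed mapping (the paper fits it with neural networks; the miniature below fits a linear 3x3 colour-correction matrix by least squares from hypothetical colour-tile readings, which conveys the same idea):

        import numpy as np

        # Hypothetical RGB readings of the same colour tiles from two cameras.
        camera_rgb = np.array([[0.81, 0.20, 0.19],
                               [0.22, 0.78, 0.25],
                               [0.18, 0.24, 0.80]])
        reference_rgb = np.array([[0.85, 0.18, 0.15],
                                  [0.20, 0.82, 0.20],
                                  [0.15, 0.20, 0.85]])

        # Solve camera_rgb @ M ~= reference_rgb for the 3x3 matrix M.
        M, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)

        def map_to_reference(pixels):
            # Apply the fitted correction to (N, 3) RGB data from the new camera.
            return np.clip(pixels @ M, 0.0, 1.0)

        print(map_to_reference(camera_rgb))   # approximately reference_rgb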

  7. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures so as to optimize the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to handle both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning, implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  8. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in a geometrical or coordinate arrangement defining a detector matrix, to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further processing components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  9. A neural network and IoT-based scheme for performance assessment in Internet of Robotic Things

    OpenAIRE

    Razafimandimby , Cristanel; Loscri , Valeria; Vegni , Anna Maria

    2016-01-01

    Internet of Robotic Things (IoRT) is a new concept introduced for the first time by ABI Research. Unlike the Internet of Things (IoT), IoRT provides active sensorization and is considered the new evolution of IoT. This new concept will bring new opportunities and challenges, while providing new business ideas for IoT and robotics entrepreneurs. In this paper, we focus particularly on two issues: (i) connectivity maintenance among multiple IoRT robots, and (...

  10. What makes a robot 'social'?

    Science.gov (United States)

    Jones, Raya A

    2017-08-01

    Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.

  11. Robotic environments

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic architectural environments to be implemented and tested in the last decade in virtual and physical prototypes. These prototypes are incorporating sensing-actuating

  12. Healthcare Robotics

    OpenAIRE

    Riek, Laurel D.

    2017-01-01

    Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting care givers, and aiding health care workers. However, before robots are able to be widely deployed, it is crucial that both the research and industrial communities work together to establish a strong evidence-base for healthcare robotics, and surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key sta...

  13. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
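
    The corner-finding step can be sketched in a few lines of Python, assuming two aligned frames captured with the LEDs on and off; SciPy's connected-component labelling stands in for whatever blob detection the authors used:

        import numpy as np
        from scipy import ndimage

        def find_led_corners(img_on, img_off, threshold=0.3):
            # Bright spots in the difference image mark the active LED fiducials.
            diff = np.abs(img_on.astype(float) - img_off.astype(float))
            mask = diff > threshold * diff.max()
            labels, n = ndimage.label(mask)
            centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
            # Expect four blobs: the LEDs at the extreme checkerboard corners.
            return [(x, y) for y, x in centroids]

        # Synthetic demo: two dark frames, four bright spots in the "on" frame.
        off = np.zeros((100, 100))
        on = off.copy()
        for y, x in [(10, 10), (10, 90), (90, 10), (90, 90)]:
            on[y - 1:y + 2, x - 1:x + 2] = 1.0
        print(find_led_corners(on, off))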

  14. Industrial Robots.

    Science.gov (United States)

    Reed, Dean; Harden, Thomas K.

    Robots are mechanical devices that can be programmed to perform some task of manipulation or locomotion under automatic control. This paper discusses: (1) early developments of the robotics industry in the United States; (2) the present structure of the industry; (3) noneconomic factors related to the use of robots; (4) labor considerations…

  15. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  16. Hydraulic bilateral construction robot; Yuatsushiki bilateral kensetsu robot

    Energy Technology Data Exchange (ETDEWEB)

    Maehata, K.; Mori, N. [Kayaba Industry Co. Ltd., Tokyo (Japan)

    1999-05-15

    The system constitution of a hydraulic bilateral construction robot, the structures and functions of its important components, and the results of some tests are explained, and the research conducted at Gifu University is described. The construction robot in this report is a servo-controlled version developed from a mini-shovel now available on the market. In addition to an electrohydraulic servo control system, it is equipped with various sensors for detecting the robot attitude, vibration, and load state, and with a camera for visualizing the surrounding landscape. It is also provided with a bilateral joystick, a remote-control actuator capable of work-sensation feedback, and with a rocking unit that creates robot movements of rolling, pitching, and heaving. With output increased and response made faster by the hydraulic driving system, with the aim of building a robot system superior in performance to the conventional model designed primarily for heavy duty, the construction robot discussed here proves after tests to be a highly sophisticated, remotely controlled robot system. (NEDO)

  17. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles, with the video frames directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object in partial or full view in one camera is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, which is applicable, for example, in wireless sensor networks for surveillance or navigation.
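
    A toy sketch of the per-camera detection stage (frame differencing with an arbitrary threshold; a real system would then match object features across the two views and adjust for perspective, as the record describes):

        import numpy as np

        def detect_motion(prev_frame, frame, threshold=25):
            # Pixels that changed by more than `threshold` grey levels are moving.
            moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
            if not moving.any():
                return None
            ys, xs = np.nonzero(moving)
            return (xs.min(), ys.min(), xs.max(), ys.max())   # bounding box

        # Detections carrying the same timestamp from the two synchronized streams
        # can then be registered to the same object via common features.
        f0 = np.zeros((120, 160), dtype=np.uint8)
        f1 = f0.copy(); f1[40:60, 70:90] = 200    # an object appears
        print(detect_motion(f0, f1))              # -> (70, 40, 89, 59)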

  19. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar; Heide, Felix; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design

  20. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player's point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener...

  1. Robot Mechanisms

    CERN Document Server

    Lenarcic, Jadran; Stanišić, Michael M

    2013-01-01

    This book provides a comprehensive introduction to the area of robot mechanisms, primarily considering industrial manipulators and humanoid arms. The book is intended for both teaching and self-study. Emphasis is given to the fundamentals of kinematic analysis and the design of robot mechanisms. The coverage of topics is untypical. The focus is on robot kinematics. The book creates a balance between theoretical and practical aspects in the development and application of robot mechanisms, and includes the latest achievements and trends in robot science and technology.

  2. An Address Event Representation-Based Processing System for a Biped Robot

    Directory of Open Access Journals (Sweden)

    Uziel Jaramillo-Avila

    2016-02-01

    In recent years, several important advances have been made in the fields of both biologically inspired sensory processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems in order to mimic neural architectures and to process information in the manner of the brain. However, few robotic applications exist to illustrate them. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNNs) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques; most are fairly simple to implement without very sophisticated components, but it can still serve as a good test in more elaborate circumstances. In addition, the proposed locomotion system is able to control the six DOFs of a biped robot in a coordinated way while switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.
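
    The paper's controller is a spiking network on FPGA; purely as an illustration of steering from event-camera data, here is a plain, non-spiking sketch that turns toward the mean x-position of recent DVS events. The event-tuple format, sensor width, and gain are assumptions.

        # Non-spiking illustration of event-driven line following: steer toward
        # the mean x-position of recent events (the paper uses an SNN on FPGA).
        def steering_command(events, sensor_width=128):
            """events: iterable of (x, y, t, polarity) tuples from a DVS."""
            xs = [x for x, y, t, p in events]
            if not xs:
                return 0.0                     # no events: keep heading
            line_x = sum(xs) / len(xs)         # estimated line position
            error = line_x - sensor_width / 2  # offset from image center
            return -0.01 * error               # proportional turn rate toward the line

        print(steering_command([(60, 10, 0.0, 1), (70, 12, 0.001, 1)]))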

  3. Robotics research in Chile

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2016-12-01

    The development of research in robotics in a developing country is a challenging task. Factors such as low research funds, low trust from local companies and the government, and a small number of qualified researchers hinder the development of strong local research groups. In this article, as a case study, we present our robotics research group at the Advanced Mining Technology Center of the Universidad de Chile and the way in which we have addressed these challenges. In 2008, we decided to focus our research efforts on mining, which is the main industry in Chile. We observed that this industry has needs in terms of safety, productivity, operational continuity, and environmental care, all of which could be addressed with robotics and automation technology. In a first stage, we concentrated on building capabilities in field robotics, starting with the automation of a commercial vehicle. An important outcome of this project was earning the confidence of the local mining industry. Then, in a second stage started in 2012, we began working with the local mining industry on technological projects. In this article, we describe three of the technological projects that we have developed with industry support: (i) an autonomous vehicle for mining environments without global positioning system coverage; (ii) the inspection of the irrigation flow in heap leach piles using unmanned aerial vehicles and thermal cameras; and (iii) an enhanced vision system for vehicle teleoperation in adverse climatic conditions.

  4. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  5. Robotics education

    International Nuclear Information System (INIS)

    Benton, O.

    1984-01-01

    Robotics education courses are rapidly spreading throughout the nation's colleges and universities. Engineering schools are offering robotics courses as part of their mechanical or manufacturing engineering degree programs. Two-year colleges are developing an Associate Degree in robotics. In addition to regular courses, colleges are offering seminars in robotics and related fields. These seminars draw excellent participation at costs running up to $200 per day for each participant; the last one drew 275 people from Texas to Virginia. Seminars are also offered by trade associations, private consulting firms, and robot vendors. IBM, for example, has the Robotic Assembly Institute in Boca Raton and charges about $1,000 per week for a course, basically for owners of IBM robots. Education (and training) can be as short as one day or as long as two years. Here is the educational pattern that is developing now.

  6. Predicting workload profiles of brain-robot interface and electromyographic neurofeedback with cortical resting-state networks: personal trait or task-specific challenge?

    Science.gov (United States)

    Fels, Meike; Bauer, Robert; Gharabaghi, Alireza

    2015-08-01

    Objective. Novel rehabilitation strategies apply robot-assisted exercises and neurofeedback tasks to facilitate intensive motor training. We aimed to disentangle task-specific and subject-related contributions to the perceived workload of these interventions and the related cortical activation patterns. Approach. We assessed the perceived workload with the NASA Task Load Index in twenty-one subjects who were exposed to two different feedback tasks in a cross-over design: (i) brain-robot interface (BRI) with haptic/proprioceptive feedback of sensorimotor oscillations related to motor imagery, and (ii) control of neuromuscular activity with feedback of the electromyography (EMG) of the same hand. We also used electroencephalography to examine the cortical activation patterns beforehand in resting state and during the training session of each task. Main results. The workload profile of BRI feedback differed from EMG feedback and was particularly characterized by the experience of frustration. The frustration level was highly correlated across tasks, suggesting subject-related relevance of this workload component. Those subjects who were specifically challenged by the respective tasks could be detected by an interhemispheric alpha-band network in resting state before the training and by their sensorimotor theta-band activation pattern during the exercise. Significance. Neurophysiological profiles in resting state and during the exercise may provide task-independent workload markers for monitoring and matching participants’ ability and task difficulty of neurofeedback interventions.

  7. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, the control and synchronization of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  8. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

    For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, the control and synchronization of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  9. FPGA for Robotic Applications: from Android/Humanoid Robots to Artificial Men

    Directory of Open Access Journals (Sweden)

    Tole Sutikno

    2011-12-01

    Research on home robots has been increasing enormously, and there has always been a continuous research effort on the problems of anthropomorphic robots, which are now called humanoid robots. Robotics has evolved to the point that its different branches have reached a remarkable level of maturity, with neural networks and fuzzy logic serving as the main artificial intelligence techniques for intelligent control in robotics. Despite all this progress, while aiming at accomplishing work-tasks originally charged only to humans, robotic science has perhaps quite naturally turned into the attempt to create artificial men. It is true that artificial men or android humanoid robots certainly open very broad prospects. Such a “robot” may be viewed as a personal helper, and it will be called a home robot, or personal robot. This is the main reason why the two special sections are issued sequentially in TELKOMNIKA.

  10. Rugged Walking Robot

    Science.gov (United States)

    Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.

    1990-01-01

    Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.

  11. Embedded mobile farm robot for identification of diseased plants

    Science.gov (United States)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used on farms for the identification of diseased plants. It puts forth two major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of its GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, the robot mechanical assembly, a camera, and infrared sensors has been used, based on a Mini2440 board on which an embedded Linux OS (operating system) is implemented.

  12. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
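
    CamShift itself is available off the shelf; a minimal single-camera sketch with OpenCV follows, assuming an initial object box and a local camera source. In the paper's scheme, the hue histogram computed below is the state a mobile agent would carry to the adjacent camera during handover.

        # Single-camera CamShift tracking; in the paper, the histogram below is
        # what a mobile agent would hand over to the next camera.
        import cv2

        cap = cv2.VideoCapture(0)              # assumed camera source
        ok, frame = cap.read()
        window = (200, 150, 80, 80)            # assumed initial object box (x, y, w, h)
        x, y, w, h = window
        roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([roi], [0], None, [16], [0, 180])  # hue histogram of object
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            box, window = cv2.CamShift(backproj, window, criteria)  # track object
            print("object window:", window)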

  13. Robot welding process control

    Science.gov (United States)

    Romine, Peter L.

    1991-01-01

    This final report documents the development and installation of software and hardware for Robotic Welding Process Control. Primary emphasis is on serial communications between the CYRO 750 robotic welder, Heurikon minicomputer running Hunter & Ready VRTX, and an IBM PC/AT, for offline programming and control and closed-loop welding control. The requirements for completion of the implementation of the Rocketdyne weld tracking control are discussed. The procedure for downloading programs from the Intergraph, over the network, is discussed. Conclusions are made on the results of this task, and recommendations are made for efficient implementation of communications, weld process control development, and advanced process control procedures using the Heurikon.

  14. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  15. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  16. Robotic building(s)

    NARCIS (Netherlands)

    Bier, H.H.

    2014-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic building to be prototypically implemented in the last decade. In this context, robotic building implies both physically built robotic environments and robotically

  17. Analyzing Cyber-Physical Threats on Robotic Platforms

    OpenAIRE

    Khalil M. Ahmad Yousef; Anas AlMajali; Salah Abu Ghalyon; Waleed Dweik; Bassam J. Mohd

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastat...

  18. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time for the range camera in order to precisely determine the depth information.

  19. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    Science.gov (United States)

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
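
    The record is cut off, but the AX = ZB model can be attacked with generic nonlinear least squares over the twelve unknown pose parameters; the sketch below (not the paper's specific iterative methods) assumes lists of 4x4 homogeneous matrices As and Bs and uses SciPy.

        # Generic iterative solution of AX = ZB by nonlinear least squares
        # (illustrative only; the paper compares several specific iterative methods).
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def to_homogeneous(p):
            """6-vector (rotation vector, translation) -> 4x4 homogeneous matrix."""
            T = np.eye(4)
            T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
            T[:3, 3] = p[3:]
            return T

        def residuals(params, As, Bs):
            X, Z = to_homogeneous(params[:6]), to_homogeneous(params[6:])
            return np.concatenate([(A @ X - Z @ B).ravel() for A, B in zip(As, Bs)])

        def solve_ax_zb(As, Bs):
            sol = least_squares(residuals, np.zeros(12), args=(As, Bs))
            return to_homogeneous(sol.x[:6]), to_homogeneous(sol.x[6:])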

  20. Multi-user identification and efficient user approaching by fusing robot and ambient sensors

    NARCIS (Netherlands)

    Hu, N.; Bormann, R.; Zwölfer, T.; Kröse, B.

    2014-01-01

    We describe a novel framework that combines an overhead camera and a robot RGB-D sensor for real-time people finding. Finding people is one of the most fundamental tasks in robot home care scenarios and it consists of many components, e.g. people detection, people tracking, face recognition, robot

  1. Achievement report for fiscal 2000 on operational research of human cooperative and coexisting (humanoid) robot system. Operational research of humanoid robot system; 2000 nendo ningen kyocho kyozongata robot system un'yo kenkyu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    This paper reports the fiscal 2000 achievements of operational research on a humanoid robot system. Carrying out the development smoothly and efficiently requires accumulating operational know-how, both in periodic checks and maintenance and in the hardware and software aspects, to maintain the functions and performance of the robot platform developed in the previous fiscal year. Checks were made on the fitting of the fasteners and connectors, the batteries, and the sensors. Operation was confirmed and adjusted for the liquid crystal projector of the surround visual display system for remotely controlled operation, as well as the polarization filters, screens, reflector mirrors, and wide-viewing-angle cameras. The arm operation force sensing and presentation system was verified with respect to fitting, its mechanical components, and the operation of the driving system; no change was found over one year of operation, and the performance was sufficient for remote robot operation. The virtual robot platform presented no crashes or impediments during erroneous use of the dynamics simulator and distributed network processing system disks. (NEDO)

  2. Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new kind of uncertain chained model of a nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. Then, a novel time-varying feedback controller is proposed for exponentially stabilizing the position and orientation of the robot using visual feedback and a switching strategy when the camera parameters are not known. The exponential s...

  3. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    Science.gov (United States)

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
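
    The 11-point solver is beyond a short sketch, but the paper's other contribution, RANSAC-style inlier selection that samples across several datasets at once, can be outlined generically. Everything here (the solve and error callbacks, per-dataset sample sizes) is a hypothetical interface, not the paper's code.

        # Generic RANSAC over multiple datasets: each minimal sample is drawn
        # across all pairwise correspondence sets, per the paper's selection idea.
        import random

        def multiset_ransac(datasets, solve, error, sizes, threshold, iterations=1000):
            """datasets: list of correspondence lists; sizes: points drawn per dataset."""
            best_model, best_inliers = None, []
            for _ in range(iterations):
                sample = [random.sample(ds, k) for ds, k in zip(datasets, sizes)]
                model = solve(sample)          # fit from the joint minimal sample
                if model is None:
                    continue
                inliers = [[c for c in ds if error(model, i, c) < threshold]
                           for i, ds in enumerate(datasets)]
                if sum(map(len, inliers)) > sum(map(len, best_inliers or [[]])):
                    best_model, best_inliers = model, inliers
            return best_model, best_inliers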

  4. Intelligent networked teleoperation control

    CERN Document Server

    Li, Zhijun; Su, Chun-Yi

    2015-01-01

    This book describes a unified framework for networked teleoperation systems involving multiple research fields: networked control systems for linear and nonlinear forms, bilateral teleoperation, trilateral teleoperation, multilateral teleoperation and cooperative teleoperation. It closely examines networked control as a field at the intersection of systems & control and robotics and presents a number of experimental case studies on testbeds for robotic systems, including networked haptic devices, robotic network systems and sensor network systems. The concepts and results outlined are easy to understand, even for readers fairly new to the subject. As such, the book offers a valuable reference work for researchers and engineers in the fields of systems & control and robotics.

  5. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  6. Soft Robotics.

    Science.gov (United States)

    Whitesides, George M

    2018-04-09

    This description of "soft robotics" is not intended to be a conventional review, in the sense of a comprehensive technical summary of a developing field. Rather, its objective is to describe soft robotics as a new field-one that offers opportunities to chemists and materials scientists who like to make "things" and to work with macroscopic objects that move and exert force. It will give one (personal) view of what soft actuators and robots are, and how this class of soft devices fits into the more highly developed field of conventional "hard" robotics. It will also suggest how and why soft robotics is more than simply a minor technical "tweak" on hard robotics and propose a unique role for chemistry, and materials science, in this field. Soft robotics is, at its core, intellectually and technologically different from hard robotics, both because it has different objectives and uses and because it relies on the properties of materials to assume many of the roles played by sensors, actuators, and controllers in hard robotics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image.

  8. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  9. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  10. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
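
    At MonoSLAM's core is an extended Kalman filter over camera state and sparse landmarks; the generic predict/update cycle looks roughly like the sketch below, with f, h the motion and measurement models and F, H their Jacobians (all supplied by the caller; the paper's own motion model is a smooth constant-velocity one).

        # Skeleton of the EKF cycle underlying MonoSLAM-style tracking (illustrative).
        import numpy as np

        def ekf_step(x, P, f, F, h, H, Q, R, z):
            """One predict/update; F, H are Jacobians of f, h evaluated at x."""
            x_pred = f(x)                        # motion prediction (e.g., constant velocity)
            P_pred = F @ P @ F.T + Q             # propagate uncertainty
            y = z - h(x_pred)                    # innovation: measured vs predicted feature
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new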

  11. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
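
    As a sketch of the feature/classifier stage only: per-frame joint quaternions are flattened into one vector per gesture window and fed to a neural network. The data shapes below are invented placeholders, and scikit-learn's MLP stands in for the paper's NN classifiers.

        # Sketch of the feature/classifier stage: stack joint-angle quaternions per
        # gesture window and train a neural network (scikit-learn MLP as a stand-in).
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        # Placeholder data: 200 gesture samples, 15 joints x 4 quaternion
        # components per frame, 30 frames per window, flattened per sample.
        X = rng.normal(size=(200, 15 * 4 * 30))
        y = rng.integers(0, 10, size=200)      # ten gesture classes, as in the paper

        clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
        clf.fit(X, y)
        print(clf.predict(X[:5]))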

  12. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment...... contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...
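
    An artificial-potential-field positioning step of the kind CamOn builds on can be sketched in a few lines; the gains, step size, and point obstacles below are assumptions, and a real system would add the composition fields mentioned above.

        # Artificial-potential-field camera positioning: attraction to a desired
        # viewpoint, repulsion from scene geometry (points here, for simplicity).
        import numpy as np

        def apf_step(cam, goal, obstacles, k_att=1.0, k_rep=0.5, step=0.05):
            force = k_att * (goal - cam)         # attractive term
            for obs in obstacles:
                d = cam - obs
                dist = np.linalg.norm(d) + 1e-9
                force += k_rep * d / dist**3     # repulsive term, decays with distance
            return cam + step * force            # move camera along the net force

        cam = np.array([0.0, 0.0, 2.0])
        goal = np.array([3.0, 1.0, 2.0])
        obstacles = [np.array([1.5, 0.5, 2.0])]
        for _ in range(100):
            cam = apf_step(cam, goal, obstacles)
        print(cam)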

  13. Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

    Directory of Open Access Journals (Sweden)

    Alberto Chávez-Aragón

    2013-01-01

    This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method to integrate raw streams from depth sensors in the task of 3D profiling and reconstruction and a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended for rapidly providing a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is positioned inside the defined scanning area, a collection of reference parts on the bodywork is automatically recognized from a mosaic of color images collected by a network of Kinect sensors distributed around the vehicle, and a global frame of reference is set up. Sections of the depth information on one side of the vehicle are then collected, aligned, and merged into a global RGB-D model. Finally, a 3D triangular mesh modelling the body panels of the vehicle is automatically built. The approach has applications in the intelligent transportation industry, automated vehicle inspection, quality control, automatic car wash systems, automotive production lines, and scan alignment and interpretation.

  14. Optical Flow based Robot Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Kahlouche Souhila

    2008-11-01

    In this paper we develop an algorithm for the visual obstacle avoidance of an autonomous mobile robot. The input of the algorithm is an image sequence grabbed by an embedded camera on the B21r robot in motion. The optical flow information is then extracted from the image sequence in order to be used in the navigation algorithm. The optical flow provides very important information about the robot environment, such as the disposition of the obstacles, the robot heading, the time to collision, and the depth. The strategy consists of balancing the amount of left- and right-side flow to avoid obstacles; this technique allows the robot to navigate without any collision with obstacles. The robustness of the algorithm is shown through examples.
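
    The balance strategy can be sketched directly with OpenCV's dense optical flow: sum the flow magnitude in the left and right image halves and steer away from the side with more flow. The video file name and the normalized turn command are assumptions.

        # Balance-strategy sketch: compare optical-flow magnitude in the left and
        # right halves of the image and steer toward the side with less flow.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("robot.avi")            # assumed video source
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)         # per-pixel flow magnitude
            half = mag.shape[1] // 2
            left, right = mag[:, :half].sum(), mag[:, half:].sum()
            # Nearby obstacles generate more flow: turn away from the larger side.
            turn = (right - left) / (right + left + 1e-9)  # >0 means steer left
            print("turn command:", turn)
            prev_gray = gray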

  15. Deployment of Remotely-Accessible Robotics Laboratory

    Directory of Open Access Journals (Sweden)

    Richard Balogh

    2012-03-01

    Robotnacka is an autonomous drawing mobile robot, designed for teaching beginners the Logo programming language. It can also be used as an experimental platform, in our case in a remotely accessible robotic laboratory where the robots can be controlled via the Internet. In addition to the basic version of the robot, versions equipped with a gripper, a wireless camera, and additional ultrasonic distance sensors are available. The laboratory is permanently available online and provides a simple way to incorporate robotics into the teaching of mathematics, programming, and other subjects. The laboratory has been in use for several years. We describe its functionality and summarize our experience.

  16. Autonomous Mobile Robot That Can Read

    Directory of Open Access Journals (Sweden)

    Létourneau Dominic

    2004-01-01

    The ability to read would surely contribute to the increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extracting the characters, and recognizing them as symbols, characters, and words. Using an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and it must use its limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach, demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and fonts in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a 233 MHz Pentium and a Sony EVI-D30 pan-tilt-zoom camera.
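
    As a generic stand-in for the reading step (the paper implements its own recognizer), one can binarize the known message color and hand the result to an off-the-shelf OCR engine; the sketch below assumes the pytesseract wrapper, an installed Tesseract engine, a captured image file, and a red message color.

        # Generic stand-in for the reading step: isolate the (known) message
        # colour, then run off-the-shelf OCR.
        import cv2
        import pytesseract  # requires the Tesseract engine to be installed

        image = cv2.imread("message.png")              # assumed captured view
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        # Assumed colour band for the message characters (saturated red).
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        text = pytesseract.image_to_string(255 - mask)  # OCR on dark-on-light characters
        print(text.strip())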

  17. Collision-free motion coordination of heterogeneous robots

    International Nuclear Information System (INIS)

    Ko, Nak Yong; Seo, Dong Jin; Simmons, Reid G.

    2008-01-01

    This paper proposes a method to coordinate the motion of multiple heterogeneous robots on a network. The proposed method uses prioritization and avoidance. Priority is assigned to each robot; a robot with lower priority avoids the robots of higher priority. To avoid collision with other robots, elastic force and potential field force are used. Also, the method can be applied separately to the motion planning of a part of a robot from that of the other parts of the robot. This is useful for application to the robots of the type mobile manipulator or highly redundant robots. The method is tested by simulation, and it results in smooth and adaptive coordination in an environment with multiple heterogeneous robots

  18. Collision-free motion coordination of heterogeneous robots

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Nak Yong [Chosun University, Gwangju (Korea, Republic of); Seo, Dong Jin [RedOne Technologies, Gwangju (Korea, Republic of); Simmons, Reid G. [Carnegie Mellon University, Pennsylvania (United States)

    2008-11-15

    This paper proposes a method to coordinate the motion of multiple heterogeneous robots on a network. The proposed method uses prioritization and avoidance. Priority is assigned to each robot; a robot with lower priority avoids the robots of higher priority. To avoid collision with other robots, elastic force and potential field force are used. Also, the method can be applied separately to the motion planning of a part of a robot from that of the other parts of the robot. This is useful for application to the robots of the type mobile manipulator or highly redundant robots. The method is tested by simulation, and it results in smooth and adaptive coordination in an environment with multiple heterogeneous robots
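
    The avoidance rule described in this record combines two terms; a minimal sketch, with assumed gains, computes a velocity as an elastic pull toward the robot's own planned path plus a repulsive potential-field push away from each higher-priority robot.

        # Sketch of the avoidance rule: a lower-priority robot is repelled by all
        # higher-priority robots and pulled back to its planned path elastically.
        import numpy as np

        def avoidance_velocity(pos, path_point, higher_priority_robots,
                               k_elastic=1.0, k_rep=0.8):
            v = k_elastic * (path_point - pos)   # elastic pull toward own path
            for other in higher_priority_robots:
                d = pos - other
                dist = np.linalg.norm(d) + 1e-9
                v += k_rep * d / dist**3         # repulsion from higher-priority robot
            return v

        print(avoidance_velocity(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                                 [np.array([0.4, 0.1])]))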

  19. Robotics 101

    Science.gov (United States)

    Sultan, Alan

    2011-01-01

    Robots are used in all kinds of industrial settings. They are used to rivet bolts to cars, to move items from one conveyor belt to another, to gather information from other planets, and even to perform some very delicate types of surgery. Anyone who has watched a robot perform its tasks cannot help but be impressed by how it works. This article…

  20. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    future. A real version of Ava would not last long in a human world because she is basically a solipsist, who does not really care about humans. She cannot co-create the line humans walk along. The robots created as ‘perfect women’ (sex robots) today are very far from the ideal image of Ava...

  1. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

    The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya, had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have...... developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human; robot teachers mediate human...... existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose...

  2. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  3. Social Robots

    DEFF Research Database (Denmark)

    Social robotics is a cutting edge research area gathering researchers and stakeholders from various disciplines and organizations. The transformational potential that these machines, in the form of, for example, caregiving, entertainment or partner robots, pose to our societies and to us as individuals seems to be limited by our technical limitations and phantasy alone. This collection contributes to the field of social robotics by exploring its boundaries from a philosophically informed standpoint. It constructively outlines central potentials and challenges and thereby also provides a stable...

  4. Robotic seeding

    DEFF Research Database (Denmark)

    Pedersen, Søren Marcus; Fountas, Spyros; Sørensen, Claus Aage Grøn

    2017-01-01

    Agricultural robotics has received attention for approximately 20 years, but today there are only a few examples of the application of robots in agricultural practice. The lack of uptake may be (at least partly) because in many cases there is either no compelling economic benefit......, or there is a benefit but it is not recognized. The aim of this chapter is to quantify the economic benefits from the application of agricultural robots under a specific condition where such a benefit is assumed to exist, namely the case of early seeding and re-seeding in sugar beet. With some predefined assumptions...... with regard to speed, capacity and seed mapping, we found that among these two technical systems both early seeding with a small robot and re-seeding using a robot for a smaller part of the field appear to be financially viable solutions in sugar beet production....

  5. Sambot II: A self-assembly modular swarm robot

    Science.gov (United States)

    Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan

    2018-04-01

    The new generation of the self-assembly modular swarm robot, Sambot II, based on the original self-assembly modular swarm robot Sambot and adopting a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of this manuscript, autonomous docking experiments with two Sambot II robots are presented. The results of the experiments are shown and analyzed to verify the feasibility of the whole Sambot II scheme.

  6. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest-growing consumer markets today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time, the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view, the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  7. Micro intelligence robot

    International Nuclear Information System (INIS)

    Jeon, Yon Ho

    1991-07-01

    This book describes micro robots, covering the concept of robots and micro robots, the match rules of micro robot competitions, maze search methods, and the future and prospects of robots. It also explains the making and design of 8-bit robots, including making techniques, software, the sensor board circuit, a stepping motor catalog, Speedy 3, and Mr. Black and Mr. White, as well as the making and design of 16-bit robots, such as a micro robot artist, Jerry 2, and a distance-shortening ("magic art") algorithm for robot simulation.

  8. An Intelligent Robot Programing

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seong Yong

    2012-01-15

    This book introduces intelligent robot programming, with background for beginners, an introduction to VPL and SPL, building an environment for the robot platform, getting started with robot programming, design of a simulation environment, robot autonomous drive control programming, and simulation graphics, including SPL graphic programming (graphical images, graphical shapes, and the application of graphical methods), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming, and robot color sensor programming.

  9. An Intelligent Robot Programing

    International Nuclear Information System (INIS)

    Hong, Seong Yong

    2012-01-01

    This book introduces intelligent robot programming, with background for beginners, an introduction to VPL and SPL, building an environment for the robot platform, getting started with robot programming, design of a simulation environment, robot autonomous drive control programming, and simulation graphics, including SPL graphic programming (graphical images, graphical shapes, and the application of graphical methods), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming, and robot color sensor programming.

  10. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...... a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection......, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras....

  11. Visual Servoing of Mobile Microrobot with Centralized Camera

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2018-01-01

    In this paper, a mechanism of visual servoing for mobile microrobots with a centralized camera is developed, especially for swarm AI applications. In microrobotics, the robots are very small and the amounts of movement are also small. By replacing the various sensors that would otherwise be needed with a single centralized vision sensor, many components, and the need for calibration on every robot, can be eliminated. A study and design for a visual servoing mobile microrobot have been developed. The system uses multi-object tracking and the Hough transform to identify the positions of the robots, and it can control multiple robots at once with an accuracy of 5-6 pixels from the desired target.
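
    A centralized-camera servo step of this kind might look like the sketch below: detect circular robot markers with the Hough transform in an overhead image and emit a proportional command toward a target pixel. The image file, target pixel, Hough parameters, and gain are all assumptions.

        # Centralized-camera sketch: find circular robot markers with the Hough
        # transform, then issue a proportional command toward a target pixel.
        import cv2
        import numpy as np

        frame = cv2.imread("arena.png")                # assumed overhead view
        gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=30, minRadius=5, maxRadius=30)
        target = np.array([320, 240])                  # assumed target pixel
        if circles is not None:
            for x, y, r in np.around(circles[0]):
                error = target - np.array([x, y])      # pixel error to the target
                command = 0.1 * error                  # proportional servo command
                print("robot at", (x, y), "command", command)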

  12. Message Encryption in Robot Operating System: Collateral Effects of Hardening Mobile Robots

    Directory of Open Access Journals (Sweden)

    Francisco J. Rodríguez-Lera

    2018-03-01

    In human–robot interaction situations, robot sensors collect huge amounts of data from the environment in order to characterize the situation. Some of the gathered data ought to be treated as private, such as medical data (i.e., medication guidelines) and personal and safety information (i.e., images of children, home habits, alarm codes, etc.). However, most robotic software development frameworks are not designed for securely managing this information. This paper analyzes the scenario of hardening one of the most widely used robotic middlewares, Robot Operating System (ROS). The study investigates a robot’s performance when ciphering the messages interchanged between ROS nodes under the publish/subscribe paradigm. In particular, this research focuses on the nodes that manage cameras and LIDAR sensors, which are two of the most widespread sensing solutions in mobile robotics, and analyzes the collateral effects of different computing capabilities and encryption algorithms (3DES, AES, and Blowfish) on robot performance. The findings present empirical evidence that simple encryption algorithms are lightweight enough to provide cyber-security even in low-powered robots when carefully designed and implemented. Nevertheless, these techniques come with a number of serious drawbacks regarding robot autonomy and performance if they are applied randomly. To avoid these issues, we define a taxonomy that links the type of ROS message, computational units, and the encryption methods. As a result, we present a model to select the optimal options for hardening a mobile robot using ROS.
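
    ROS itself is not required to illustrate the ciphering step; the sketch below encrypts and authenticates a serialized payload with AES using PyCryptodome, as a publisher-side wrapper might. EAX mode and the key handling shown are illustrative choices, not the paper's exact configuration.

        # Illustrative payload encryption with AES (PyCryptodome), as could wrap
        # a serialized ROS message before publishing; not an official ROS API.
        from Crypto.Cipher import AES
        from Crypto.Random import get_random_bytes

        key = get_random_bytes(16)              # shared key (distribution not shown)

        def encrypt(payload: bytes):
            cipher = AES.new(key, AES.MODE_EAX)  # EAX: confidentiality + integrity
            ciphertext, tag = cipher.encrypt_and_digest(payload)
            return cipher.nonce, tag, ciphertext

        def decrypt(nonce, tag, ciphertext) -> bytes:
            cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
            return cipher.decrypt_and_verify(ciphertext, tag)  # raises if tampered

        msg = b"serialized sensor_msgs/LaserScan bytes"
        print(decrypt(*encrypt(msg)))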

  13. Automatic learning rate adjustment for self-supervising autonomous robot control

    Science.gov (United States)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
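
    The error-coupled learning rate can be captured in a few lines: a filtered (exponentially averaged) positioning error scales the step size, so learning quietly shuts off as the error falls and re-activates when a fault makes it grow. The constants below are assumptions.

        # Sketch of the error-coupled learning rate: a filtered positioning error
        # scales the step size, so learning re-activates when the error grows.
        def update_learning_rate(error, filtered_error, alpha=0.1, gain=0.5,
                                 lr_min=1e-4, lr_max=1.0):
            filtered_error = (1 - alpha) * filtered_error + alpha * abs(error)
            lr = min(lr_max, max(lr_min, gain * filtered_error))
            return lr, filtered_error

        filtered = 1.0
        for err in [0.9, 0.5, 0.2, 0.05, 0.4]:  # error shrinks, then a fault occurs
            lr, filtered = update_learning_rate(err, filtered)
            print(f"error={err:.2f} lr={lr:.3f}")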

  14. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  15. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    In this paper, a novel approach to collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of the robots in implementing their tasks, especially when the robots navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods treat the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this causes the people and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction: the person notifies the robot about the avoidance direction, and the robot searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects its navigation path autonomously, choosing the path closest to the goal location. Humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify the gestures: the first is a Back-Propagation Neural Network (BPNN), and the second is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding the obstacles is implemented and integrated with the HRI system. The system is tested on the H20 robot from DrRobot Company (Canada), and a set of experiments was implemented to report the performance of the system in interacting with the human and avoiding

  16. High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking

    Science.gov (United States)

    Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.

    2016-12-01

    High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking. The NASA Ames Sunphotometer-Satellite Group, the DOE PNNL Atmospheric Sciences and Global Change Division, and NASA Goddard's AERONET (AErosol RObotic NETwork) team recently collaborated on the development of a new airborne sunphotometry instrument that provides information on gases and aerosols extending far beyond what can be derived from discrete-channel direct-beam measurements, while preserving or enhancing many of the desirable AATS features (e.g., compactness, versatility, automation, reliability). The enhanced instrument combines the sun-tracking ability of the current 14-channel NASA Ames AATS-14 with the sky-scanning ability of the ground-based AERONET Sun/sky photometers, while extending both AATS-14 and AERONET capabilities by providing full spectral information from the UV (350 nm) to the SWIR (1,700 nm). Strengths of this measurement approach include many more wavelengths (isolated from gas absorption features) that may be used to characterize aerosols and detailed (oversampled) measurements of the absorption features of specific gas constituents. The Sky Scanning Sun Tracking Airborne Radiometer (3STAR) replicates the radiometer functionality of the AATS-14 instrument but incorporates modern COTS technologies for all instrument subsystems. A 19-channel radiometer bundle design is borrowed from a commercial water column radiance instrument manufactured by Biospherical Instruments of San Diego, California (ref. Morrow and Hooker) and developed using NASA funds under the Small Business Innovative Research (SBIR) program. The 3STAR design also incorporates the latest in robotic motor technology, embodied in rotary actuators from Oriental Motor Corp. with better than 15 arc seconds of positioning accuracy. The control system was designed, tested, and simulated using a hybrid-dynamical modeling methodology. The design also replaces the classic quadrant detector tracking sensor with a

  17. Educational Robotics: Open Questions and New Challenges

    Science.gov (United States)

    Alimisis, Dimitris

    2013-01-01

    This paper investigates the current situation in the field of educational robotics and identifies new challenges and trends focusing on the use of robotic technologies as a tool that will support creativity and other 21st-century learning skills. Finally, conclusions and proposals are presented for promoting cooperation and networking of…

  18. Long distance synchronization of mobile robots

    NARCIS (Netherlands)

    Alvarez Aguirre, A.; Nijmeijer, H.; Oguchi, T.

    2010-01-01

    This paper considers the long distance master-slave and mutual synchronization of unicycle-type mobile robots. The issues that arise when the elements of a robotic network are placed in different locations are addressed, specifically the time-delay induced by the communication channel linking the

  19. Robot Formations Using Only Local Sensing and Control

    DEFF Research Database (Denmark)

    Fredslund, Jakob; Matarić, Maja J

    2001-01-01

    We study the problem of achieving global behavior in a group of robots using only local sensing and interaction, in the context of formations, where the goal is to have N mobile robots establish and maintain some predetermined geometric shape. We have devised a simple, general, robust, localized, behavior-based algorithm that solves the problem for N robots each equipped with sonar, laser, camera, and a radio link for communicating with other robots. The method uses the idea of keeping a single friend at a desired angle (by panning the camera and keeping the friend centered in the image), and only communicating heartbeat messages. We also developed a general analytical method for evaluating formations and applied it to our algorithm. We validate our algorithm both in simulation and with physical robots.

  20. Robotic hip arthroscopy in human anatomy.

    Science.gov (United States)

    Kather, Jens; Hagen, Monika E; Morel, Philippe; Fasel, Jean; Markar, Sheraz; Schueler, Michael

    2010-09-01

    Robotic technology offers technical advantages that might provide new solutions for hip arthroscopy. Two hip arthroscopies were performed in human cadavers using the da Vinci surgical system. During both surgeries, a robotic camera and 5 or 8 mm da Vinci trocars with instruments were inserted into the hip joint for manipulation. Introduction of cameras and working instruments, docking of the robotic system, and instrument manipulation were successful in both cases. The long articulating area of the 5 mm instruments limited movements inside the joint; an 8 mm instrument with a shorter area of articulation offered an improved range of motion. Hip arthroscopy using the da Vinci standard system appears to be a feasible alternative to standard arthroscopy. Instruments and the method of application must be modified and improved before routine clinical application, but further research in this area seems justified, considering the clinical value of such an approach. Copyright 2010 John Wiley & Sons, Ltd.

  1. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.
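
    The record applies the Newton-Raphson (N-R) method to a six-parameter vision system model. The minimal sketch below shows the same iteration pattern on an assumed toy pinhole-style model with six parameters; the paper's actual model, parameters, and EKG variant are not reproduced here.

```python
# Newton-Raphson (Gauss-Newton) sketch for fitting six camera parameters to
# observed image points. The projection model and data are toy assumptions.
import numpy as np

def project(params, pts3d):
    # Assumed pinhole-style model: params = [f, cx, cy, tx, ty, tz].
    f, cx, cy, tx, ty, tz = params
    p = pts3d + np.array([tx, ty, tz])
    return np.column_stack((f * p[:, 0] / p[:, 2] + cx,
                            f * p[:, 1] / p[:, 2] + cy))

def residuals(params, pts3d, obs2d):
    return (project(params, pts3d) - obs2d).ravel()

def newton_raphson(params, pts3d, obs2d, iters=20, eps=1e-6):
    for _ in range(iters):
        r = residuals(params, pts3d, obs2d)
        # Numerical Jacobian of the residual vector w.r.t. the parameters.
        J = np.column_stack([
            (residuals(params + eps * e, pts3d, obs2d) - r) / eps
            for e in np.eye(len(params))])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params = params + step
        if np.linalg.norm(step) < 1e-10:
            break
    return params

pts3d = np.random.default_rng(1).uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]
true = np.array([800.0, 320.0, 240.0, 0.1, -0.05, 0.2])
guess = np.array([700.0, 300.0, 220.0, 0.0, 0.0, 0.0])
print(np.round(newton_raphson(guess, pts3d, project(true, pts3d)), 3))
```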

  2. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.

  3. Space Robotics Challenge

    Data.gov (United States)

    National Aeronautics and Space Administration — The Space Robotics Challenge seeks to infuse robot autonomy from the best and brightest research groups in the robotics community into NASA robots for future...

  4. Robotic arm

    International Nuclear Information System (INIS)

    Kwech, H.

    1989-01-01

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube is disclosed. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel. 23 figs

  5. Robotic surgery

    Science.gov (United States)

    ... with this type of surgery give it some advantages over standard endoscopic techniques. The surgeon can make ...

  6. Robotic parathyroidectomy.

    Science.gov (United States)

    Okoh, Alexis Kofi; Sound, Sara; Berber, Eren

    2015-09-01

    Robotic parathyroidectomy has recently been described. Although the procedure eliminates the neck scar, it is technically more demanding than the conventional approaches. This report is a review of the patients' selection criteria, technique, and outcomes. © 2015 Wiley Periodicals, Inc.

  7. Light Robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    Light Robotics - Structure-Mediated Nanobiophotonics covers the latest means of sculpting of both light and matter for achieving bioprobing and manipulation at the smallest scales. The synergy between photonics, nanotechnology and biotechnology spans the rapidly growing field of nanobiophotonics...

  8. Robotic arm

    Science.gov (United States)

    Kwech, Horst

    1989-04-18

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel.

  9. Navigation control of a multi-functional eye robot

    International Nuclear Information System (INIS)

    Ali, F.A.M.; Hashmi, B.; Younas, A.; Abid, B.

    2016-01-01

    Advancement in the field of robotics has been rigorous over the past few decades. Robots are being used in different fields of science as well as warfare. Research shows that, in the near future, robots will be able to serve in fighting wars. Different countries and their armies have already deployed several military robots. However, there exist some drawbacks of robots, such as their inefficiency and inability to work under abnormal conditions. The ascent of artificial intelligence may resolve these issues in the coming future. The main focus of this paper is to provide a low-cost, long-range, and efficient mechanical as well as software design for an Eye Robot. Using a blend of robotics and image processing, with the addition of artificial-intelligence path-navigation techniques, this project is designed and implemented by controlling the robot (including the robotic arm and camera) manually through a 2.4 GHz RF module. The autonomous functions of the robot include navigation based on the path assigned to the robot. The path is drawn in a VB-based application and then transferred to the robot wirelessly or through a serial port. Wi-Fi-based video streaming with Optical Character Recognition (OCR) can also be observed on remote devices such as laptops. (author)

  10. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge-coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  11. Comparison of Coincident Multiangle Imaging Spectroradiometer and Moderate Resolution Imaging Spectroradiometer Aerosol Optical Depths over Land and Ocean Scenes Containing Aerosol Robotic Network Sites

    Science.gov (United States)

    Abdou, Wedad A.; Diner, David J.; Martonchik, John V.; Bruegge, Carol J.; Kahn, Ralph A.; Gaitley, Barbara J.; Crean, Kathleen A.; Remer, Lorraine A.; Holben, Brent

    2005-01-01

    The Multiangle Imaging Spectroradiometer (MISR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), launched on 18 December 1999 aboard the Terra spacecraft, are making global observations of top-of-atmosphere (TOA) radiances. Aerosol optical depths and particle properties are independently retrieved from these radiances using methodologies and algorithms that make use of the instruments' corresponding designs. This paper compares instantaneous optical depths retrieved from simultaneous and collocated radiances measured by the two instruments at locations containing sites within the Aerosol Robotic Network (AERONET). A set of 318 MISR and MODIS images, obtained during the months of March, June, and September 2002 at 62 AERONET sites, were used in this study. The results show that over land, MODIS aerosol optical depths at 470 and 660 nm are larger than those retrieved from MISR by about 35% and 10% on average, respectively, when all land surface types are included in the regression. The differences decrease when coastal and desert areas are excluded. For optical depths retrieved over ocean, MISR is on average about 0.1 and 0.05 higher than MODIS in the 470 and 660 nm bands, respectively. Part of this difference is due to radiometric calibration and is reduced to about 0.01 and 0.03 when recently derived band-to-band adjustments in the MISR radiometry are incorporated. Comparisons with AERONET data show similar patterns.

  12. Serendipitous Offline Learning in a Neuromorphic Robot

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2016-02-01

    Full Text Available We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviours. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviours. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behaviour.

  13. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  14. Robotic and Survey Telescopes

    Science.gov (United States)

    Woźniak, Przemysław

    Robotic telescopes are revolutionizing the way astronomers collect their data and conduct sky surveys. This chapter begins with a discussion of principles that guide the process of designing, constructing, and operating telescopes and observatories that offer a varying degree of automation, from instruments remotely controlled by observers to fully autonomous systems requiring no human supervision during their normal operations. Emphasis is placed on design trade-offs involved in building end-to-end systems intended for a wide range of science applications. The second part of the chapter contains descriptions of several projects and instruments, both existing and currently under development. It is an attempt to provide a representative selection of actual systems that illustrates the state of the art in technology, as well as important ideas and milestones in the development of the field. The list of presented instruments spans the full range in size, starting from small all-sky monitors, through midrange robotic and survey telescopes, and finishing with large robotic instruments and surveys. Explosive growth of telescope networking is enabling entirely new modes of interaction between survey and follow-up observing. The increasing importance of standardized communication protocols and software is stressed. These developments are driven by the fusion of robotic telescope hardware, massive storage and databases, real-time knowledge extraction, and data cross-correlation on a global scale. The chapter concludes with examples of major science results enabled by these new technologies and future prospects.

  15. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing-water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  16. Towards Coordination and Control of Multi-robot Systems

    DEFF Research Database (Denmark)

    Quottrup, Michael Melholt

    This thesis focuses on control and coordination of mobile multi-robot systems (MRS). MRS can often deal with tasks that are difficult to accomplish with a single robot. One of the challenges is the need to control, coordinate and synchronize the operation of several robots to perform some specified task. This calls for new strategies and methods which allow the desired system behavior to be specified in a formal and succinct way. Two different frameworks for the coordination and control of MRS have been investigated. Framework I - A network of robots is modeled as a network of multi... a requirement specification in Computational Tree Logic (CTL) for a network of robots. The result is a set of motion plans for the robots which satisfy the specification. Framework II - A framework for controller synthesis for a single robot with respect to a requirement specification in Linear-time Temporal...

  17. Recent advances in robotics

    International Nuclear Information System (INIS)

    Beni, G.; Hackwood, S.

    1984-01-01

    Featuring 10 contributions, this volume offers a state-of-the-art report on robotic science and technology. It covers robots in modern industry, robotic control to help the disabled, kinematics and dynamics, six-legged walking robots, a vector analysis of robot manipulators, tactile sensing in robots, and more

  18. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirements of robot binocular stereo vision.
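
    Since the record describes the standard OpenCV calibration flow (checkerboard corner detection followed by estimation of intrinsics and of radial and tangential distortion), a hedged sketch of that flow follows. The board geometry (8x6 = 48 inner corners), square size, and file names are assumptions for illustration.

```python
# Hedged sketch of the OpenCV calibration flow described above. Board
# geometry (8x6 = 48 inner corners), square size, and file names are assumed.
import glob
import cv2
import numpy as np

pattern = (8, 6)        # inner corners per row and column
square = 25.0           # square size in millimetres (assumed)

# 3D board coordinates of the corners (z = 0 on the board plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert img_points, "no usable calibration images found"

# Estimate intrinsics plus radial/tangential distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```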

  19. A Green Robotic Observatory for Astronomy Education

    Science.gov (United States)

    Reddy, Vishnu; Archer, K.

    2008-09-01

    With the development of robotic telescopes and stable remote-observing software, it is currently possible for a small institution to have an affordable astronomical facility for astronomy education. However, a faculty member has to deal with light pollution (observatory location on campus), nightly operations and regular maintenance apart from daytime teaching and research responsibilities. While building an observatory at a remote location is a solution, the cost of constructing and operating such a facility, not to mention the environmental impact, is beyond the reach of most institutions. In an effort to resolve these issues we have developed a robotic remote observatory that can be operated via the internet from anywhere in the world, has a zero operating carbon footprint and minimal impact on the local environment. The prototype observatory is a clam-shell design that houses an 8-inch telescope with an SBIG ST-10 CCD detector. The brain of the observatory is a low-draw 12-volt harsh-duty computer that runs the dome, telescope, CCD camera, focuser, and weather monitoring. All equipment runs off a 12-volt AGM-style battery that has low lead content and hence is more environmentally friendly to dispose of. The total power of 12-14 amp-hours is generated from a set of solar panels that are large enough to maintain a full battery charge for several cloudy days. This completely eliminates the need for a local power grid for operations. Internet access is accomplished via a high-speed cell-phone broadband connection or satellite link, eliminating the need for a phone network. An independent observatory monitoring system interfaces with the observatory computer during operation. The observatory converts to a trailer for transportation to the site and is converted to a semi-permanent building without wheels and towing equipment. This ensures minimal disturbance to the local environment.

  20. Flocking and rendezvous in distributed robotics

    CERN Document Server

    Francis, Bruce A

    2016-01-01

    This brief describes the coordinated control of groups of robots using only sensory input – and no direct external commands. Furthermore, each robot employs the same local strategy, i.e., there are no leaders, and the text also deals with decentralized control, allowing for cases in which no single robot can sense all the others. One can get intuition for the problem from the natural world, for example, flocking birds. How do they achieve and maintain their flying formation? Recognizing their importance as the most basic coordination tasks for mobile robot networks, the brief details flocking and rendezvous. They are shown to be physical illustrations of emergent behaviors with global consensus arising from local interactions. The authors extend the consideration of these fundamental ideas to describe their operation in flying robots and prompt readers to pursue further research in the field.  Flocking and Rendezvous in Distributed Robotics will provide graduate students a firm grounding in the subject, w...

  1. An automated miniature robotic vehicle inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter [Centre for Ultrasonic Engineering, University of Strathclyde, 204 George Street, Glasgow, G1 1XW (United Kingdom)

    2014-02-18

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.

  2. An automated miniature robotic vehicle inspection system

    International Nuclear Information System (INIS)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter

    2014-01-01

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software

  3. Robot Task Commander with Extensible Programming Environment

    Science.gov (United States)

    Hart, Stephen W (Inventor); Yamokoski, John D. (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  4. Soft Robotics Week

    CERN Document Server

    Rossiter, Jonathan; Iida, Fumiya; Cianchetti, Matteo; Margheri, Laura

    2017-01-01

    This book offers a comprehensive, timely snapshot of current research, technologies and applications of soft robotics. The different chapters, written by international experts across multiple fields of soft robotics, cover innovative systems and technologies for soft robot legged locomotion, soft robot manipulation, underwater soft robotics, biomimetic soft robotic platforms, plant-inspired soft robots, flying soft robots, soft robotics in surgery, as well as methods for their modeling and control. Based on the results of the second edition of the Soft Robotics Week, held on April 25 – 30, 2016, in Livorno, Italy, the book reports on the major research lines and novel technologies presented and discussed during the event.

  5. Mechanical Design Of Prototype Exoskeleton Robotic System For Human Leg Movements And Implementation Of Gait Data With Neural Network

    Directory of Open Access Journals (Sweden)

    Evren Meltem Toygar

    2012-06-01

    Full Text Available The target of this study is to design an exoskeleton system for a person with a single disabled lower extremity and to control this exoskeleton system with a neural network. The exoskeleton system is modeled using SolidWorks. At the same time, gait data are acquired on the human body; the sole is divided into four parts, and reaction forces are gauged during walking. Distributions of strain and deformation are obtained using the experimental gait data. The walking is designed using the obtained data, and walking data are derived for the control stage. Power requirements of the actuators are defined.

  6. Rehabilitation robotics.

    Science.gov (United States)

    Krebs, H I; Volpe, B T

    2013-01-01

    This chapter focuses on rehabilitation robotics which can be used to augment the clinician's toolbox in order to deliver meaningful restorative therapy for an aging population, as well as on advances in orthotics to augment an individual's functional abilities beyond neurorestoration potential. The interest in rehabilitation robotics and orthotics is increasing steadily with marked growth in the last 10 years. This growth is understandable in view of the increased demand for caregivers and rehabilitation services escalating apace with the graying of the population. We provide an overview on improving function in people with a weak limb due to a neurological disorder who cannot properly control it to interact with the environment (orthotics); we then focus on tools to assist the clinician in promoting rehabilitation of an individual so that s/he can interact with the environment unassisted (rehabilitation robotics). We present a few clinical results occurring immediately poststroke as well as during the chronic phase that demonstrate superior gains for the upper extremity when employing rehabilitation robotics instead of usual care. These include the landmark VA-ROBOTICS multisite, randomized clinical study which demonstrates clinical gains for chronic stroke that go beyond usual care at no additional cost. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Medical robotics.

    Science.gov (United States)

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT technology has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of the service and professional robotics as well as other technologies, as surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role for both surgical training and planning and for safe rehabilitation in the first stage of the recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today, the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it more likely that, in the near future, their presence will be dramatically increased, taking advantage of the generational change of the end users and the increasing demand for quality in health-care delivery and management.

  8. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that tolerates a total dose of 10^6-10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  9. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that tolerates a total dose of 10^6-10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  10. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
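
    As a loose illustration of the stereo ranging step mentioned above, the sketch below computes a disparity map with OpenCV block matching and converts it to range. The focal length, baseline, and input files are assumed placeholders; JPL's actual pipeline is not reproduced here.

```python
# Illustrative stereo ranging with OpenCV block matching, standing in for
# the TIR stereo processing above; calibration values are assumptions.
import cv2
import numpy as np

left = cv2.imread("tir_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("tir_right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point

# Range from disparity: Z = f * B / d, with assumed focal length f (pixels)
# and baseline B (metres); non-positive disparities are masked out.
f, B = 500.0, 0.3
depth = np.where(disp > 0, f * B / disp, np.nan)
print("median range (m):", np.nanmedian(depth))
```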

  11. Intelligent manipulation technique for multi-branch robotic systems

    Science.gov (United States)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.

  12. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  13. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes thereby requiring less photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the costs of the positron camera and improves its performance

  14. Generic robot architecture

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  15. 'Filigree Robotics'

    DEFF Research Database (Denmark)

    2016-01-01

    -scale 3D printed ceramics accompanied by prints, videos and ceramic probes, which introduce the material and design processes of the project. 'Filigree Robotics' experiments with a combination of the traditional ceramic technique of 'Overforming' with 3D laser scanning and robotic extrusion techniques... application of reflectivity after an initial 3D print. The consideration and integration of this material practice into a digital workflow took place in an interdisciplinary collaboration of ceramicist Flemming Tvede Hansen from KADK Superformlab and architectural researchers from CITA (Martin Tamke, Henrik... to the creation of the form and invites experimentation. In Filigree Robotics we combine the crafting of the mold with a parallel-running generative algorithm, which is fed by a constant laser scan of the 3D surface. This algorithm analyses the topology of the mold, identifies high and low points and uses...

  16. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S; Zanela, S; Santini, A; Nanni, V [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some of the Terpsichore project's results, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual driving of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  17. Real-Time Augmented Reality for Robotic-Assisted Surgery

    DEFF Research Database (Denmark)

    Jørgensen, Martin Kibsgaard; Kraus, Martin

    2015-01-01

    Training in robotic-assisted minimally invasive surgery is crucial, but training with actual surgery robots is relatively expensive. Therefore, improving the efficiency of this training is of great interest in robotic surgical education. One of the current limitations of this training is the limited visual communication between the instructor and the trainee. As the trainee's view is limited to that of the surgery robot's camera, even a simple task such as pointing is difficult. We present a compact system to overlay the video streams of the da Vinci surgery systems with interactive three-dimensional computer graphics in real time. Our system makes it possible to easily deploy new user interfaces for robotic-assisted surgery training. The system has been positively evaluated by two experienced instructors in robot-assisted surgery.

  18. Monte Carlo Registration and Its Application with Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Christian Rink

    2016-01-01

    Full Text Available This work focuses on Monte Carlo registration methods and their application with autonomous robots. A streaming and an offline variant are developed, both based on a particle filter. The streaming registration is performed in real time during data acquisition with a laser striper, allowing for on-the-fly pose estimation. Thus, the acquired data can be instantly utilized, for example, for object modeling or robot manipulation, and the laser scan can be aborted after convergence. Curvature features are calculated online, and the estimated poses are optimized in the particle weighting step. For sampling the pose particles, uniform, normal, and Bingham distributions are compared. The methods are evaluated with a high-precision laser striper attached to an industrial robot and with a noisy Time-of-Flight camera attached to service robots. The applications shown range from robot-assisted teleoperation, through autonomous object modeling, to mobile robot localization.
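
    To make the particle-filter idea concrete, here is a toy 2D Monte Carlo registration: pose particles are weighted by a point-to-model misfit and resampled. The noise scales and the plain Gaussian jitter are illustrative choices (the paper compares uniform, normal, and Bingham sampling); this is a sketch, not the authors' implementation.

```python
# Toy 2D Monte Carlo registration sketch (assumed scales, not the paper's).
import numpy as np

rng = np.random.default_rng(2)
model = rng.uniform(-1, 1, (100, 2))                 # reference point set

true_theta, true_t = 0.4, np.array([0.3, -0.2])
c, s = np.cos(true_theta), np.sin(true_theta)
scan = model @ np.array([[c, -s], [s, c]]).T + true_t \
       + rng.normal(0, 0.01, model.shape)            # observed scan

def cost(pose):
    """Mean squared nearest-point distance after undoing the pose."""
    th, tx, ty = pose
    c, s = np.cos(th), np.sin(th)
    aligned = (scan - [tx, ty]) @ np.array([[c, -s], [s, c]])  # inverse rot.
    d = np.linalg.norm(aligned[:, None, :] - model[None, :, :], axis=2).min(1)
    return (d ** 2).mean()

# Particle filter over (theta, tx, ty): weight, resample, jitter.
particles = np.column_stack((rng.uniform(-np.pi, np.pi, 200),
                             rng.uniform(-1, 1, (200, 2))))
for _ in range(12):
    costs = np.array([cost(p) for p in particles])
    w = np.exp(-(costs - costs.min()) / 0.02)        # relative weights
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    particles = particles[idx] + rng.normal(0, [0.05, 0.02, 0.02],
                                            particles.shape)

print("estimated (theta, tx, ty):", particles.mean(axis=0))
```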

  19. Robotic surgery in gynecology

    Directory of Open Access Journals (Sweden)

    Jean Bouquet De Jolinière

    2016-05-01

    Full Text Available Abstract Minimally invasive surgery (MIS) can be considered the greatest surgical innovation of the past thirty years. It revolutionized surgical practice with well-proven advantages over traditional open surgery: reduced surgical trauma and incision-related complications, such as surgical-site infections, postoperative pain and hernia, reduced hospital stay, and improved cosmetic outcome. Nonetheless, proficiency in MIS can be technically challenging, as conventional laparoscopy is associated with several limitations: the two-dimensional (2D) monitor's reduced depth perception, camera instability, limited range of motion, and steep learning curves. The surgeon has low force feedback, which allows simple gestures, respect for tissues and more effective treatment of complications. Since the 1980s, several computer science and robotics projects have been set up to overcome the difficulties encountered with conventional laparoscopy, to augment the surgeon's skills, achieve accuracy and high precision during complex surgery, and facilitate the widespread adoption of MIS. Surgical instruments are guided by haptic interfaces that replicate and filter hand movements. Robotically assisted technology offers advantages that include improved three-dimensional stereoscopic vision, wristed instruments that improve dexterity, and tremor-canceling software that improves surgical precision.

  20. The Malaysian Robotic Solar Observatory (P29)

    Science.gov (United States)

    Othman, M.; Asillam, M. F.; Ismail, M. K. H.

    2006-11-01

    Robotic observatories with small telescopes can make significant contributions to astronomical observation. They provide an encouraging environment for astronomers to focus on data analysis and research while at the same time reducing the time and cost of observation. The observatory will house the primary 50 cm robotic telescope in the main dome, which will be used for photometry, spectroscopy and astrometry observation activities. The secondary telescope is a robotic multi-apochromatic refractor (maximum diameter: 15 cm) which will be housed in the smaller dome. This telescope set will be used for solar observation, mainly in three different wavelengths simultaneously: the continuum, H-alpha and the calcium K line. The observatory is also equipped with an automated weather station, a cloud and rain sensor, and an all-sky camera to monitor the climatic conditions, sense clouds (before rain) and view the real-time sky above the observatory. In conjunction with the Langkawi All-Sky Camera, the observatory website will also display images from the Malaysia - Antarctica All-Sky Camera used to monitor the sky at Scott Base, Antarctica. Both all-sky images can be displayed simultaneously to show the difference between the equatorial and Antarctic skies. This paper will describe the Malaysian Robotic Observatory, including the systems available and the method of access by other astronomers. We will also suggest possible collaboration with other observatories in this region.

  1. A Combination of Machine Learning and Cerebellar-like Neural Networks for the Motor Control and Motor Learning of the Fable Modular Robot

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Pacheco, Moises

    2017-01-01

    We scaled up a bio-inspired control architecture for the motor control and motor learning of a real modular robot. In our approach, the Locally Weighted Projection Regression algorithm (LWPR) and a cerebellar microcircuit coexist, in the form of a Unit Learning Machine. The LWPR algorithm optimizes...... the input space and learns the internal model of a single robot module to command the robot to follow a desired trajectory with its end-effector. The cerebellar-like microcircuit refines the LWPR output delivering corrective commands. We contrasted distinct cerebellar-like circuits including analytical...

  2. Medical robotics

    CERN Document Server

    Troccaz, Jocelyne

    2013-01-01

    In this book, we present medical robotics, its evolution over the last 30 years in terms of architecture, design and control, and the main scientific and clinical contributions to the field. For more than two decades, robots have been part of hospitals and have progressively become a common tool for the clinician. Because this domain has now reached a certain level of maturity, it seems important and useful to provide an overview of the scientific, technological and clinical achievements and still-open issues. This book describes the short history of the domain, its specificity and constraints, and

  3. Service Robots

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Nielsen, Jeppe Agger; Andersen, Kim Normann

    The position presented in this paper is that in order to understand how service robots shape, and are being shaped by, the physical and social contexts in which they are used, we need to consider both work/organizational analysis and interaction design. We illustrate this with qualitative data...... and personal experiences to generate discussion about how to link these two traditions. This paper presents selected results from a case study that investigated the implementation and use of robot vacuum cleaners in Danish eldercare. The study demonstrates interpretive flexibility with variation...

  4. Robot Choreography

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith

    2016-01-01

    We propose a robust framework for combining performance paradigms with human robot interaction (HRI) research. Following an analysis of several case studies that combine the performing arts with HRI experiments, we propose a methodology and “best practices” for implementing choreography and other...... performance paradigms in HRI experiments. Case studies include experiments conducted in laboratory settings, “in the wild”, and live performance settings. We consider the technical and artistic challenges of designing and staging robots alongside humans in these various settings, and discuss how to combine...

  5. A Hybrid Neural Network Approach for Kinematic Modeling of a Novel 6-UPS Parallel Human-Like Mastication Robot

    Directory of Open Access Journals (Sweden)

    Hadi Kalani

    2016-04-01

    Full Text Available Introduction: We aimed to introduce a 6-universal-prismatic-spherical (UPS) parallel mechanism for human jaw motion and theoretically evaluate its kinematic problem. We proposed a strategy to provide a fast and accurate solution to the kinematic problem. The proposed strategy accelerates the process of solution-finding for the direct kinematic problem by reducing the number of iterations required to reach the desired accuracy level. Materials and Methods: To overcome the direct kinematic problem, an artificial neural network and a third-order Newton-Raphson algorithm were combined to provide an improved hybrid method. In this method, an approximate solution to the direct kinematic problem is produced by the neural network. This solution is then used as the initial guess for the third-order Newton-Raphson algorithm, which provides an answer with the desired level of accuracy. Results: The results showed that the proposed combination could find an approximate solution and reduce the execution time for the direct kinematic problem. The results also showed that muscular actuations exhibited periodic behaviors, and the maximum length variation of the temporalis muscle was larger than that of the masseter and pterygoid muscles. By reducing the processing time for solving the direct kinematic problem, more time can be devoted to control calculations. In this method, for relatively high levels of accuracy, the number of iterations and the computational time decreased by 90% and 34%, respectively, compared to the conventional Newton method. Conclusion: The present analysis could allow researchers to characterize and study the mastication process by specifying different chewing patterns (e.g., muscle displacements).
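
    A minimal sketch of the hybrid strategy follows, with two simplifications loudly flagged: a toy planar three-legged mechanism stands in for the 6-UPS jaw model, and an ordinary (second-order) Newton refinement replaces the paper's third-order variant. Geometry and network settings are assumptions.

```python
# Hybrid direct kinematics sketch: a neural network supplies the initial
# guess, Newton iterations refine it. Toy geometry, not the 6-UPS jaw robot.
import numpy as np
from sklearn.neural_network import MLPRegressor

base = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])   # fixed leg anchors

def leg_lengths(pose):
    # "Platform" reduced to a point at (x, y); legs run from anchors to it.
    return np.linalg.norm(base - pose, axis=1)

# Train the network on sampled (leg lengths -> pose) pairs.
rng = np.random.default_rng(3)
poses = rng.uniform(0.2, 1.8, (2000, 2))
lengths = np.array([leg_lengths(p) for p in poses])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(lengths, poses)

def direct_kinematics(L, iters=10):
    pose = net.predict(L.reshape(1, -1))[0]              # coarse NN guess
    for _ in range(iters):                               # Newton refinement
        r = leg_lengths(pose) - L                        # length residuals
        J = (pose - base) / leg_lengths(pose)[:, None]   # d(length)/d(pose)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + step
        if np.linalg.norm(step) < 1e-12:
            break
    return pose

target = np.array([0.7, 1.1])
print(direct_kinematics(leg_lengths(target)), "vs true", target)
```

    The point of the hybrid split is visible here: the network prediction is cheap and lands close enough to the solution that only a handful of Newton steps are needed.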

  6. Robotic refueling machine

    International Nuclear Information System (INIS)

    Challberg, R.C.; Jones, C.R.

    1996-01-01

    One of the longest critical-path operations performed during the outage is removing and replacing the fuel. A design is currently under development for a refueling machine which would allow faster, fully automated operation and would also allow the handling of two fuel assemblies at the same time. This design is different from current designs (a) because of its lighter weight, making increased acceleration and speed possible, (b) because of its control system, which makes locating the fuel assembly more dependable and faster, and (c) because of its dual handling system, allowing simultaneous fuel movements. The new design uses two robotic arms to span a designated area of the vessel and the fuel storage area. Attached to the end of each robotic arm is a lightweight telescoping mast with a pendant attached to the end of each mast. The pendant acts as the base unit, allowing attachment of any number of end effectors depending on the servicing or inspection operation. Housed within the pendant are two television cameras used for the positioning control system. The control system is adapted from the robotics field using the technology known as machine vision, which provides both object- and character-recognition techniques to enable relative position control rather than absolute position control as in past designs. The pendant also contains thrusters that are used for fast, short-distance, precise positioning. The new refueling machine system design is capable of a complete offload and reload of an 872-element core in about 5.3 days, compared to 13 days for a conventional system.

  7. Cloud Droplet Size and Liquid Water Path Retrievals From Zenith Radiance Measurements: Examples From the Atmospheric Radiation Measurement Program and the Aerosol Robotic Network

    Science.gov (United States)

    Chiu, J. C.; Marshak, A.; Huang, C.-H.; Varnai, T.; Hogan, R. J.; Giles, D. M.; Holben, B. N.; Knyazikhin, Y.; O'Connor, E. J.; Wiscombe, W. J.

    2012-01-01

    The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g/sq m and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g/sq m at the ARM Oklahoma site during 2007-2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
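
    The two-wavelength principle described above can be written schematically as a least-squares inversion. The notation, the choice of 870 nm as the non-absorbing wavelength, and the simple liquid water path relation are assumptions for illustration, not the paper's exact formulation:

```latex
% Schematic of the two-wavelength retrieval (assumed notation): observed
% zenith transmittances are matched against a forward model parameterized
% by cloud optical depth tau and effective radius r_e.
\[
(\hat{\tau}, \hat{r}_e) = \arg\min_{\tau,\; r_e}
\sum_{\lambda \in \{870\,\mathrm{nm},\,1640\,\mathrm{nm}\}}
\bigl[ T_{\mathrm{obs}}(\lambda) - T_{\mathrm{model}}(\lambda;\,\tau, r_e) \bigr]^2,
\qquad
\mathrm{LWP} \approx \tfrac{2}{3}\,\rho_w\,\tau\,\hat{r}_e .
\]
```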

  8. Automatic Battery Swap System for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2012-12-01

    Full Text Available This paper presents the design and implementation of an automatic battery swap system for the prolonged activities of home robots. A battery swap station is proposed to implement battery off-line recharging and on-line exchanging functions. It consists of a loading and unloading mechanism, a shifting mechanism, a locking device and a shell. The home robot is a palm-sized wheeled robot with an onboard camera and a removable battery case in the front. It communicates with the battery swap station wirelessly through ZigBee. The influences of battery-case deflection and robot docking deflection on the battery swap operations have been investigated. The experimental results show that it takes an average time of 84.2 s to complete the battery swap operations. The home robot does not have to wait several hours for the batteries to be fully charged. The proposed battery swap system is proved to be efficient in home robot applications that need the robots to work continuously over a long period.

  9. The Robots for Nuclear Power Plants

    International Nuclear Information System (INIS)

    Choi, Chang Hwan; Kim, Seung Ho; Kim, Chang Hoi; Seo, Yong Chil; Shin, Ho Cheol; Lee, Sung Uk; Jung, Kyung Min; Jung, Seung Ho; Choi, Young So

    2005-01-01

    Nuclear energy has become a major energy source worldwide despite the ongoing environmental and safety debate. In order to cope with the issues related to nuclear power plants, uncertain human factors need to be minimized by automating the inspection and maintenance work done by human workers. The demands for robotic systems in the nuclear industry have been growing: to ensure the safety of nuclear facilities, to detect unusual conditions early through inspection, to protect human workers from irradiation, and to maintain facilities efficiently. NRL (Nuclear Robotics Laboratory) in KAERI has been developing robotic systems to inspect and maintain nuclear power plants instead of human workers for over thirteen years. In order to carry out useful tasks, a nuclear robot generally requires the following. First, the robot should be protected against radiation. Second, a mobile system is required to access the work place. Third, a manipulator of some kind is required to complete tasks such as handling radioactive wastes and other contaminated objects. Fourth, a sensing system, including cameras, ultrasonic sensors, temperature sensors, dosimetry equipment, etc., is required for operators to observe the work place. Lastly, a control system is needed to help the operators control the robots. The control system generally consists of a supervisory control part and a remote control part. The supervisory control part consists of a man-machine interface such as 3D graphics and a joystick. The remote control part manages the robot so that it follows the operator's commands

  10. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    Full Text Available This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. The object recognition is based only on edge pixels, to speed up the computation. The edge pixels are detected by intelligently scanning a small, evenly distributed subset of the image pixels. A fast method for line and circle centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points are visible in the robot's camera view, the three rotation angles are adjusted to achieve a precise localization of robots and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. The experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
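    As an illustration of the sparse-scan idea, the sketch below segments a colour-coded image in OpenCV's HLS space and collects colour edges from every eighth row only. The colour thresholds and scan step are placeholders, not the paper's tuned values or its modified colour model.

```python
import cv2
import numpy as np

# Sparse colour-edge scan: only a subset of rows is inspected, which is the
# idea behind scanning a small part of the image pixels.
def find_colour_edges(bgr, step=8):
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
    # Placeholder mask for, e.g., an orange ball in a colour-coded field.
    mask = cv2.inRange(hls, (5, 60, 120), (20, 200, 255))
    edges = []
    h, w = mask.shape
    for y in range((step // 2), h, step):      # scan every `step`-th row only
        row = mask[y]
        # A transition 0 -> 255 or 255 -> 0 marks a colour edge on this row.
        change = np.flatnonzero(np.diff(row.astype(np.int16)))
        edges.extend((int(x), y) for x in change)
    return edges  # sparse edge pixels for later line/circle-centre fitting
```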

  11. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    Directory of Open Access Journals (Sweden)

    Hooman Samani

    2013-12-01

    Full Text Available In this paper, we investigate the concept of “Cultural Robotics” with regard to the evolution of social robots into cultural robots in the 21st century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of robotics developers and the learning ability of current robots, cultural attributes are in the process of being formed, which would define the new concept of cultural robotics. Given the importance of the embodiment of robots in the sense of presence, the influence of robots on communication culture is anticipated. The sustainability of a robotics culture based on diversity across cultural communities and acceptance modalities is explored, in order to anticipate the creation of different attributes of culture between robots and humans in the future.

  12. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the robot vision and operation system for the Nuclear Advanced Robot. The robot vision consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in a mock-up of nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can be operated easily by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  13. Quantitative analysis of distributed control paradigms of robot swarms

    DEFF Research Database (Denmark)

    Ngo, Trung Dung

    2010-01-01

    Given a task of designing controllers for mobile robots in swarms, one might wonder which distributed control paradigm should be selected. Until now, paradigms of robot controllers have been within either behaviour based control or neural network based control, which have been recognized as two mainstreams of controller design for mobile robots. However, in swarm robotics, it is not clear how to determine control paradigms. In this paper we study the two control paradigms with various experiments on swarm aggregation. First, we introduce the two control paradigms for mobile robots. Second, we describe the physical and simulated robots, experiment scenario, and experiment setup. Third, we present our robot controllers based on behaviour based and neural network based paradigms. Fourth, we graphically show their experiment results and quantitatively analyse the results in comparison of the two paradigms.

  14. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  15. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  16. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for the acquisition and processing of nuclear medicine images, is described in detail. Microcomputer systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software package has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain nuclear medicine images of optimal clinical quality, to increase acquisition and processing efficiency, and to reduce the steps involved in each exam.

  17. Development of an unmanned agricultural robotics system for measuring crop conditions for precision aerial application

    Science.gov (United States)

    An Unmanned Agricultural Robotics System (UARS) was acquired, rebuilt with the desired hardware, and operated in both classroom and field settings. The UARS includes a crop height sensor, a crop canopy analyzer, a normalized difference vegetative index (NDVI) sensor, a multispectral camera, and a hyperspectral radiometer...

  18. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving objects' metric information. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras keep the same vertical level. However, cameras may be displaced due to severe robot operating conditions or other circumstances. This paper presents our experimental approach to the problem of mobile robot stereo vision system calibration under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo system cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
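    Both calibration pipelines are available as standard OpenCV calls, so the comparison can be reproduced along the following lines; the board dimensions, square size, and image list below are placeholders, not the paper's experimental settings.

```python
import cv2
import numpy as np

# Calibrate from a list of grayscale board images using either a checkerboard
# or a circle grid; returns the RMS reprojection error for comparison.
def calibrate(images, pattern=(9, 6), square=0.025, use_circles=False):
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for fname in images:
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        if use_circles:
            found, centers = cv2.findCirclesGrid(gray, pattern)
        else:
            found, centers = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(centers)
    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return rms, K, dist   # reprojection error, intrinsics, distortion

# e.g. compare: calibrate(files, use_circles=False) vs. use_circles=True
```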

  19. The conceptual design of the sensing system for patrolling and inspecting a nuclear facility by the intelligent robot

    International Nuclear Information System (INIS)

    Ebihara, Ken-ichi

    1993-11-01

    Supposing that an intelligent robot, instead of a human worker, patrols and inspects nuclear facilities, it is indispensable for such a robot to be capable of moving while avoiding obstacles and of recognizing various abnormal conditions, carrying out ordered work based on information from sensors mounted on the robot. The robots presently in practical use in nuclear facilities, however, have limited capabilities, such as identifying a few specific abnormal conditions using data detected by specific sensors. Hence, a conceptual design of a sensor-fusion-based system, named the 'sensing system', has been carried out to collect the various kinds of information required for patrol and inspection. This sensing system combines a visual sensor, which consists of a monocular camera and a range finder based on the active stereopsis method, with olfactory, acoustic and dose sensors. This report describes the hardware configuration and the software functions for processing sensed data. An idea of sensor fusion and preliminary considerations on applying a neural network to image data processing are also described. (author)

  20. Robotic Surgery

    Science.gov (United States)

    Childress, Vincent W.

    2007-01-01

    The medical field has many uses for automated and remote-controlled technology. For example, if a tissue sample is only handled in the laboratory by a robotic handling system, then it will never come into contact with a human. Such a system not only helps to automate the medical testing process, but it also helps to reduce the chances of…

  1. Laws on Robots, Laws by Robots, Laws in Robots : Regulating Robot Behaviour by Design

    NARCIS (Netherlands)

    Leenes, R.E.; Lucivero, F.

    2015-01-01

    Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s three laws of robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the three laws of robotics and their role in shaping visions of

  2. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone

    Directory of Open Access Journals (Sweden)

    Phong Ha Nguyen

    2018-05-01

    Full Text Available Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement unit, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image to predict a marker’s location by visible light camera sensor on drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers both using and not using convolutional neural network in terms of both accuracy and processing time.

  3. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone.

    Science.gov (United States)

    Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung

    2018-05-24

    Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement unit, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image to predict a marker's location by visible light camera sensor on drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers both using and not using convolutional neural network in terms of both accuracy and processing time.

  4. Live video monitoring robot controlled by web over internet

    Science.gov (United States)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot, and they have huge applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times, where humans are not efficient. Generally a robot is a mix of electronic, electrical and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this 'robovision' helps in monitoring security systems and can also reach places where the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web controls for the robot to move left, right, forward and back while streaming video. As we move to the smart environment, or IoT (Internet of Things), of smart devices, the system developed here connects over the internet and can be operated with a smartphone using a web browser. The Raspberry Pi Model B chip acts as the heart of this robot system; the necessary motors and a Raspberry Pi camera module for surveillance are connected to the Raspberry Pi.
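    A minimal sketch of such a web-controlled robot is shown below, assuming Flask for the web layer and the RPi.GPIO library for the motor pins; the pin numbers, routes, and motor-driver wiring are our assumptions (the paper does not specify its software stack), and the video stream (e.g., an MJPEG feed served alongside) is omitted for brevity. It runs on a Raspberry Pi.

```python
# Hypothetical web-controlled robot sketch for a Raspberry Pi.
from flask import Flask
import RPi.GPIO as GPIO

PINS = {"left": (17, 18), "right": (22, 23)}   # invented motor-driver pins
GPIO.setmode(GPIO.BCM)
for a, b in PINS.values():
    GPIO.setup(a, GPIO.OUT)
    GPIO.setup(b, GPIO.OUT)

def drive(left, right):                         # +1 forward, -1 back, 0 stop
    for (a, b), v in zip(PINS.values(), (left, right)):
        GPIO.output(a, v > 0)
        GPIO.output(b, v < 0)

app = Flask(__name__)

@app.route("/move/<direction>")
def move(direction):
    cmd = {"front": (1, 1), "back": (-1, -1),
           "left": (-1, 1), "right": (1, -1), "stop": (0, 0)}
    drive(*cmd.get(direction, (0, 0)))
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)          # reachable from a phone browser
```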

  5. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

    Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent in video captured from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, the implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection-kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches to human action recognition performed on public (non-robot-based) datasets, including a novel approach of our own based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performance, especially given the adverse test conditions of the robot relative to those of a fixed camera.
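    The histogram intersection kernel has a simple closed form, K(h, h') = Σ_k min(h_k, h'_k), and scikit-learn's SVC accepts it as a custom kernel. The sketch below demonstrates this mechanic with random histograms standing in for dense-trajectory bag-of-words features; it is not the authors' trained model.

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(X, Y):
    # Gram matrix of histogram intersections:
    # K[i, j] = sum_k min(X[i, k], Y[j, k])
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

# Random L1-normalized histograms as stand-ins for codeword histograms.
rng = np.random.default_rng(0)
X_train = rng.random((40, 200)); X_train /= X_train.sum(axis=1, keepdims=True)
y_train = rng.integers(0, 4, 40)          # four hypothetical activity classes

clf = SVC(kernel=intersection_kernel).fit(X_train, y_train)
X_test = rng.random((5, 200)); X_test /= X_test.sum(axis=1, keepdims=True)
print(clf.predict(X_test))
```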

  6. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    Science.gov (United States)

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  7. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts to create data sets for computer vision. Since the use of this innovation has a significant effect on the data set quality, we here present it in some detail, to better aid others...

  8. Analyzing Cyber-Physical Threats on Robotic Platforms.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  9. Analyzing Cyber-Physical Threats on Robotic Platforms †

    Science.gov (United States)

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  10. Analyzing Cyber-Physical Threats on Robotic Platforms

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2018-05-01

    Full Text Available Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  11. Automatic Detection of Compensation During Robotic Stroke Rehabilitation Therapy.

    Science.gov (United States)

    Zhi, Ying Xuan; Lukasik, Michelle; Li, Michael H; Dolatabadi, Elham; Wang, Rosalie H; Taati, Babak

    2018-01-01

    Robotic stroke rehabilitation therapy can greatly increase the efficiency of therapy delivery. However, when left unsupervised, users often compensate for limitations in affected muscles and joints by recruiting unaffected muscles and joints, leading to undesirable rehabilitation outcomes. This paper aims to develop a computer vision system that augments robotic stroke rehabilitation therapy by automatically detecting such compensatory motions. Nine stroke survivors and ten healthy adults participated in this study. All participants completed scripted motions using a table-top rehabilitation robot. The healthy participants also simulated three types of compensatory motions. The 3-D trajectories of upper body joint positions tracked over time were used for multiclass classification of postures. A support vector machine (SVM) classifier detected lean-forward compensation from healthy participants with excellent accuracy (AUC = 0.98, F1 = 0.82), followed by trunk-rotation compensation (AUC = 0.77, F1 = 0.57). Shoulder-elevation compensation was not well detected (AUC = 0.66, F1 = 0.07). A recurrent neural network (RNN) classifier, which encodes the temporal dependency of video frames, obtained similar results. In contrast, F1-scores in stroke survivors were low for all three compensations while using RNN: lean-forward compensation (AUC = 0.77, F1 = 0.17), trunk-rotation compensation (AUC = 0.81, F1 = 0.27), and shoulder-elevation compensation (AUC = 0.27, F1 = 0.07). The result was similar while using SVM. To improve detection accuracy for stroke survivors, future work should focus on predefining the range of motion, direct camera placement, delivering exercise intensity tantamount to that of real stroke therapies, adjusting seat height, and recording full therapy sessions.
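    As a rough sketch of the classification setup, each sample can be a flattened window of tracked 3-D joint positions fed to a multiclass SVM, with per-class F1 reported as in the paper. The data below are random stand-ins and the feature layout is our assumption; the study's actual preprocessing and RNN variant are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Toy stand-in: each sample is a window of 3-D upper-body joint positions,
# labelled with one of four postures (no compensation, lean-forward,
# trunk-rotation, shoulder-elevation).
rng = np.random.default_rng(1)
n, frames, joints = 200, 30, 10
X = rng.normal(size=(n, frames * joints * 3))   # flattened (x, y, z) tracks
y = rng.integers(0, 4, n)

split = 150
clf = SVC(kernel="rbf", C=1.0).fit(X[:split], y[:split])
pred = clf.predict(X[split:])
print(f1_score(y[split:], pred, average=None))  # per-class F1, as in the paper
```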

  12. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    Science.gov (United States)

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data.
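    For reference, the sketch below shows the standard four-bucket AMCW demodulation such a simulator must reproduce (one common sign convention), plus a phasor illustration of why a single-bounce multipath return biases the measured phase; the modulation frequency is a typical value, not the paper's hardware setting.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # modulation frequency, Hz (typical ToF value)

def tof_depth(a0, a1, a2, a3):
    """Four correlation samples at 0/90/180/270 degrees -> depth, amplitude."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    amplitude = 0.5 * np.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    depth = C * phase / (4 * np.pi * F_MOD)   # unambiguous up to c / (2 f_mod)
    return depth, amplitude

# A multipath return adds a second phasor; the measured phase is then biased:
direct = 1.0 * np.exp(1j * 2.0)               # direct path, phase 2.0 rad
bounce = 0.3 * np.exp(1j * 2.8)               # single-bounce interference
print(np.angle(direct + bounce))              # biased phase, no longer 2.0
```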

  13. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  14. Service Oriented Robotic Architecture for Space Robotics: Design, Testing, and Lessons Learned

    Science.gov (United States)

    Fluckiger, Lorenzo Jean Marc E; Utz, Hans Heinrich

    2013-01-01

    This paper presents the lessons learned from six years of experiments with planetary rover prototypes running the Service Oriented Robotic Architecture (SORA) developed by the Intelligent Robotics Group (IRG) at the NASA Ames Research Center. SORA relies on proven software engineering methods and technologies applied to space robotics. Based on a Service Oriented Architecture and robust middleware, SORA encompasses on-board robot control and a full suite of software tools necessary for remotely operated exploration missions. SORA has been field tested in numerous scenarios of robotic lunar and planetary exploration. The experiments conducted by IRG with SORA exercise a large set of the constraints encountered in space applications: remote robotic assets, flight relevant science instruments, distributed operations, high network latencies and unreliable or intermittent communication links. In this paper, we present the results of these field tests in regard to the developed architecture, and discuss its benefits and limitations.

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needed to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation had to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
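    A sketch of such a coordinate transformation with bilinear sub-pixel interpolation is shown below, assuming a log-polar layout for the retina-like pixels; the ring/sector geometry and output size are invented for illustration and are not the actual sensor's layout.

```python
import numpy as np

def bilinear(img, x, y):
    """Sub-pixel sample of img at fractional (x, y) via bilinear weights."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

def logpolar_to_cartesian(polar, size=256):
    """Map polar[ring, sector] samples onto a Cartesian display grid."""
    rings, sectors = polar.shape
    out = np.zeros((size, size))
    r_max = size / 2.0
    for v in range(size):
        for u in range(size):
            x, y = u - r_max, v - r_max
            r = np.hypot(x, y)
            if 1.0 <= r < r_max:
                # log radius -> ring index, angle -> sector index
                ri = (rings - 1) * np.log(r) / np.log(r_max)
                ti = (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi) * (sectors - 1)
                out[v, u] = bilinear(polar, ti, ri)
    return out

print(logpolar_to_cartesian(np.random.rand(64, 128)).shape)  # (256, 256)
```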

  16. Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting

    Science.gov (United States)

    Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.

    2017-12-01

    Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation and efficiency enhancement in the agricultural process has not yet become widespread. This research proposes a low-cost agricultural robot architecture. The robot can help farmers survey their farm area, treat the tomato plants and harvest the ripe tomatoes. Communication between farmer and robot is carried over a wireless radio link to reach a wide area (120 m radius). The radio link is combined with Bluetooth to simplify communication between the robot and the farmer's Android smartphone. The robot is equipped with a camera, so the farmers can survey the farm situation in real time through a 7-inch display. The farmers control the robot and arm movement through a user interface on an Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right and turn left) and to cut spotty leaves or harvest ripe tomatoes.

  17. ON TRAVERSABILITY COST EVALUATION FROM PROPRIOCEPTIVE SENSING FOR A CRAWLING ROBOT

    Directory of Open Access Journals (Sweden)

    Jakub Mrva

    2015-12-01

    Full Text Available Traversability characteristics of the robot's working environment are crucial for planning an efficient path for a robot operating in rough unstructured areas. In the literature, approaches for wheeled or tracked robots can be found, but relatively little attention is given to walking multi-legged robots. Moreover, the existing approaches for terrain traversability assessment tend to focus on gathering key features from a terrain model acquired from range data or camera images, only occasionally supplemented with proprioceptive sensing that expresses the interaction of the robot with the terrain. This paper addresses the problem of traversability cost evaluation based on proprioceptive sensing for a hexapod walking robot while optimizing different criteria. We present several methods of evaluating the robot-terrain interaction that can be used as a cost function for an assessment of the robot motion that can be utilized in high-level path-planning algorithms.
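    One plausible shape for such a proprioceptive cost, combining average servo effort, torque variability, and body-roll excursions recorded while crossing a terrain segment, is sketched below; the particular terms and weights are our assumptions, not the paper's evaluated methods.

```python
import numpy as np

# Illustrative proprioceptive traversability cost (invented weighting).
def traversability_cost(joint_torques, body_roll, w=(1.0, 2.0, 1.5)):
    """joint_torques: (samples, joints) array; body_roll: (samples,) in rad."""
    effort = np.abs(joint_torques).mean()          # average actuation effort
    roughness = joint_torques.std(axis=0).mean()   # torque variability
    instability = np.abs(body_roll).max()          # worst-case body roll
    return w[0] * effort + w[1] * roughness + w[2] * instability

# e.g. feed this per-segment cost into a high-level planner's edge weights
torques = np.random.default_rng(2).normal(0.4, 0.1, size=(500, 18))
roll = np.random.default_rng(3).normal(0.0, 0.05, size=500)
print(traversability_cost(torques, roll))
```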

  18. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    Science.gov (United States)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. The elderly at home or in nursing homes need attention at all times, which requires staff to watch them continuously; the purpose of our system is to help those staff members and to improve this situation. A host computer controls the monorail robot to go in front of the elderly person, using the images taken by cameras on the ceiling. A CCD camera is mounted on the monorail robot to take pictures of the person's facial expression and movements. The robot sends the images to a host computer that checks whether anything unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly, keeping their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
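    The simplest form of such a calibration maps the person's pixel coordinate in the ceiling-camera image linearly to a rail position, using two surveyed reference stops; the sketch below illustrates the idea with invented values, and the paper's actual method may differ.

```python
# (pixel_x, rail_position_mm) for two hypothetical surveyed reference stops:
P1, P2 = (120.0, 500.0), (560.0, 3200.0)

def rail_target(pixel_x):
    """Linear pixel-to-rail mapping from the two reference points."""
    scale = (P2[1] - P1[1]) / (P2[0] - P1[0])   # mm of rail travel per pixel
    return P1[1] + scale * (pixel_x - P1[0])

# Rail position that should centre the camera on a person seen at x = 340:
print(rail_target(340.0))
```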

  19. Micro Robotics Lab

    Data.gov (United States)

    Federal Laboratory Consortium — Our research is focused on the challenges of engineering robotic systems down to sub-millimeter size scales. We work both on small mobile robots (robotic insects for...

  20. Robots of the Future

    Indian Academy of Sciences (India)

    two main types of robots: industrial robots, and autonomous robots. … position); it also has a virtual CPU with two stacks and three registers that hold 32-bit strings. Each item … just like we can aggregate images, text, and information from.