WorldWideScience

Sample records for hand-robot interfaces allowing

  1. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller provides a ‘connect-and-play’ service that lets users control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robots. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into the corresponding control commands according to their characteristic curves. A vibration motor provides vibrotactile feedback to the user when an improper operation is performed. The performance of the proposed hand motion-based interface was compared with that of a traditional keyboard-and-mouse interface in robot navigation experiments. The experimental results show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller, its precision is 15.4% higher, and its execution time is 24.7% lower. These results indicate that the proposed hand motion-based interface is more efficient and flexible.
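    The abstract does not give the classification algorithm in detail; as a rough illustration of how characteristic acceleration curves can be mapped to navigation commands, the sketch below classifies a three-axis sample by its dominant axis. The thresholds and the command set are assumptions, not values from the paper.

```python
# Simplified sketch: classify a 3-axis acceleration sample into a
# navigation command by its dominant axis and sign. Thresholds and the
# gesture-to-command table are illustrative assumptions.

def classify_gesture(ax, ay, az, threshold=0.5):
    """Map an acceleration sample (gravity removed, in g) to a command."""
    axes = {"x": ax, "y": ay, "z": az}
    # Pick the axis with the largest absolute acceleration.
    dominant = max(axes, key=lambda k: abs(axes[k]))
    value = axes[dominant]
    if abs(value) < threshold:
        return "stop"                      # no deliberate motion detected
    commands = {
        ("x", True): "forward",   ("x", False): "backward",
        ("y", True): "turn-left", ("y", False): "turn-right",
        ("z", True): "speed-up",  ("z", False): "slow-down",
    }
    return commands[(dominant, value > 0)]
```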

  2. Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.

    Science.gov (United States)

    McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A

    2017-06-28

To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the two major core designs, end-effector and exoskeleton, are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.

  3. User interface for a tele-operated robotic hand system

    Science.gov (United States)

    Crawford, Anthony L

    2015-03-24

    Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
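    The angle computation described above can be illustrated with a standard inverse-kinematics sketch. For clarity this reduces the finger to a planar two-link case (flexion only, ignoring the abduction degree of freedom); the link lengths would come from the calibration step, and the values in the example are assumptions.

```python
import math

# Planar two-link inverse kinematics: a simplified stand-in for the
# "two link serial linkage" angle determination described in the patent.
# Link lengths l1, l2 would come from the calibration of the user's finger.

def finger_ik(x, y, l1, l2):
    """Return (theta1, theta2) in radians reaching fingertip target (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the distal joint angle.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 - 1e-9 <= cos_t2 <= 1.0 + 1e-9:
        raise ValueError("target out of reach for the given link lengths")
    cos_t2 = max(-1.0, min(1.0, cos_t2))    # guard against rounding error
    theta2 = math.acos(cos_t2)              # one of the two mirror solutions
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```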

  5. NONLINEAR FORCE PROFILE USED TO INCREASE THE PERFORMANCE OF A HAPTIC USER INTERFACE FOR TELEOPERATING A ROBOTIC HAND

    Energy Technology Data Exchange (ETDEWEB)

    Anthony L. Crawford

    2012-07-01

Natural movements and force feedback are important elements in using teleoperated equipment if complex and speedy manipulation tasks are to be accomplished in hazardous environments such as hot cells, glove boxes, decommissioning, explosives disarmament, and space. The research associated with this paper hypothesizes that a user interface and complementary radiation-compatible robotic hand that integrate the human hand’s anthropometric properties, speed capability, nonlinear strength profile, reduction of active degrees of freedom during the transition from manipulation to grasping, and just-noticeable-difference force sensation characteristics will enhance a user’s teleoperation performance. The main contribution of this research is that a system concisely integrating all of these factors has yet to be developed, and furthermore has yet to be applied to hazardous environments such as those referenced above. In fact, the most prominent slave-manipulator teleoperation technology in use today is based on a design patented in 1945 (Patent 2632574) [1]. Robotic hand/user interface systems of similar function to the one developed in this research limit their design input requirements, in the best case, to complementing the hand’s anthropometric properties and speed capability with a linearly scaled force-application relationship (e.g. the robotic force is a constant 4 times that of the user). In this paper a nonlinear relationship between the forces experienced at the user interface and the robotic hand was devised, based on how manipulation and grasping activities differ for the human hand. The results show that such a relationship, when subjected to a manipulation task and a grasping task, produces increased performance compared to the
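    As a minimal sketch of the idea, the linear relationship the abstract cites (robotic force a constant 4 times the user's) can be contrasted with a piecewise nonlinear profile that scales gently in the low-force manipulation range and steeply in the high-force grasping range. The breakpoint and gains below are illustrative assumptions, not the profile from the paper.

```python
# Contrast between a constant-gain force relationship and an assumed
# piecewise nonlinear profile. Breakpoint and gain values are invented
# for illustration; the paper's actual profile is not given in the abstract.

def linear_profile(user_force, gain=4.0):
    """Conventional scaling: robotic force is a constant multiple."""
    return gain * user_force

def nonlinear_profile(user_force, breakpoint=5.0, low_gain=2.0, high_gain=6.0):
    """Fine control below the breakpoint (manipulation), strong above (grasp)."""
    if user_force <= breakpoint:
        return low_gain * user_force
    # Continuous at the breakpoint, then steeper for grasping-range forces.
    return low_gain * breakpoint + high_gain * (user_force - breakpoint)
```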

  6. Robotic hand and fingers

    Science.gov (United States)

    Salisbury, Curt Michael; Dullea, Kevin J.

    2017-06-06

Technologies pertaining to a robotic hand are described herein. The robotic hand includes one or more fingers releasably attached to a robotic hand frame. The fingers can abduct and adduct as well as flex and extend. The fingers are releasably attached to the frame by magnets that allow the fingers to detach from the frame when excess force is applied to them.

  8. Robotic hand project

    OpenAIRE

    Karaçizmeli, Cengiz; Çakır, Gökçe; Tükel, Dilek

    2014-01-01

In this work, a mechatronics-based robotic hand is controlled using position data taken from a glove that has flex sensors mounted to capture the finger bending of the human hand. The angular movements of the human hand’s fingers are perceived and processed by a microcontroller, and the robotic hand is controlled by actuating servo motors. During the tests performed, the robotic hand was seen to simulate the movements of the human hand wearing the glove. This robotic hand can be used not only...

  9. Interfacing robotics with plutonium fuel fabrication

    International Nuclear Information System (INIS)

    Bowen, W.W.; Moore, F.W.

    1986-01-01

Interfacing robotic systems with nuclear fuel fabrication processes presented a number of challenges. The system not only interfaces with the fuel process, but must also interface with nuclear containment, radiation control boundaries, criticality control restrictions, and numerous other safety systems required in a fuel fabrication plant. The robotic system must be designed to allow operator interface during maintenance and recovery from an upset as well as during normal operations.

  10. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the algorithm of image processing implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and balance between complexity and feasibility for real-time applications.
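    A minimal sketch of the kind of mapping such an interface performs, assuming tracked pixel offsets from a neutral position are converted to linear and angular velocity commands; the gains, dead zone, and saturation limits are invented for illustration.

```python
# Map the tracked displacement of a body region (head or hand) from a
# neutral position into (linear, angular) velocity commands for a mobile
# robot. Gains, dead zone, and limits are assumed values.

def displacement_to_velocity(dx, dy, k_lin=0.01, k_ang=0.02,
                             dead_zone=5.0, v_max=0.5, w_max=1.0):
    """dx, dy: pixel offsets from the neutral position; returns (v, w)."""
    def shape(offset, gain, limit):
        if abs(offset) < dead_zone:       # ignore small involuntary motions
            return 0.0
        return max(-limit, min(limit, gain * offset))
    v = shape(dy, k_lin, v_max)           # vertical offset -> forward speed
    w = shape(dx, k_ang, w_max)           # horizontal offset -> turn rate
    return v, w
```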

  11. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator’s hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements.

  12. Robotically enhanced rubber hand illusion.

    Science.gov (United States)

    Arata, Jumpei; Hattori, Masashi; Ichikawa, Shohei; Sakaguchi, Masamichi

    2014-01-01

    The rubber hand illusion is a well-known multisensory illusion. In brief, watching a rubber hand being stroked by a paintbrush while one's own unseen hand is synchronously stroked causes the rubber hand to be attributed to one's own body and to "feel like it's my hand." The rubber hand illusion is thought to be triggered by the synchronized tactile stimulation of both the subject's hand and the fake hand. To extend the conventional rubber hand illusion, we introduce robotic technology in the form of a master-slave telemanipulator. The developed one degree-of-freedom master-slave system consists of an exoskeleton master equipped with an optical encoder that is worn on the subject's index finger and a motor-actuated index finger on the rubber hand, which allows the subject to perform unilateral telemanipulation. The moving rubber hand illusion has been studied by several researchers in the past with mechanically connected rigs between the subject's body and the fake limb. The robotic instruments let us investigate the moving rubber hand illusion with less constraints, thus behaving closer to the classic rubber hand illusion. In addition, the temporal delay between the body and the fake limb can be precisely manipulated. The experimental results revealed that the robotic instruments significantly enhance the rubber hand illusion. The time delay is significantly correlated with the effect of the multisensory illusion, and the effect significantly decreased at time delays over 100 ms. These findings can potentially contribute to the investigations of neural mechanisms in the field of neuroscience and of master-slave systems in the field of robotics.
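    The precisely manipulated temporal delay described above can be sketched as a simple FIFO delay line between master samples and slave commands; the sample period and delay values below are illustrative, not the experiment's.

```python
from collections import deque

# Insert a fixed temporal delay between master (subject's finger encoder)
# and slave (motor-actuated rubber-hand finger): master joint angles pass
# through a FIFO whose length sets the delay. Sample period and delay
# values are illustrative assumptions.

class DelayLine:
    def __init__(self, delay_ms, sample_ms=10, initial=0.0):
        n = max(1, delay_ms // sample_ms)   # buffer length = delay / period
        self.buf = deque([initial] * n, maxlen=n)

    def step(self, master_angle):
        """Push the newest master sample, return the delayed slave command."""
        delayed = self.buf[0]
        self.buf.append(master_angle)       # oldest sample drops off the left
        return delayed
```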

  13. A user interface for mobile robotized tele-echography

    International Nuclear Information System (INIS)

    Triantafyllidis, G.A.; Thomos, N.; Canero, C.; Vieyres, P.; Strintzis, M.G.

    2006-01-01

Ultrasound imaging allows the evaluation of the degree of emergency of a patient. However, in many situations no experienced sonographer is available to perform such an echography. To cope with this issue, the OTELO project 'mObile Tele-Echography using an ultra-Light rObot' (OTELO) aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight, remotely controlled six-degree-of-freedom (DOF) robot. In this context, this paper deals with the user interface environment of the OTELO system, composed of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference for communicating with the paramedical assistant and the patient, and a virtual reality environment providing visual and haptic feedback to the expert, while capturing the expert's hand movements with a one-DOF hand-free input device.

  14. Double nerve intraneural interface implant on a human amputee for robotic hand control.

    Science.gov (United States)

    Rossini, Paolo M; Micera, Silvestro; Benvenuto, Antonella; Carpaneto, Jacopo; Cavallo, Giuseppe; Citi, Luca; Cipriani, Christian; Denaro, Luca; Denaro, Vincenzo; Di Pino, Giovanni; Ferreri, Florinda; Guglielmelli, Eugenio; Hoffmann, Klaus-Peter; Raspopovic, Stanisa; Rigosa, Jacopo; Rossini, Luca; Tombini, Mario; Dario, Paolo

    2010-05-01

The principle underlying this project is that, despite nervous reorganization following upper limb amputation, original pathways and CNS relays partially maintain their function and can be exploited for interfacing prostheses. The aim of this study is to evaluate a novel peripheral intraneural multielectrode for multi-movement prosthesis control and for sensory feedback, while assessing cortical reorganization following the re-acquired stream of data. Four intrafascicular longitudinal flexible multielectrodes (tf-LIFE4) were implanted in the median and ulnar nerves of an amputee; they reliably recorded output signals for 4 weeks. Artificial intelligence classifiers were used off-line to analyse LIFE signals recorded during three distinct hand movements performed under voluntary order. Real-time control of motor output was achieved for the three actions, and the off-line classifiers reached >85% correct classification of trials. Moreover, different types of current stimulation were determined to allow reproducible and localized hand/finger sensations. Cortical reorganization was observed via TMS in parallel with partial resolution of symptoms due to phantom-limb syndrome (PLS). tf-LIFE4s recorded output signals in human nerves for 4 weeks, though the efficacy of sensory stimulation decayed after 10 days. Recording from a number of fibres permitted a high percentage of distinct actions to be classified correctly. Reversal of plastic changes and alleviation of PLS represent corollary findings of potential therapeutic benefit. This study represents a breakthrough in robotic hand use in amputees. Copyright 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    Science.gov (United States)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
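    The decoding step can be sketched, under strong simplifying assumptions, as fusing per-channel EMG features with IMU readings and applying a nearest-centroid rule; the actual BioSleeve work used trained classifiers, and the features and centroids here are invented for illustration.

```python
import math

# Hedged sketch of gesture decoding: per-channel EMG features (mean
# absolute value) are concatenated with IMU features into one vector,
# and a nearest-centroid rule assigns the gesture. Features and centroid
# values are invented; they are not from the BioSleeve experiments.

def mean_abs(channel):
    """Mean absolute value of one EMG channel's samples."""
    return sum(abs(s) for s in channel) / len(channel)

def fuse_features(emg_channels, imu_sample):
    """Concatenate EMG features with raw IMU readings into one vector."""
    return [mean_abs(ch) for ch in emg_channels] + list(imu_sample)

def nearest_centroid(vec, centroids):
    """centroids: {gesture_name: feature_vector}; returns closest gesture."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda g: dist(vec, centroids[g]))
```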

  16. Multi-robot control interface

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-12-06

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  17. SPONGE ROBOTIC HAND DESIGN FOR PROSTHESES

    OpenAIRE

    Mine Seçkin

    2016-01-01

In this study, the materials used in robotic hands and fingers are surveyed from past to present, and a sponge robotic hand is designed for biomedical applications. The emergence of and necessity for soft robotic technology are explained, and a description of soft robots is given. Because of the importance of the hand in a person’s body, researchers have dealt with robotic hand prostheses for many centuries and developed many hand types. To best mimic the human limbs, softness of the hand is one of the importa...

  18. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

New low-cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual marks, camera calibration, or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or the ambient lighting changes. Furthermore, this method was designed to develop a human interface for remotely controlling domestic or industrial devices. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  19. Hand-held medical robots.

    Science.gov (United States)

    Payne, Christopher J; Yang, Guang-Zhong

    2014-08-01

    Medical robots have evolved from autonomous systems to tele-operated platforms and mechanically-grounded, cooperatively-controlled robots. Whilst these approaches have seen both commercial and clinical success, uptake of these robots remains moderate because of their high cost, large physical footprint and long setup times. More recently, researchers have moved toward developing hand-held robots that are completely ungrounded and manipulated by surgeons in free space, in a similar manner to how conventional instruments are handled. These devices provide specific functions that assist the surgeon in accomplishing tasks that are otherwise challenging with manual manipulation. Hand-held robots have the advantages of being compact and easily integrated into the normal surgical workflow since there is typically little or no setup time. Hand-held devices can also have a significantly reduced cost to healthcare providers as they do not necessitate the complex, multi degree-of-freedom linkages that grounded robots require. However, the development of such devices is faced with many technical challenges, including miniaturization, cost and sterility, control stability, inertial and gravity compensation and robust instrument tracking. This review presents the emerging technical trends in hand-held medical robots and future development opportunities for promoting their wider clinical uptake.

  20. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

“The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state-of-the-art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.

  1. Robotic approaches for rehabilitation of hand function after stroke.

    Science.gov (United States)

    Lum, Peter S; Godfrey, Sasha B; Brokaw, Elizabeth B; Holley, Rahsaan J; Nichols, Diane

    2012-11-01

    The goal of this review was to discuss the impairments in hand function after stroke and present previous work on robot-assisted approaches to movement neurorehabilitation. Robotic devices offer a unique training environment that may enhance outcomes beyond what is possible with conventional means. Robots apply forces to the hand, allowing completion of movements while preventing inappropriate movement patterns. Evidence from the literature is emerging that certain characteristics of the human-robot interaction are preferable. In light of this evidence, the robotic hand devices that have undergone clinical testing are reviewed, highlighting the authors' work in this area. Finally, suggestions for future work are offered. The ability to deliver therapy doses far higher than what has been previously tested is a potentially key advantage of robotic devices that needs further exploration. In particular, more efforts are needed to develop highly motivating home-based devices, which can increase access to high doses of assisted movement therapy.
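    One common form of the human-robot interaction the review describes, in which applied forces help complete movements, is an "assist-as-needed" law that pushes the hand back only when it strays outside a tolerance band around the desired trajectory. The sketch below is generic; gains and tolerances are assumptions, not parameters of any reviewed device.

```python
# Generic "assist-as-needed" force law: apply a restoring force only when
# the hand deviates from the desired trajectory by more than a tolerance.
# Stiffness and tolerance are illustrative assumptions.

def assist_force(actual, desired, tolerance=0.01, stiffness=50.0):
    """Return assisting force (N) toward the desired position (m)."""
    error = desired - actual
    if abs(error) <= tolerance:
        return 0.0                      # patient is on track: no assistance
    # Push back only the portion of the error outside the tolerance band.
    excess = abs(error) - tolerance
    return stiffness * excess * (1 if error > 0 else -1)
```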

  2. The future of robotics in hand surgery.

    Science.gov (United States)

    Liverneaux, P; Nectoux, E; Taleb, C

    2009-10-01

Robotics has spread over many surgical fields over the last decade: orthopaedic, cardiovascular, urologic, gynaecologic surgery and various other types of surgery. There are five different types of robots: passive, semiactive and active robots, telemanipulators, and simulators. Hand surgery is at a crossroad between orthopaedic surgery, plastic surgery and microsurgery; it has to deal with fixing all sorts of tissues from bone to soft tissues. To our knowledge, there is no paper focusing on potential clinical applications in this realm, even though robotics could be helpful for hand surgery. One must point out the numerous works on bone tissue with regard to passive robots (such as fluoroscopic navigation as an ancillary for percutaneous screwing in the scaphoid bone). Telemanipulators, especially in microsurgery, can improve surgical motion by suppressing physiological tremor through movement demultiplication (experimental vascular and nervous sutures have previously been published). To date, robotic technology has not yet become simple to use, cheap and flawless, but in the future it will probably be of great technical help, and may even allow remote-controlled surgery overseas.

  3. Soft Robotic Haptic Interface with Variable Stiffness for Rehabilitation of Neurologically Impaired Hand Function

    Directory of Open Access Journals (Sweden)

    Frederick Sebastian

    2017-12-01

The human hand comprises complex sensorimotor functions that can be impaired by neurological diseases and traumatic injuries. Effective rehabilitation can bring the impaired hand back to a functional state because of the plasticity of the central nervous system, which can relearn and remodel the lost synapses in the brain. Current rehabilitation therapies focus on strengthening motor skills such as grasping, and employ multiple objects of varying stiffness so that affected persons can experience a wide range of strength training. These devices have a limited range of stiffness due to the rigid mechanisms employed in their variable stiffness actuators. This paper presents a novel soft robotic haptic device for neuromuscular rehabilitation of the hand, designed to offer adjustable stiffness and usable in both clinical and home settings. The device eliminates the need for multiple objects by employing a pneumatic soft structure made with highly compliant materials that acts as the actuator of the haptic interface. It is made with interchangeable sleeves that can be customized to include materials of varying stiffness to increase the upper limit of the stiffness range. The device is fabricated using existing 3D printing technologies, and polymer molding and casting techniques, thus keeping the cost low and throughput high. The haptic interface is linked to either an open-loop system that allows for increased pressure during usage or a closed-loop system that provides pressure regulation in accordance with the stiffness the user specifies. A preliminary evaluation was performed to characterize the effective controllable region of variance in stiffness. It was found that the region of controllable stiffness was between points 3 and 7, where the stiffness appeared to plateau with each increase in pressure. The two control systems are tested to derive relationships between internal pressure, grasping force exertion on the surface, and displacement using
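    The closed-loop mode described above can be sketched as mapping a user-specified stiffness to a target internal pressure and regulating toward it; the linear stiffness-to-pressure model and the controller gain are assumptions for illustration, not the paper's characterization.

```python
# Sketch of closed-loop stiffness control: invert an assumed linear
# stiffness-vs-pressure model to get a target pressure, then drive the
# measured pressure toward it with a proportional update. The model
# coefficients (k0, k_per_kpa) and gain are invented values.

def target_pressure(stiffness, k0=0.5, k_per_kpa=0.05):
    """Invert an assumed linear stiffness model: k = k0 + k_per_kpa * p."""
    return (stiffness - k0) / k_per_kpa

def pressure_step(p_measured, p_target, gain=0.2):
    """One proportional-control update of the commanded pressure."""
    return p_measured + gain * (p_target - p_measured)

def regulate(p0, stiffness, steps=30):
    """Iterate the controller until the pressure settles near the target."""
    p = p0
    pt = target_pressure(stiffness)
    for _ in range(steps):
        p = pressure_step(p, pt)
    return p
```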

  4. Hand Rehabilitation Robotics on Poststroke Motor Recovery

    Science.gov (United States)

    2017-01-01

The recovery of hand function is one of the most challenging topics in stroke rehabilitation. Although robot-assisted therapy has achieved good results in recent decades, the development of hand rehabilitation robotics has lagged behind. Existing reviews of hand rehabilitation robotics focus either on mechanical design, from the designer's view, or on training paradigms, from the clinician's view, although these two parts are interconnected and both are important for designers and clinicians. In this review, we explore the current literature surrounding hand rehabilitation robots to help designers make better choices among varied components, thus promoting the application of hand rehabilitation robots. An overview of hand rehabilitation robotics is first provided, to give a general view of the relationships between subjects, rehabilitation theories, hand rehabilitation robots, and their evaluation. Secondly, the state of the art in hand rehabilitation robotics is introduced in detail according to the classification of the hardware system and the training paradigm. As a result, the discussion gives the arguments behind the classification and a comprehensive overview of hand rehabilitation robotics. PMID:29230081

  5. Controlling Kuka Industrial Robots : Flexible Communication Interface JOpenShowVar.

    OpenAIRE

    Sanfilippo, Filippo; Hatledal, Lars Ivar; Zhang, Houxiang; Fago, Massimiliano; Pettersen, Kristin Ytterstad

    2015-01-01

    JOpenShowVar is a Java open-source cross-platform communication interface to Kuka industrial robots. This novel interface allows for read-write use of the controlled manipulator variables and data structures. JOpenShowVar, which is compatible with all the Kuka industrial robots that use KUKA Robot Controller version 4 (KR C4) and KUKA Robot Controller version 2 (KR C2), runs as a client on a remote computer connected with the Kuka controller via TCP/IP. Even though only soft real-time applica...

  6. 3D Printed Robotic Hand

    Science.gov (United States)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand, excluding the actuators, was $167, significantly lower than that of other robotic hands, which have more complex assembly processes.

  7. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    Science.gov (United States)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard, and sometimes even unsafe, for workers to go to each plant and pluck fruit. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work, such as grape harvesting. In this work we propose a semi-automatic method to aid in harvesting fruit and hence increase productivity per man hour. A robotic arm fixed to a rover roams the orchard, and the user controls it remotely using a hand glove fitted with various sensors. These sensors can position the robotic arm remotely to harvest the fruit. In this paper we discuss the design of the sensor-fitted hand glove, the design of the 4-DoF robotic arm, and the wireless control interface. In addition, the setup of the system and its testing and evaluation under lab conditions are presented.
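
The abstract does not give the sensor-to-joint mapping, but the glove-to-arm step can be sketched minimally, assuming simple flex sensors read through an ADC and a per-sensor linear calibration. All constants below (ADC range, angle range) are illustrative, not taken from the paper.

```python
def adc_to_angle(raw, raw_min=200, raw_max=850, angle_min=0.0, angle_max=90.0):
    """Linearly map one flex-sensor ADC reading to a joint angle in degrees,
    clamping readings that fall outside the calibrated range."""
    raw = max(raw_min, min(raw_max, raw))
    span = raw_max - raw_min
    return angle_min + (raw - raw_min) * (angle_max - angle_min) / span

def glove_to_arm(readings, calib):
    """Map one reading per glove sensor to one angle per arm joint.
    calib holds a (raw_min, raw_max, angle_min, angle_max) tuple per sensor."""
    return [adc_to_angle(r, *c) for r, c in zip(readings, calib)]
```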

  8. A multi-DOF robotic exoskeleton interface for hand motion assistance.

    Science.gov (United States)

    Iqbal, Jamshed; Tsagarakis, Nikos G; Caldwell, Darwin G

    2011-01-01

    This paper outlines the design and development of a robotic-exoskeleton-based rehabilitation system. A portable, direct-driven, optimized hand exoskeleton system is proposed. The optimization procedure, primarily based on matching the exoskeleton and finger workspaces, guided the system design. The selection of actuators for the proposed system emerged from experiments with users of different hand sizes. Using commercial sensors, various hand parameters, e.g. maximum and average force levels, were measured. The results of these experiments were mapped directly to the mechanical design of the system. An under-actuated optimal mechanism was analysed, followed by the design and realization of the first prototype. The system provides both position and force feedback sensory information, which can improve the outcomes of a professional rehabilitation exercise.

  9. A user interface for mobile robotized tele-echography

    Energy Technology Data Exchange (ETDEWEB)

    Triantafyllidis, G.A. [Informatics and Telematics Institute ITI-CERTH, Thessaloniki (Greece)]. E-mail: gatrian@iti.gr; Thomos, N. [Informatics and Telematics Institute ITI-CERTH, Thessaloniki (Greece); Canero, C. [Computer Vision Center, UAB, Barcelona (Spain); Vieyres, P. [Laboratoire Vision and Robotique Universite d' Orleans, Bourges (France); Strintzis, M.G. [Informatics and Telematics Institute ITI-CERTH, Thessaloniki (Greece)

    2006-12-20

    Ultrasound imaging allows the degree of emergency of a patient to be evaluated. However, in many situations no experienced sonographer is available to perform such an echography. To cope with this issue, the OTELO project ('mObile Tele-Echography using an ultra-Light rObot') aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight, remotely controlled six degree-of-freedom (DOF) robot. In this context, this paper deals with the user interface environment of the OTELO system, composed of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference for communicating with the paramedical assistant and the patient, and a virtual reality environment providing visual and haptic feedback to the expert while capturing the expert's hand movements with a one-DOF hand-free input device.

  10. Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators

    Science.gov (United States)

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2013-08-01

    Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body-ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body-ownership illusions were induced by the correlation of sensory inputs such as vision, touch and proprioception. In the presented illusion, however, none of the mentioned sensations is integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.

  11. An EMG-Controlled Robotic Hand Exoskeleton for Bilateral Rehabilitation.

    Science.gov (United States)

    Leonardis, Daniele; Barsotti, Michele; Loconsole, Claudio; Solazzi, Massimiliano; Troncossi, Marco; Mazzotti, Claudio; Castelli, Vincenzo Parenti; Procopio, Caterina; Lamola, Giuseppe; Chisari, Carmelo; Bergamasco, Massimo; Frisoli, Antonio

    2015-01-01

    This paper presents a novel electromyography (EMG)-driven hand exoskeleton for bilateral rehabilitation of grasping in stroke. The developed hand exoskeleton was designed with two distinctive features: (a) kinematics with intrinsic adaptability to the patient's hand size, and (b) a free-palm and free-fingertip design, preserving the residual sensory-perceptual capability of touch during assisted grasping of real objects. In the envisaged bilateral training strategy, the patient's non-paretic hand acts as a guide for the paretic hand in grasping tasks. The grasping force exerted by the non-paretic hand is estimated in real time from EMG signals and then replicated as robotic assistance for the paretic hand by means of the hand exoskeleton. Estimating the grasping force through EMG allows rehabilitation exercises to be performed with any non-sensorized graspable object. This paper presents the system design, development, and experimental evaluation. Experiments were performed with a group of six healthy subjects and two chronic stroke patients executing robot-assisted grasping tasks. Results related to the performance in estimation and modulation of the robotic assistance, and to the outcomes of the pilot rehabilitation sessions with stroke patients, positively support the validity of the proposed approach for application in stroke rehabilitation.
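
The abstract describes real-time grasp-force estimation from EMG but not the estimator itself. A common minimal pipeline, full-wave rectification followed by a moving-average envelope and a linear scaling with saturation, can be sketched as below; the gain and force limit are illustrative values, not parameters from the paper.

```python
def emg_envelope(samples, window=5):
    """Full-wave rectify an EMG sample stream and smooth it with a causal
    moving average (the window grows until `window` samples are available)."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        out.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return out

def envelope_to_force(env_value, gain=20.0, max_force=40.0):
    """Scale one envelope sample to a grasp-force command in newtons,
    saturated at max_force to keep the assistance safe."""
    return min(max_force, gain * env_value)
```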

  12. Anthropomorphic Robot Hand And Teaching Glove

    Science.gov (United States)

    Engler, Charles D., Jr.

    1991-01-01

    Robotic forearm-and-hand assembly manipulates objects by performing wrist and hand motions with nearly human grasping ability and dexterity. Imitates hand motions of human operator who controls robot in real time by programming via exoskeletal "teaching glove". Telemanipulator systems based on this robotic-hand concept useful where humanlike dexterity required. Underwater, high-radiation, vacuum, hot, cold, toxic, or inhospitable environments potential application sites. Particularly suited to assisting astronauts on space station in safely executing unexpected tasks requiring greater dexterity than standard gripper.

  13. Robotic and user interface solutions for hazardous and remote applications

    International Nuclear Information System (INIS)

    Schempf, H.

    1997-01-01

    Carnegie Mellon University (CMU) is developing novel robotic and user interface systems to assist in the cleanup activities undertaken by the U.S. Department of Energy (DOE). Under DOE's EM-50 funding, administered by the Federal Energy Technology Center (FETC), CMU has developed a novel asbestos pipe-insulation abatement robot system, called BOA, and a novel generic user interface control and training console, dubbed RoboCon. The use of BOA will allow speedier abatement of the vast DOE piping networks clad with hazardous, contaminated asbestos insulation, reducing overall job costs by as much as 50%. RoboCon will allow the DOE to evaluate different remote and robotic system technologies from an overall man-machine performance standpoint, as well as provide a standardized platform for training site operators in the operation of remote and robotic equipment.

  14. Multi-fingered robotic hand

    Science.gov (United States)

    Ruoff, Carl F. (Inventor); Salisbury, Kenneth, Jr. (Inventor)

    1990-01-01

    A robotic hand is presented having a plurality of fingers, each having a plurality of joints pivotally connected one to the other. Actuators are connected at one end to an actuating and control mechanism mounted remotely from the hand and at the other end to the joints of the fingers for manipulating the fingers, passing externally of the robot manipulator arm between the hand and the actuating and control mechanism. The fingers include pulleys to route the actuators within the fingers. A cable tension sensing structure mounted on a portion of the hand is disclosed, as is the covering of the tip of each finger with a resilient, pliable, friction-enhancing surface.

  15. Electromyography data for non-invasive naturally-controlled robotic hand prostheses.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Castellini, Claudio; Caputo, Barbara; Hager, Anne-Gabrielle Mittaz; Elsig, Simone; Giatsidis, Giorgio; Bassetto, Franco; Müller, Henning

    2014-01-01

    Recent advances in rehabilitation robotics suggest that it may be possible for hand-amputated subjects to recover at least a significant part of the lost hand functionality. The control of robotic prosthetic hands using non-invasive techniques is still a challenge in real life: myoelectric prostheses give limited control capabilities, and the control is often unnatural and must be learned through long training times. Meanwhile, scientific literature results are promising but still far from fulfilling real-life needs. This work aims to close this gap by allowing worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark scientific database. The database is targeted at studying the relationship between surface electromyography, hand kinematics and hand forces, with the final goal of developing non-invasive, naturally controlled robotic hand prostheses. The validation section verifies that the data are similar to data acquired in real-life conditions, and that recognition of different hand tasks by applying state-of-the-art signal features and machine-learning algorithms is possible.
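
On such a database, the "signal features and machine-learning algorithms" mentioned above typically start from windowed time-domain features of the surface EMG. As a deliberately simple sketch (mean absolute value and RMS features with a nearest-centroid classifier, chosen for illustration rather than taken from the paper):

```python
import math

def window_features(window):
    """Two classic sEMG time-domain features for one analysis window:
    mean absolute value (MAV) and root mean square (RMS)."""
    mav = sum(abs(s) for s in window) / len(window)
    rms = math.sqrt(sum(s * s for s in window) / len(window))
    return (mav, rms)

def fit_centroids(labelled_windows):
    """labelled_windows: list of (label, window) pairs.
    Returns a dict mapping each label to its mean feature vector."""
    sums, counts = {}, {}
    for label, win in labelled_windows:
        f = window_features(win)
        if label not in sums:
            sums[label], counts[label] = [0.0, 0.0], 0
        sums[label] = [a + b for a, b in zip(sums[label], f)]
        counts[label] += 1
    return {l: tuple(v / counts[l] for v in s) for l, s in sums.items()}

def classify(window, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    f = window_features(window)
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(f, centroids[l])))
```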

  16. Design of Piano -playing Robotic Hand

    Directory of Open Access Journals (Sweden)

    Lin Jen-Chang

    2013-09-01

    Unlike the market slowdown of industrial robots, service and entertainment robots have been highly regarded by most robotics research and market research agencies. In this study we developed a music-playing robot (which can also work as a service robot) for public performance. The research is mainly focused on the mechanical and electrical control of a piano-playing robot, the exploration of the correlations among music theory, rhythm and piano keys, and eventually the playing skill of a keyboard instrument. The piano-playing robot is capable of controlling linear motors, servo-motors and pneumatic devices in accordance with the notes and rhythm, in order to drive the mechanical structure to the proper positions for pressing the keys and generating music. The devices used for this robot are mainly crucial components produced by HIWIN Technology Corp. The design of the robotic hand follows an anthropomorphic approach, such that five fingers are used for playing the piano. The finger actuations include finger rotation, finger pressing, and finger lifting; the time required for these three stages must meet the requirements of the rhythm. The purpose of an entertainment robot can be achieved by playing an electric piano with the robotic hand, and we hope this research can contribute to the development of domestic entertainment music-playing robots.
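
The three-stage actuation with rhythm constraints described above can be sketched as a simple scheduler, assuming each note occupies one beat and the stage fractions (rotate, press, lift) shown below are illustrative choices rather than the paper's timings.

```python
def schedule_note(start, beat_len, rotate_frac=0.2, press_frac=0.6):
    """Split one beat into the three actuation stages named in the abstract:
    rotate to the key, press it, then lift. Stage fractions are illustrative."""
    rotate = rotate_frac * beat_len
    press = press_frac * beat_len
    lift = beat_len - rotate - press
    return [("rotate", start, rotate),
            ("press", start + rotate, press),
            ("lift", start + rotate + press, lift)]

def schedule_melody(keys, beat_len=0.5):
    """keys: ordered key names, one per beat. Returns a flat, timed list of
    (key, stage, start_time, duration) events for the actuation layer."""
    events = []
    for i, key in enumerate(keys):
        for stage, t, dur in schedule_note(i * beat_len, beat_len):
            events.append((key, stage, round(t, 6), round(dur, 6)))
    return events
```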

  17. Comparative performance analysis of M-IMU/EMG and voice user interfaces for assistive robots.

    Science.gov (United States)

    Laureiti, Clemente; Cordella, Francesca; di Luzio, Francesco Scotto; Saccucci, Stefano; Davalli, Angelo; Sacchetti, Rinaldo; Zollo, Loredana

    2017-07-01

    People with a high level of disability experience great difficulty performing activities of daily living and resort to their residual motor functions to operate assistive devices. The commercially available interfaces used to control assistive manipulators are typically based on joysticks and can be used only by subjects with residual upper-limb mobility. Many other solutions can be found in the literature, based on the use of multiple sensory systems for detecting the human motion intention and state. Some of them require a high cognitive workload from the user; others are more intuitive and easy to use but have not been widely investigated in terms of usability and user acceptance. The objective of this work is to propose an intuitive and robust user interface for assistive robots that is unobtrusive for the user and easily adaptable to subjects with different levels of disability. The proposed user interface is based on the combination of magneto-inertial measurement units (M-IMUs) and EMG for the continuous control of an arm-hand robotic system. The system was experimentally validated and compared to a standard voice interface. Sixteen healthy subjects volunteered to participate in the study: 8 used the combined M-IMU/EMG robot control, and 8 used the voice control. The arm-hand robotic system, made of the KUKA LWR 4+ and the IH2 Azzurra hand, was controlled to accomplish the daily-living task of drinking. Performance indices and evaluation scales were adopted to assess the performance of the two interfaces.

  18. Review of surgical robotics user interface: what is the best way to control robotic surgery?

    Science.gov (United States)

    Simorov, Anton; Otte, R Stephen; Kopietz, Courtni M; Oleynikov, Dmitry

    2012-08-01

    As surgical robots begin to occupy a larger place in operating rooms around the world, continued innovation is necessary to improve our outcomes. A comprehensive review of current surgical robotic user interfaces was performed to describe the modern surgical platforms, identify the benefits, and address the issues of feedback and limitations of visualization. Most robots currently used in surgery employ a master/slave relationship, with the surgeon seated at a work-console, manipulating the master system and visualizing the operation on a video screen. Although enormous strides have been made to advance current technology to the point of clinical use, limitations still exist. A lack of haptic feedback to the surgeon and the inability of the surgeon to be stationed at the operating table are the most notable examples. The future of robotic surgery sees a marked increase in the visualization technologies used in the operating room, as well as in the robots' abilities to convey haptic feedback to the surgeon. This will allow unparalleled sensation for the surgeon and almost eliminate inadvertent tissue contact and injury. A novel design for a user interface will allow the surgeon to have access to the patient bedside, remaining sterile throughout the procedure, employ a head-mounted three-dimensional visualization system, and allow the most intuitive master manipulation of the slave robot to date.

  19. Neuro-robotics from brain machine interfaces to rehabilitation robotics

    CERN Document Server

    Artemiadis

    2014-01-01

    Neuro-robotics is one of the most multidisciplinary fields of the last decades, fusing information and knowledge from neuroscience, engineering and computer science. This book focuses on the results from the strategic alliance between Neuroscience and Robotics that help the scientific community to better understand the brain as well as design robotic devices and algorithms for interfacing humans and robots. The first part of the book introduces the idea of neuro-robotics, by presenting state-of-the-art bio-inspired devices. The second part of the book focuses on human-machine interfaces for pe

  20. An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger

    Science.gov (United States)

    Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico

    2016-01-01

    In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate for or augment the existing abilities of the natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and act in synergy with the human limbs, the control principles of the extra finger should behave similarly to those of the human fingers, including the ability to regulate compliance. It is therefore important to propose a control interface, and to choose actuators and sensing capabilities for the robotic extra finger, compatible with stiffness-regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate its compliance. We also present an updated version of the robotic extra finger in which the adduction/abduction motion is realized through a ball-bearing and spur-gear mechanism. We validated the proposed interface with two sets of experiments, related to compensation and augmentation. In the first set, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger was used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used. The obtained results demonstrate that the proposed control interface is intuitive and can

  1. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  2. Development of anthropomorphic robotic hand driven by Pneumatic Artificial Muscles for robotic applications

    Science.gov (United States)

    Farag, Mohannad; Zainul Azlan, Norsinnira; Hayyan Alsibai, Mohammed

    2018-04-01

    This paper presents the design and fabrication of a three-fingered anthropomorphic robotic hand. The fingers are driven by tendons and actuated by human muscle-like actuators known as Pneumatic Artificial Muscles (PAMs). The proposed design allows the actuators to be mounted outside the hand, where each finger can be driven by one PAM actuator and six indirectly interlinked tendons. With this design, the three-fingered hand has a compact size and a light weight, with a mass of 150.25 grams, imitating the human hand in terms of size and weight. The hand also successfully grasped objects of different shapes and weights up to 500 g. Even though the number of PAM actuators would normally equal the number of degrees of freedom (DOF), the design guarantees the driving of three joints by only one actuator, reducing the number of required actuators from three to one. Therefore, this hand is suitable for research on robotic applications in terms of design, cost and the ability to be equipped with several types of sensors.

  3. Integrated multi-sensory control of space robot hand

    Science.gov (United States)

    Bejczy, A. K.; Kan, E. P.; Killion, R. R.

    1985-01-01

    Dexterous manipulation with a robot hand requires the use of multiple sensors integrated into the mechanical hand under distributed microcomputer control. Where space applications such as construction, assembly, servicing and repair tasks are desired of smart robot arms and robot hands, several critical drivers influence the design, engineering and integration of such an electromechanical hand. This paper describes a smart robot hand developed at the Jet Propulsion Laboratory for experimental use and evaluation with the Protoflight Manipulator Arm (PFMA) at the Marshall Space Flight Center (MSFC).

  4. Soft brain-machine interfaces for assistive robotics: A novel control approach.

    Science.gov (United States)

    Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash

    2017-07-01

    Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. To this end, the operator's residual functions must be exploited for controlling the robot's movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. This work therefore explores the potential of a novel Soft Brain-Machine Interface (BMI), suitable for the dynamic execution of remote manipulation tasks by a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of a robotic arm's trajectories, and a Brain-Computer Interface (BCI) unit, for control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and the environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
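
The final mapping described above, from a decoded probability of an "active" mental state to a Cartesian stiffness command, can be sketched as a clamped linear map with smoothing to avoid abrupt impedance jumps. The stiffness range (in N/m) and smoothing factor below are illustrative assumptions, not values from the paper.

```python
def probability_to_stiffness(p_active, k_min=200.0, k_max=1500.0,
                             alpha=0.2, prev=None):
    """Translate the BCI's probability of an active state into a stiffness
    value. The probability is clamped to [0, 1], mapped linearly onto
    [k_min, k_max], and (if a previous command exists) smoothed exponentially."""
    p = max(0.0, min(1.0, p_active))
    k_target = k_min + p * (k_max - k_min)
    if prev is None:
        return k_target
    return prev + alpha * (k_target - prev)
```

Called once per decoding cycle with `prev` set to the last command, this yields a stiffness trajectory that tracks the user's estimated state without step changes in the robot's impedance.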

  5. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand.

    Science.gov (United States)

    Kent, Benjamin A; Engeberg, Erik D

    2014-11-07

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques.
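
The abstract says the HFS controller folds finger abduction and fingertip force feedback into the extension command, but does not give the control law. One hypothetical linear form, with illustrative gains and joint-limit clamping, is:

```python
def extension_command(base_angle, abduction, tip_force,
                      k_abd=0.5, k_force=2.0, limits=(0.0, 90.0)):
    """Hypothetical feedback-synergy law: the commanded extension angle (deg)
    is the nominal posture corrected by the finger's abduction (deg) and by
    the measured fingertip force (N), so the hand extends toward a flat,
    natural posture as contact force grows. Gains and limits are illustrative."""
    cmd = base_angle + k_abd * abduction + k_force * tip_force
    return max(limits[0], min(limits[1], cmd))
```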

  7. Bio-inspired grasp control in a robotic hand with massive sensorial input.

    Science.gov (United States)

    Ascari, Luca; Bertocchi, Ulisse; Corradi, Paolo; Laschi, Cecilia; Dario, Paolo

    2009-02-01

    The capability of grasping and lifting an object in a suitable, stable and controlled way is an outstanding feature for a robot, and thus far one of the major problems to be solved in robotics. No robotic tool able to perform advanced control of the grasp as, for instance, the human hand does has been demonstrated to date. Owing to its capital importance in science and in many applications, from biomedicine to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former contributes a profound understanding of the dynamics of real-time control of slippage and grasp force in the human hand, the latter tries more and more to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amount of data generated by the tactile sensors while grasping, which poses serious problems for the available computational power. In this paper a bio-inspired approach to tactile data processing is followed in order to design and test a hardware-software robotic architecture that works on the parallel processing of a large number of tactile sensing signals. The working principle of the architecture is based on the cellular nonlinear/neural network (CNN) paradigm, using both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory-motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the system performance on a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects like plastic bottles, soft balls, and Japanese tofu, have been demonstrated.
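
The core sensory-motor loop described above, detecting incipient slip from the tactile array and raising grip force in response, can be caricatured with a scalar slip metric. The mean absolute frame-to-frame change below is only a crude stand-in for the parallel spatio-temporal filtering the CNN architecture performs; the threshold and step size are illustrative.

```python
def slip_metric(frame_prev, frame_curr):
    """Mean absolute change between two tactile frames (flat lists of taxel
    values): a crude scalar proxy for incipient-slip activity."""
    n = len(frame_curr)
    return sum(abs(c - p) for p, c in zip(frame_prev, frame_curr)) / n

def grip_force_update(force, frame_prev, frame_curr, threshold=0.5, step=0.2):
    """Increase the commanded grip force whenever the slip metric crosses
    the threshold; otherwise hold the current command."""
    if slip_metric(frame_prev, frame_curr) > threshold:
        return force + step
    return force
```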

  8. Hand function recovery in chronic stroke with HEXORR robotic training: A case series.

    Science.gov (United States)

    Godfrey, Sasha Blue; Schabowsky, Christopher N; Holley, Rahsaan J; Lum, Peter S

    2010-01-01

    After a stroke, many survivors have impaired motor function. Robotic rehabilitation techniques have emerged to provide repetitive, activity-based therapy at potentially lower cost than conventional methods. Many patients exhibit intrinsic resistance to hand extension in the form of spasticity and/or hypertonia. We have developed a therapy program using the Hand Exoskeleton Rehabilitation Robot (HEXORR) that is capable of compensating for tone to assist patients in opening the paretic hand. The system can move the user's hand, assist movement, allow free movement, or restrict movement to allow static force production. These options are combined with an interactive virtual reality game to enhance user motivation. Four chronic stroke subjects received 18 sessions of robot therapy as well as pre- and post-evaluation sessions. All subjects showed at least modest gains in active finger range of motion (ROM) measured in the robot, and all but one subject had gains in active thumb ROM. Most of these gains carried over to ROM gains outside of the robot. The clinical measures (Fugl-Meyer, Box-and-Blocks) showed clear improvements in two subjects and mixed results in the other two. Overall, the robot therapy was well received by subjects and shows promising results. We conclude that HEXORR therapy is best suited for patients with mild-to-moderate tone and at least minimal extension.

  9. Robotic Hand

    Science.gov (United States)

    1993-01-01

    The Omni-Hand was developed by Ross-Hime Designs, Inc. for Marshall Space Flight Center (MSFC) under a Small Business Innovation Research (SBIR) contract. The multiple digit hand has an opposable thumb and a flexible wrist. Electric muscles called Minnacs power wrist joints and the interchangeable digits. Two hands have been delivered to NASA for evaluation for potential use on space missions and the unit is commercially available for applications like hazardous materials handling and manufacturing automation. Previous SBIR contracts resulted in the Omni-Wrist and Omni-Wrist II robotic systems, which are commercially available for spray painting, sealing, ultrasonic testing, as well as other uses.

  10. New frontiers in the rubber hand experiment: when a robotic hand becomes one's own.

    Science.gov (United States)

    Caspar, Emilie A; De Beir, Albert; Magalhaes De Saldanha Da Gama, Pedro A; Yernaux, Florence; Cleeremans, Axel; Vanderborght, Bram

    2015-09-01

    The rubber hand illusion is an experimental paradigm in which participants come to consider a fake hand to be part of their body. This paradigm has been used in many domains of psychology (e.g., research on pain, body ownership, and agency) and is of clinical importance. The classic rubber hand paradigm nevertheless suffers from limitations, such as the absence of active motion or the reliance on approximate measurements, which make strict experimental conditions difficult to obtain. Here, we report on the development of a novel technology, a robotic, user- and computer-controllable hand, that addresses many of the limitations associated with the classic rubber hand paradigm. Because participants can actively control the robotic hand, the device affords higher realism and authenticity. Our robotic hand has a comparatively low cost and opens up novel and innovative methods. In order to validate the robotic hand, we carried out three experiments. The first two studies were based on previous research using the rubber hand, while the third was specific to the robotic hand. We measured both sense of agency and ownership. Overall, results show that participants experienced a "robotic hand illusion" in the baseline conditions. Furthermore, we also replicated previous results on agency and ownership.

  11. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

    Humanoid robotics have made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book.  ...

  12. Visual and tactile interfaces for bi-directional human robot communication

    Science.gov (United States)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
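    The IMU-based gesture classification described above can be illustrated with a simple nearest-template scheme; the gesture names, window length, and matching rule below are assumptions for the sketch, not the classifier used in this research:

    ```python
    import numpy as np

    # Hypothetical gesture set; the actual Field Manual signals differ.
    GESTURES = ["forward", "backward", "left", "right", "stop", "rally"]

    def classify_gesture(sample, templates):
        """Match a recorded 3-axis acceleration curve (N samples x 3) to the
        closest stored template by Euclidean distance after per-axis
        normalisation."""
        s = (sample - sample.mean(axis=0)) / (sample.std(axis=0) + 1e-9)
        best, best_d = None, np.inf
        for name, t in templates.items():
            tt = (t - t.mean(axis=0)) / (t.std(axis=0) + 1e-9)
            d = np.linalg.norm(s - tt)
            if d < best_d:
                best, best_d = name, d
        return best
    ```

    Per-axis normalisation makes the match invariant to sensor offset and overall signal amplitude, which helps when the same gesture is performed by different users.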

  13. Intelligent, self-contained robotic hand

    Science.gov (United States)

    Krutik, Vitaliy; Doo, Burt; Townsend, William T.; Hauptman, Traveler; Crowell, Adam; Zenowich, Brian; Lawson, John

    2007-01-30

    A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.

  14. Robotic Hand with Flexible Fingers for Grasping Cylindrical Objects

    OpenAIRE

    柴田, 瑞穂

    2015-01-01

    In this manuscript, a robotic hand for grasping a cylindrical object is proposed. This robotic hand has flexible fingers that can hold a cylindrical object while moving. We introduce a grasping strategy for a cylindrical object in terms of a state transition graph. In this strategy the robotic hand picks up the cylindrical object utilizing a suction device before the hand grasps the object. We also design the flexible fingers; then, we investigate the validity of this robotic hand via several e...

  15. Robotic Assistance by Impedance Compensation for Hand Movements While Manual Welding.

    Science.gov (United States)

    Erden, Mustafa Suphi; Billard, Aude

    2016-11-01

    In this paper, we present a robotic assistance scheme which allows for impedance compensation with stiffness, damping, and mass parameters for hand manipulation tasks and we apply it to manual welding. The impedance compensation does not assume a preprogrammed hand trajectory. Rather, the intention of the human for the hand movement is estimated in real time using a smooth Kalman filter. The movement is restricted by compensatory virtual impedance in the directions perpendicular to the estimated direction of movement. With airbrush painting experiments, we test three sets of values for the impedance parameters as inspired from impedance measurements with manual welding. We apply the best of the tested sets for assistance in manual welding and perform welding experiments with professional and novice welders. We contrast three conditions: 1) welding with the robot's assistance; 2) with the robot when the robot is passive; and 3) welding without the robot. We demonstrate the effectiveness of the assistance through quantitative measures of both task performance and perceived user's satisfaction. The performance of both the novice and professional welders improves significantly with robotic assistance compared to welding with a passive robot. The assessment of user satisfaction shows that all novice and most professional welders appreciate the robotic assistance as it suppresses the tremors in the directions perpendicular to the movement for welding.
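    The idea of applying compensatory impedance only in the directions perpendicular to the estimated movement direction can be sketched with a projection matrix; the stiffness and damping gains below are placeholder values, not the parameters identified from the welding impedance measurements:

    ```python
    import numpy as np

    def assist_force(pos_err, vel, direction, k=200.0, b=20.0):
        """Virtual spring-damper force restricted to the plane perpendicular
        to the estimated movement direction (gains k, b are illustrative).

        pos_err   : 3-vector, deviation from the estimated hand path
        vel       : 3-vector, current hand velocity
        direction : 3-vector, estimated direction of intended movement
        """
        u = direction / np.linalg.norm(direction)
        P = np.eye(3) - np.outer(u, u)            # projector onto the perpendicular plane
        return -(k * P @ pos_err + b * P @ vel)   # no force along the movement direction
    ```

    Because the projector removes the component along the intended direction, the hand moves freely toward the target while tremor perpendicular to the path is damped, matching the assistance behaviour the abstract describes.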

  16. Controller design for Robotic hand through Electroencephalogram

    OpenAIRE

    Pandelidis P.; Kiriazis N.; Orgianelis K.; Koulios N.

    2016-01-01

    This paper deals with the design, construction and control of a robotic hand via an electroencephalogram sensor. First, a robotic device that is able to mimic a real human hand is constructed. A PID controller is designed in order to improve the performance of the robotic arm for grabbing objects. Furthermore, a novel design approach is presented for controlling the motion of the robotic arm using signals produced from an innovative electroencephalogram sensor that detects the con...

  17. Intelligent computational control of multi-fingered dexterous robotic hand

    OpenAIRE

    Chen, Disi; Li, Gongfa; Jiang, Guozhang; Fang, Yinfeng; Ju, Zhaojie; Liu, Honghai

    2015-01-01

    We discuss the intelligent computational control theory and introduce the hardware structure of the HIT/DLR II dexterous robotic hand, a typical dexterous robotic hand. We show how a DSP or FPGA controller can be used in the dexterous robotic hand. A popular intelligent dexterous robotic hand control system, named electromyography (EMG) control, is investigated. We introduce some mathematical algorithms in EMG control, such as the Gaussian mixture model (GMM), artificial neural n...

  18. Human-machine Interface for Presentation Robot

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Ondroušek, V.

    2012-01-01

    Roč. 6, č. 2 (2012), s. 17-21 ISSN 1897-8649 Institutional research plan: CEZ:AV0Z20760514 Keywords : human-robot interface * mobile robot * presentation robot Subject RIV: JD - Computer Applications, Robotics

  19. Controller design for Robotic hand through Electroencephalogram

    Directory of Open Access Journals (Sweden)

    Pandelidis P.

    2016-01-01

    Full Text Available This paper deals with the design, construction and control of a robotic hand via an electroencephalogram sensor. First, a robotic device that is able to mimic a real human hand is constructed. A PID controller is designed in order to improve the performance of the robotic arm for grabbing objects. Furthermore, a novel design approach is presented for controlling the motion of the robotic arm using signals produced from an innovative electroencephalogram sensor that detects the concentration of the brain
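    A discrete PID position loop of the kind described for the robotic hand's joints might look like the following minimal sketch; the gains and timestep are illustrative, not the values tuned for the actual device:

    ```python
    class PID:
        """Discrete PID controller for a single joint (gains are assumed)."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measured):
            """Return the control effort for one sample period."""
            err = setpoint - measured
            self.integral += err * self.dt                # accumulate integral term
            deriv = (err - self.prev_err) / self.dt       # finite-difference derivative
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv
    ```

    In a grasping loop, the setpoint would come from the EEG-derived command and the measurement from the joint encoder; the controller drives the error between them toward zero.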

  20. User Interface Aspects of a Human-Hand Simulation System

    Directory of Open Access Journals (Sweden)

    Beifang Yi

    2005-10-01

    Full Text Available This paper describes the user interface design for a human-hand simulation system, a virtual environment that produces ground truth data (life-like human hand gestures and animations and provides visualization support for experiments on computer vision-based hand pose estimation and tracking. The system allows users to save time in data generation and easily create any hand gestures. We have designed and implemented this user interface with the consideration of usability goals and software engineering issues.

  1. Robot hand tackles jobs in hazardous areas

    International Nuclear Information System (INIS)

    Simms, Mark; Crowder, Richard.

    1989-01-01

    A robot hand and arm designed to mimic the operation of its human counterpart, developed at the University of Southampton for use in a standard industrial glovebox, is described. It was specifically designed for use in a radioactive environment moving high dosage components around. As dosage limits go down, there is a legal requirement to remove people from that environment. The nine-axis arm is for use in a glove designed for a human hand. Drive for the motors used to power the hand is from three-phase MOSFET inverter cards, the switching pattern controlled by the Hall effect commutation sensors integral to each motor. The computer software for the arm allows the hand to be positioned using a joystick on a control box, with three levels of command for grip, pinch and touch. (author)

  2. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    Science.gov (United States)

    2016-07-27

    Brain Computer Interfaces (BCIs) show great potential in allowing humans to interact with computational environments in a... Final Report: Brain Computer Interfaces for Enhanced Interactions with Mobile Robot Agents; reporting period 17-Sep-2013 to 16-Sep-2014; distribution unlimited.

  3. Brain-state dependent robotic reaching movement with a multi-joint arm exoskeleton: combining brain-machine interfacing and robotic rehabilitation

    Directory of Open Access Journals (Sweden)

    Daniel eBrauchle

    2015-10-01

    Full Text Available While robot-assisted arm and hand training after stroke allows for intensive task-oriented practice, it has provided only limited additional benefit over dose-matched physiotherapy up to now. These rehabilitation devices are possibly too supportive during the exercises. Neurophysiological signals might be one way of avoiding slacking and providing robotic support only when the brain is particularly responsive to peripheral input. We tested the feasibility of three-dimensional robotic assistance for reach-to-grasp movements with a multi-joint exoskeleton during motor imagery-related desynchronization of sensorimotor oscillations in the β-band only. We also registered task-related network changes of cortical functional connectivity by electroencephalography via the imaginary part of the coherence function. Healthy subjects and stroke survivors showed similar patterns – but different aptitudes – of controlling the robotic movement. All participants in this pilot study with nine healthy subjects and two stroke patients achieved their maximum performance during the early stages of the task. Robotic control was significantly higher and less variable when proprioceptive feedback was provided in addition to visual feedback, i.e. when the orthosis was actually attached to the subject's arm during the task. A distributed cortical network of task-related coherent activity in the θ-band showed significant differences between healthy subjects and stroke patients as well as between early and late periods of the task. Brain-robot interfaces may successfully link three-dimensional robotic training to the participants' efforts and allow for task-oriented practice of activities of daily living with a physiologically controlled multi-joint exoskeleton. Changes of cortical physiology during the task might also help to make subject-specific adjustments of task difficulty and guide adjunct interventions to facilitate motor learning for functional restoration.
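    Gating robotic assistance on β-band desynchronization, as in this study, amounts to comparing the current β-band power of the EEG against a resting baseline; the filter order, threshold ratio, and sampling rate below are assumed values, not the study's calibration:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def beta_power(eeg, fs=250):
        """Mean band power of a single-channel EEG window in the β-band (13-30 Hz)."""
        b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
        return np.mean(filtfilt(b, a, eeg) ** 2)

    def erd_trigger(window, baseline_power, ratio=0.7, fs=250):
        """Assist only while β power falls below a fraction of the resting
        baseline, i.e. during event-related desynchronization (the 0.7
        threshold is an assumption for this sketch)."""
        return beta_power(window, fs) < ratio * baseline_power
    ```

    The baseline would be estimated from a resting recording before the session; during the task, the exoskeleton moves only in windows where `erd_trigger` fires, linking the robotic support to motor imagery.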

  4. Modeling and control of an anthropomorphic robotic hand

    OpenAIRE

    Bensalah, Choukri

    2016-01-01

    European Mention in the doctoral degree. This thesis presents methods and tools for enabling the successful use of robotic hands. For highly dexterous and/or anthropomorphic robotic hands, these methods have to share some common goals, such as overcoming the potential complexity of the mechanical design and the ability of performing accurate tasks with low and efficient computational cost. A prerequisite for dexterity is to increase the workspace of the robotic hand. For th...

  5. A hand-held robotic device for peripheral intravenous catheterization.

    Science.gov (United States)

    Cheng, Zhuoqi; Davies, Brian L; Caldwell, Darwin G; Barresi, Giacinto; Xu, Qinqi; Mattos, Leonardo S

    2017-12-01

    Intravenous catheterization is frequently required for numerous medical treatments. However, this process is characterized by a high failure rate, especially when performed on difficult patients such as newborns and infants. Very young patients have small veins, and that increases the chances of accidentally puncturing the catheterization needle directly through them. In this article, we present the design, development and experimental evaluation of a novel hand-held robotic device for improving the process of peripheral intravenous catheterization by facilitating the needle insertion procedure. To our knowledge, this design is the first hand-held robotic device for assisting in the catheterization insertion task. Compared to the other available technologies, it has several unique advantages such as being compact, low-cost and able to reliably detect venipuncture. The system is equipped with an electrical impedance sensor at the tip of the catheterization needle, which provides real-time measurements used to supervise and control the catheter insertion process. This allows the robotic system to precisely position the needle within the lumen of the target vein, leading to enhanced catheterization success rate. Experiments conducted to evaluate the device demonstrated that it is also effective to deskill the task. Naïve subjects achieved an average catheterization success rate of 88% on a 1.5 mm phantom vessel with the robotic device versus 12% with the traditional unassisted system. The results of this work prove the feasibility of a hand-held assistive robotic device for intravenous catheterization and show that such device has the potential to greatly improve the success rate of these difficult operations.
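    The venipuncture-detection principle, a sharp drop in tip impedance when the needle enters blood (which is markedly more conductive than the surrounding tissue), can be sketched as a moving-baseline threshold test; the drop ratio and window length are illustrative, not the device's calibrated values:

    ```python
    def detect_venipuncture(impedances, drop_ratio=0.6, window=5):
        """Return the sample index at which the tip impedance drops sharply
        below the recent baseline, or None if no drop occurs.

        impedances : sequence of impedance readings from the needle tip
        drop_ratio : assumed fraction of baseline that signals puncture
        window     : number of recent samples used as the baseline
        """
        for i in range(window, len(impedances)):
            baseline = sum(impedances[i - window:i]) / window
            if impedances[i] < drop_ratio * baseline:
                return i
        return None
    ```

    In the device, such a detection event would stop needle advancement immediately, which is what allows the system to position the tip within the vein lumen rather than puncturing through it.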

  6. A soft robotic exomusculature glove with integrated sEMG sensing for hand rehabilitation.

    Science.gov (United States)

    Delph, Michael A; Fischer, Sarah A; Gauthier, Phillip W; Luna, Carlos H Martinez; Clancy, Edward A; Fischer, Gregory S

    2013-06-01

    Stroke affects 750,000 people annually, and 80% of stroke survivors are left with weakened limbs and hands. Repetitive hand movement is often used as a rehabilitation technique in order to regain hand movement and strength. In order to facilitate this rehabilitation, a robotic glove was designed to aid in the movement and coordination of gripping exercises. This glove utilizes a cable system to open and close a patient's hand. The cables are actuated by servomotors, mounted in a backpack weighing 13.2 lbs including battery power sources. The glove can be controlled in terms of finger position and grip force through a switch interface, a software program, or surface myoelectric (sEMG) signals. The primary control modes of the system provide: active assistance, active resistance and a preprogrammed mode. This project developed a working prototype of the rehabilitative robotic glove which actuates the fingers over a full range of motion across one degree of freedom, and is capable of generating a maximum 15N grip force.

  7. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis has been carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours between the involved subjects, thus providing some constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method has been developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. It is characterized by a reduced computational cost. The grasp stability has been tested by introducing a quality index that satisfies the form-closure property. The grasping strategy has been validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators to reduce the non-linear optimization problem complexity and lead to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to the mechanical structure to achieve the optimal grasp configuration.
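    The Nelder-Mead step can be sketched with `scipy.optimize.minimize`; the indicator model and target values below are placeholders standing in for the human-derived grasping indicators and constraints, not the paper's actual formulation:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def grasp_cost(joint_angles, target_indicators, weight=1.0):
        """Toy cost: squared deviation of hand-posture indicators from the
        invariant values extracted from human trials. The cosine forward
        model is a stand-in for the real hand kinematics."""
        indicators = np.cos(joint_angles)
        return weight * np.sum((indicators - target_indicators) ** 2)

    # Assumed human-derived indicator values for three joints.
    target = np.array([0.9, 0.7, 0.5])

    # Derivative-free simplex search, as in the paper's optimization stage.
    res = minimize(grasp_cost, x0=np.zeros(3), args=(target,),
                   method="Nelder-Mead")
    ```

    Because Nelder-Mead needs no gradients, it tolerates a non-smooth cost built from grasp-quality indicators, which is one reason such simplex methods suit grasp-synthesis problems with modest numbers of joint variables.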

  8. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    OpenAIRE

    BIROUAS Flaviu Ionut; NILGESZ Arnold

    2016-01-01

    This paper will be presenting research with application in the rehabilitation of hand motor functions by the aid of robotics. The focus will be on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is known as anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  9. Development of Pneumatic Robot Hand and Construction of Master-Slave System

    Science.gov (United States)

    Tsujiuchi, Nobutaka; Koizumi, Takayuki; Nishino, Shinya; Komatsubara, Hiroyuki; Kudawara, Tatsuwo; Hirano, Masanori

    Recently, research and development has focused on robots that work in place of people. It is necessary for robots to perform the same flexible motions as people, and such robots need to incorporate high-level safety features in order not to injure people. To create such robots, we need to develop a robot hand that functions like a human hand; at the same time, this type of robot hand can be used as an artificial hand. Here, we present artificial muscle-type pneumatic actuators as the driving source of a robot hand that is both safe and flexible. Robot hands using pneumatic actuators have already been developed, but until now a pneumatic actuator has required a big compressor, so the driving system also needs to be big; enlargement of the driving system is a major problem. Consequently, in this research, we develop a low-pressure, low-volume pneumatic actuator for driving a robot hand that works flexibly and safely on the assumption that it will be in contact with people. We develop a five-fingered robot hand with pneumatic actuators, and we construct a master-slave system to enable the robot hand to perform the same operations as a human hand. We also make a 1-link arm that has one degree of freedom using a pneumatic actuator, construct a control system for the 1-link arm, and verify its control performance.

  10. Micro flexible robot hand using electro-conjugate fluid

    Science.gov (United States)

    Ueno, S.; Takemura, K.; Yokota, S.; Edamura, K.

    2013-12-01

    An electro-conjugate fluid (ECF) is a kind of functional fluid, which produces a flow (ECF flow) when subjected to high DC voltage. Since it only requires a tiny electrode pair of micrometer size in order to generate the ECF flow, the ECF is a promising micro fluid pressure source. This study proposes a novel micro robot hand using the ECF. The robot hand is mainly composed of five flexible fingers and an ECF flow generator. The flexible finger is made of silicone rubber with several chambers in series along its axis. When the chambers are depressurized, they deflate, making the actuator bend. On the other hand, the ECF flow generator has a needle-ring electrode pair inside. When putting the ECF flow generator into the ECF and applying a voltage of 6.0 kV to the electrode pair, we obtain a pressure of 33.1 kPa. Using the components mentioned above, we developed the ECF robot hand. The height, the width and the mass of the robot hand are 45 mm, 40 mm and 5.2 g, respectively. Since the actuator is flexible, the robot hand can grasp various objects with various shapes without a complex controller.

  11. Playte, a tangible interface for engaging human-robot interaction

    DEFF Research Database (Denmark)

    Christensen, David Johan; Fogh, Rune; Lund, Henrik Hautop

    2014-01-01

    This paper describes a tangible interface, Playte, designed for children animating interactive robots. The system supports physical manipulation of behaviors represented by LEGO bricks and allows the user to record and train their own new behaviors. Our objective is to explore several modes of in...

  12. Motor Imagery-Based Brain-Computer Interface Coupled to a Robotic Hand Orthosis Aimed for Neurorehabilitation of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Jessica Cantillo-Negrete

    2018-01-01

    Full Text Available Motor imagery-based brain-computer interfaces (BCI) have shown potential for the rehabilitation of stroke patients; however, low performance has restricted their application in clinical environments. Therefore, this work presents the implementation of a BCI system, coupled to a robotic hand orthosis and driven by hand motor imagery of healthy subjects and the paralysed hand of stroke patients. A novel processing stage was designed using a bank of temporal filters, the common spatial pattern algorithm for feature extraction and particle swarm optimisation for feature selection. Offline tests were performed for testing the proposed processing stage, and results were compared with those computed with common spatial patterns. Afterwards, online tests with healthy subjects were performed in which the orthosis was activated by the system. Stroke patients' average performance was 74.1 ± 11%. For 4 out of 6 patients, the proposed method showed a statistically significant improvement over the common spatial pattern method. Healthy subjects' average offline and online performances were 76.2 ± 7.6% and 70 ± 6.7%, respectively. For 3 out of 8 healthy subjects, the proposed method showed a statistically significant improvement over the common spatial pattern method. The system's performance shows that it has the potential to be used for hand rehabilitation of stroke patients.
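    The common-spatial-pattern stage of such a pipeline can be sketched as a generalized eigendecomposition of the two class covariance matrices; the filter-bank and particle-swarm feature-selection stages of the paper are omitted, and the trial format is an assumption:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=2):
        """Common spatial patterns: spatial filters that maximize variance for
        one class while minimizing it for the other. Trials are arrays of
        shape (channels, samples)."""
        def mean_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)
        Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w, ascending lambda.
        vals, vecs = eigh(Ca, Ca + Cb)
        order = np.argsort(vals)
        picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
        return vecs[:, picks].T          # filters as rows

    def log_var_features(trial, W):
        """Classic CSP feature vector: log of normalised projected variances."""
        v = np.var(W @ trial, axis=1)
        return np.log(v / v.sum())
    ```

    The extreme eigenvectors give the most discriminative projections, so a classifier fed with `log_var_features` can separate the two motor-imagery classes from a handful of values per trial.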

  13. Virtual hand: a 3D tactile interface to virtual environments

    Science.gov (United States)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  14. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    BIROUAS Flaviu Ionut

    2016-11-01

    Full Text Available This paper will be presenting research with application in the rehabilitation of hand motor functions by the aid of robotics. The focus will be on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is known as anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  15. Sea-Shore Interface Robotic Design

    Science.gov (United States)

    2014-06-01

    Sea-Shore Interface Robotic Design. Naval Postgraduate School, Monterey, California; thesis by Timothy L. Bell, June 2014; thesis advisor: Richard Harkins. Keywords: robotics, robot, amphibious vehicles, mobility, surf-zone, autonomous, wheg, exoskeleton. The work develops controllers and showcases the benefits of a modular construction; the result was an exoskeleton design with modular components for various beachfront terrains (see Figure 2.1).

  16. Design and control of five fingered under-actuated robotic hand

    Science.gov (United States)

    Sahoo, Biswojit; Parida, Pramod Kumar

    2018-04-01

    Nowadays, research regarding humanoid robots and their applications in different fields (industry, household, rehabilitation and exploratory) is going on across the globe. A challenging topic is to design a dexterous robotic hand which not only can perform as the hand of a robot but also can be used in rehabilitation. The basic concern is a dexterous robot hand able to mimic the function of the biological hand to perform different operations. This thesis work concerns the design and control of an under-actuated robotic hand consisting of four under-actuated fingers (index finger, middle finger, little finger and ring finger), a thumb and a dexterous palm, which can copy the motions and grasp types of the human hand with 21 degrees of freedom instead of 25 degrees of freedom.

  17. An Analogue Interface for Musical Robots

    OpenAIRE

    Long, Jason; Kapur, Ajay; Carnegie, Dale

    2016-01-01

    The majority of musical robotics performances, projects and installations utilise microcontroller hardware to digitally interface the robotic instruments with sequencer software and other musical controllers, often via a personal computer. While in many ways digital interfacing offers considerable power and flexibility, digital protocols, equipment and audio workstations often tend to suggest particular music-making work-flows and have resolution and timing limitations. This paper describes t...

  18. Implement of Shape Memory Alloy Actuators in a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Daniel Amariei

    2006-10-01

    Full Text Available This paper was conceived to present the ideology of utilizing advanced actuators to design and develop innovative, lightweight, powerful, compact, and as much as possible dexterous robotic hands. The key to satisfying these objectives is the use of Shape Memory Alloys (SMAs to power the joints of the robotic hand. The mechanical design of a dexterous robotic hand, which utilizes non-classical types of actuation and information obtained from the study of biological systems, is presented in this paper. The type of robotic hand described in this paper will be utilized for applications requiring low weight, power, compactness, and dexterity.

  19. The da vinci robot system eliminates multispecialty surgical trainees' hand dominance in open and robotic surgical settings.

    Science.gov (United States)

    Badalato, Gina M; Shapiro, Edan; Rothberg, Michael B; Bergman, Ari; RoyChoudhury, Arindam; Korets, Ruslan; Patel, Trushar; Badani, Ketan K

    2014-01-01

    Handedness, or the inherent dominance of one hand's dexterity over the other's, is a factor in open surgery but has an unknown importance in robot-assisted surgery. We sought to examine whether the robotic surgery platform could eliminate the effect of inherent hand preference. Residents from the Urology and Obstetrics/Gynecology departments were enrolled. Ambidextrous and left-handed subjects were excluded. After completing a questionnaire, subjects performed three tasks modified from the Fundamentals of Laparoscopic Surgery curriculum. Tasks were performed by hand and then with the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, California). Participants were randomized to begin with using either the left or the right hand, and then switch. Left:right ratios were calculated from scores based on time to task completion. Linear regression analysis was used to determine the significance of the impact of surgical technique on hand dominance. Ten subjects were enrolled. The mean difference in raw score performance between the right and left hands was 12.5 seconds for open tasks and 8 seconds for robotic tasks, after accounting for hand, prior robotic experience, and comfort level. These findings remain to be validated in larger cohorts. The robotic technique reduces hand dominance in surgical trainees across all task domains. This finding contributes to the known advantages of robotic surgery.

  20. Interface evaluation for soft robotic manipulators

    Science.gov (United States)

    Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.

    2006-05-01

    The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Because such 'continuum' robotic limbs present the operator with many degrees-of-freedom (DOF) that do not map intuitively, they pose unique challenges for human operators. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementing an error recovery function. In Experiment 2, only the CV mode was tested, with improved training of participants and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first, confirming that the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and will inform future usability tests.

  1. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
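The context-dependent reward weighting described above can be illustrated with a toy 1-D policy parameter; the candidate objective functions, their relevance weights, and the soft-max temperature are invented for this sketch, not taken from the STIFF-FLOP work:

```python
import numpy as np

# Candidate objective (reward) functions over a 1-D policy parameter.
candidates = [
    lambda th: -(th - 0.7) ** 2,   # objective relevant in this phase
    lambda th: -(th + 0.3) ** 2,   # irrelevant candidate objective
]
relevance = np.array([0.9, 0.1])   # context-dependent weights (assumed learned)

thetas = np.linspace(-1.0, 1.0, 201)          # candidate policy parameters
reward = sum(w * f(thetas) for w, f in zip(relevance, candidates))

# Reward-weighted refinement: soft-max weighting concentrates the
# policy estimate on high-reward parameters.
weights = np.exp(50.0 * (reward - reward.max()))
theta_star = float(np.sum(weights * thetas) / np.sum(weights))

# The relevance-weighted combination of the two quadratics peaks at 0.6.
print(round(theta_star, 2))  # → 0.6
```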

  2. Soft object deformation monitoring and learning for model-based robotic hand manipulation.

    Science.gov (United States)

    Cretu, Ana-Maria; Payeur, Pierre; Petriu, Emil M

    2012-06-01

    This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and maps them with force measurements with the goal of providing the necessary information to the controller of a robotic hand to ensure safe model-based deformable object manipulation. Measurements corresponding to the interaction force at the level of the fingertips and to the position of the fingertips of a three-finger robotic hand are associated with the contours of a deformed object tracked in a series of images using neural-network approaches. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption on the object's material. The availability of such models can contribute to the improvement of a robotic hand controller, therefore allowing more accurate and stable grasp while providing more elaborate manipulation capabilities for deformable objects. Experiments performed for different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is submitted to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background.

  3. Dataglove-based interface for impedance control of manipulators in cooperative human–robot environments

    International Nuclear Information System (INIS)

    Paredes-Madrid, L; Gonzalez de Santos, P

    2013-01-01

    A dataglove-based interface is presented for tracking the forces applied by the hand during contact tasks with a 6-degree-of-freedom (DOF) manipulator. The interface uses 11 force sensors carefully placed on the palm-side fabric of a 16 DOF dataglove. The force sensors use piezoresistive technology to measure the individual force components from the hand. Based on the dataglove measurements, these components are transformed and summed to assemble the resultant force vector. Finally, this force vector is translated into the manipulator frame using orientation measurements from an inertial measurement unit placed on the dorsal side of the dataglove. Static tests show that the dataglove-based interface can effectively measure the applied hand force, but there are inaccuracies in orientation and magnitude when compared to the load cell measurements used as the reference for error calculation. Promising results were achieved when controlling the 6 DOF manipulator based on the force readings acquired from the dataglove interface; the decoupled dynamics of the dataglove interface with respect to the robot structure yielded smooth force readings of the human intention that could be effectively used in the impedance control of the manipulator. (paper)
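The assembly of a resultant force vector and its translation into the manipulator frame might be sketched as follows; the sensor count, sensor directions, and the IMU-derived rotation are assumptions for illustration (the paper uses 11 palm-side sensors):

```python
import numpy as np

def hand_force_in_robot_frame(sensor_forces, sensor_dirs, R_imu_to_robot):
    """Sum per-sensor force magnitudes along their unit directions
    (glove frame), then rotate the resultant into the manipulator frame."""
    f_glove = np.sum(sensor_forces[:, None] * sensor_dirs, axis=0)
    return R_imu_to_robot @ f_glove

# 3 illustrative palm sensors, all pushing along the glove +z axis.
forces = np.array([2.0, 3.0, 5.0])                 # newtons
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (3, 1))  # unit directions
# 90-degree rotation about x: glove +z maps to robot -y (assumed pose).
R = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)

f_robot = hand_force_in_robot_frame(forces, dirs, R)
print(f_robot)  # → [  0. -10.   0.]
```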

  4. An intention driven hand functions task training robotic system.

    Science.gov (United States)

    Tong, K Y; Ho, S K; Pang, P K; Hu, X L; Tam, W K; Fung, K L; Wei, X J; Chen, P N; Chen, M

    2010-01-01

    A novel hand functions task training robotic system was developed for stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke survivor using electromyography (EMG) signals measured from the hemiplegic side. The training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable of driving 2 degrees of freedom (DOFs) of each finger at the same time. Powered by a linear actuator, each finger assembly achieves a 55-degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and a 65-degree ROM at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit different finger lengths. With this task training system, stroke subjects can open and close their impaired hand using their own intention to carry out some daily living tasks.
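EMG-triggered intention detection can be sketched with single-channel RMS amplitude thresholding, a simplification of the paper's EMG processing; the threshold and the signal windows below are made up:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_intention(emg_window, open_thresh=0.4):
    """Map an EMG window from the hemiplegic side to a command.

    Amplitude thresholding on a single channel is an assumed
    simplification; the threshold value is illustrative.
    """
    return "open" if rms(emg_window) > open_thresh else "rest"

rest = [0.05, -0.03, 0.04, -0.06]      # low muscle activity
effort = [0.5, -0.6, 0.55, -0.45]      # attempted hand opening
print(detect_intention(rest), detect_intention(effort))  # → rest open
```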

  5. Integration of robotics and neuroscience beyond the hand: What kind of synergies? Comment on "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" by Marco Santello et al.

    Science.gov (United States)

    d'Avella, Andrea

    2016-07-01

    Santello et al. [1] review an impressive amount of work on the control of biological and artificial hands that demonstrates how the concept of synergies can lead to a successful integration of robotics and neuroscience. Is it possible to generalize the same approach to the control of biological and artificial limbs and bodies beyond the hand? The human hand synergies that appear most relevant for robotic hands are those defined at the kinematic level, i.e. postural synergies [2]. Postural synergies capture the geometric relations among the many joints of the hand and allow for a low dimensional characterization and synthesis of the static hand postures involved in grasping and manipulating a large set of objects. However, many other complex motor skills such as walking, reaching, throwing, and catching require controlling multi-articular time-varying trajectories rather than static postures. Dynamic control of biological and artificial limbs and bodies, especially when geometric and inertial parameters are uncertain and the joints are compliant, poses great challenges. What kind of synergies might simplify the dynamic control of motor skills involving upper and lower limbs as well as the whole body?
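The low-dimensional characterization that postural synergies provide can be sketched with principal component analysis; the posture data below are synthetic, generated from two assumed underlying synergies rather than recorded hand kinematics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "hand postures": 100 grasps x 15 joint angles, generated
# from 2 underlying synergies plus small noise (illustrative data).
true_syn = rng.normal(size=(2, 15))
activations = rng.normal(size=(100, 2))
postures = activations @ true_syn + 0.01 * rng.normal(size=(100, 15))

# Postural synergies = principal components of the posture matrix.
centered = postures - postures.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = (s ** 2) / np.sum(s ** 2)

# Two components capture nearly all variance, i.e. a low-dimensional
# characterization of the static postures.
print(round(float(var_explained[:2].sum()), 3))
```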

  6. Compact Dexterous Robotic Hand

    Science.gov (United States)

    Lovchik, Christopher Scott (Inventor); Diftler, Myron A. (Inventor)

    2001-01-01

    A compact robotic hand includes a palm housing, a wrist section, and a forearm section. The palm housing supports a plurality of fingers and one or more movable palm members that cooperate with the fingers to grasp and/or release an object. Each flexible finger comprises a plurality of hingedly connected segments, including a proximal segment pivotally connected to the palm housing. The proximal finger segment includes at least one groove defining first and second cam surfaces for engagement with a cable. A plurality of lead screw assemblies, each carried by the palm housing, are supplied with power from a flexible shaft rotated by an actuator and output linear motion to a cable to move a finger. The cable is secured within a respective groove and enables each finger to move between an opened and a closed position. A decoupling assembly pivotally connected to a proximal finger segment enables a cable connected thereto to control movement of the intermediate and distal finger segments independent of movement of the proximal finger segment. The dexterous robotic hand closely resembles the function of a human hand, yet is lightweight and capable of grasping both heavy and light objects with a high degree of precision.

  7. Embodied neurofeedback with an anthropomorphic robotic hand

    Science.gov (United States)

    Braun, Niclas; Emkes, Reiner; Thorne, Jeremy D.; Debener, Stefan

    2016-01-01

    Neurofeedback-guided motor imagery training (NF-MIT) has been suggested as a promising therapy for stroke-induced motor impairment. Whereas much NF-MIT research has aimed at signal processing optimization, the type of sensory feedback given to the participant has received less attention. Often the feedback signal is highly abstract and not inherently coupled to the mental act performed. In this study, we asked whether an embodied feedback signal is more efficient for neurofeedback operation than a non-embodiable feedback signal. Inspired by the rubber hand illusion, demonstrating that an artificial hand can be incorporated into one’s own body scheme, we used an anthropomorphic robotic hand to visually guide the participants’ motor imagery act and to deliver neurofeedback. Using two experimental manipulations, we investigated how a participant’s neurofeedback performance and subjective experience were influenced by the embodiability of the robotic hand, and by the neurofeedback signal’s validity. As pertains to embodiment, we found a promoting effect of robotic-hand embodiment in subjective, behavioral, electrophysiological and electrodermal measures. Regarding neurofeedback signal validity, we found some differences between real and sham neurofeedback in terms of subjective and electrodermal measures, but not in terms of behavioral and electrophysiological measures. This study motivates the further development of embodied feedback signals for NF-MIT. PMID:27869190

  8. Embodied neurofeedback with an anthropomorphic robotic hand.

    Science.gov (United States)

    Braun, Niclas; Emkes, Reiner; Thorne, Jeremy D; Debener, Stefan

    2016-11-21

    Neurofeedback-guided motor imagery training (NF-MIT) has been suggested as a promising therapy for stroke-induced motor impairment. Whereas much NF-MIT research has aimed at signal processing optimization, the type of sensory feedback given to the participant has received less attention. Often the feedback signal is highly abstract and not inherently coupled to the mental act performed. In this study, we asked whether an embodied feedback signal is more efficient for neurofeedback operation than a non-embodiable feedback signal. Inspired by the rubber hand illusion, demonstrating that an artificial hand can be incorporated into one's own body scheme, we used an anthropomorphic robotic hand to visually guide the participants' motor imagery act and to deliver neurofeedback. Using two experimental manipulations, we investigated how a participant's neurofeedback performance and subjective experience were influenced by the embodiability of the robotic hand, and by the neurofeedback signal's validity. As pertains to embodiment, we found a promoting effect of robotic-hand embodiment in subjective, behavioral, electrophysiological and electrodermal measures. Regarding neurofeedback signal validity, we found some differences between real and sham neurofeedback in terms of subjective and electrodermal measures, but not in terms of behavioral and electrophysiological measures. This study motivates the further development of embodied feedback signals for NF-MIT.

  9. A new approach of active compliance control via fuzzy logic control for multifingered robot hand

    Science.gov (United States)

    Jamil, M. F. A.; Jalani, J.; Ahmad, A.

    2016-07-01

    Safety is a vital issue in Human-Robot Interaction (HRI). In order to guarantee safety in HRI, a model reference impedance control can be a very useful approach for introducing compliant control. In particular, this paper establishes a fuzzy logic compliance control (i.e., active compliance control) to reduce impact and forces during physical interaction between humans/objects and robots. Exploiting a virtual mass-spring-damper system allows us to determine a desired compliance level by understanding the behavior of the model reference impedance control. The performance of the fuzzy logic compliance control is tested in simulation for a robotic hand known as the RED Hand. The results show that fuzzy logic is a feasible control approach, particularly for controlling position and providing compliant control. In addition, the fuzzy logic controller simplifies the controller design process (i.e., avoids complex computation) when dealing with nonlinearities and uncertainties.
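The virtual mass-spring-damper reference model can be sketched as a discrete-time simulation; the gains, push force, and time step below are illustrative, not the paper's values:

```python
def impedance_step(x, v, f_ext, x_d=0.0, M=1.0, D=8.0, K=16.0, dt=0.01):
    """One Euler step of the virtual mass-spring-damper reference model
    M*a + D*v + K*(x - x_d) = f_ext.  Gains are illustrative; D = 8
    makes the system critically damped for M = 1, K = 16."""
    a = (f_ext - D * v - K * (x - x_d)) / M
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(300):            # 3 s of contact with a steady 4 N push
    x, v = impedance_step(x, v, f_ext=4.0)

# The compliant displacement settles near f_ext / K = 0.25.
print(round(x, 3))
```

Lowering K makes the virtual spring softer, so the same contact force yields a larger compliant displacement; this is the knob a fuzzy supervisor could adjust.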

  10. Markerless Kinect-Based Hand Tracking for Robot Teleoperation

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2012-07-01

    Full Text Available This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. Using this tracking algorithm, the 3D positions of the index finger and thumb can be estimated by processing depth images from the Kinect. The hand pose is used as a model to specify, in real time, the pose of a remote robot's end-effector. This method provides a way to send a whole task to a remote robot instead of sending limited motion commands, as gesture-based approaches do, and it has been tested in pick-and-place tasks.
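Mapping the estimated thumb and index fingertip positions to an end-effector command could look like the following sketch; the 8 cm full-pinch span is an assumed calibration constant, and the fingertip coordinates are hypothetical tracker output:

```python
import math

def gripper_opening(thumb_xyz, index_xyz, max_open_m=0.08):
    """Map the 3D thumb-index distance estimated from depth images to
    a normalized gripper opening command in [0, 1].

    max_open_m (8 cm) is an assumed span for a fully open pinch.
    """
    d = math.dist(thumb_xyz, index_xyz)
    return min(d / max_open_m, 1.0)

# Hypothetical fingertip positions from the depth tracker (metres).
thumb = (0.10, 0.20, 0.60)
index = (0.10, 0.24, 0.60)
print(round(gripper_opening(thumb, index), 2))  # → 0.5
```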

  11. An Infant Development-inspired Approach to Robot Hand-eye Coordination

    Directory of Open Access Journals (Sweden)

    Fei Chao

    2014-02-01

    Full Text Available This paper presents a novel developmental learning approach for hand-eye coordination in an autonomous robotic system. Robotic hand-eye coordination plays an important role in dealing with real-time environments. Under this approach, infant developmental patterns are introduced to build the robot's learning system. The method works by first constructing a brain-like computational structure to control the robot, and then using infant behavioural patterns to build a hand-eye coordination learning algorithm. The work is supported by an experimental evaluation, which shows that the control system is simple to implement and that the learning approach provides fast, incremental learning of behavioural competence.

  12. Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands

    Science.gov (United States)

    Santello, Marco; Bianchi, Matteo; Gabiccini, Marco; Ricciardi, Emiliano; Salvietti, Gionata; Prattichizzo, Domenico; Ernst, Marc; Moscatelli, Alessandro; Jörntell, Henrik; Kappers, Astrid M. L.; Kyriakopoulos, Kostas; Albu-Schäffer, Alin; Castellini, Claudio; Bicchi, Antonio

    2016-07-01

    The term 'synergy' - from the Greek synergia - means 'working together'. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project "The Hand Embodied" (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies.

  13. Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands

    Science.gov (United States)

    Santello, Marco; Bianchi, Matteo; Gabiccini, Marco; Ricciardi, Emiliano; Salvietti, Gionata; Prattichizzo, Domenico; Ernst, Marc; Moscatelli, Alessandro; Jörntell, Henrik; Kappers, Astrid M.L.; Kyriakopoulos, Kostas; Albu-Schäffer, Alin; Castellini, Claudio; Bicchi, Antonio

    2017-01-01

    The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project “The Hand Embodied” (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies. PMID:26923030

  14. Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands.

    Science.gov (United States)

    Santello, Marco; Bianchi, Matteo; Gabiccini, Marco; Ricciardi, Emiliano; Salvietti, Gionata; Prattichizzo, Domenico; Ernst, Marc; Moscatelli, Alessandro; Jörntell, Henrik; Kappers, Astrid M L; Kyriakopoulos, Kostas; Albu-Schäffer, Alin; Castellini, Claudio; Bicchi, Antonio

    2016-07-01

    The term 'synergy' - from the Greek synergia - means 'working together'. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project "The Hand Embodied" (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Development of an Interface Program for Controlling the RV-M1

    Directory of Open Access Journals (Sweden)

    Endra Endra

    2007-10-01

    Full Text Available This article explores the development of an interface program for controlling the RV-M1 robotic hand that replaces the cosiprog program; the interface assists students in the Mecatronica-1 practicum and allows two or more users to control the robotic hand over a local network. The methods used were a literature study and a field study, namely the design method. The research results are control of the robotic hand along the X, Y, and Z axes and point to point, use of a local network to control the hand, saving of specified positions, and control of the robot by several users. Keywords: interface program, robot, local network

  16. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools towards robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  17. Permeation of limonene through disposable nitrile gloves using a dextrous robot hand.

    Science.gov (United States)

    Banaee, Sean; S Que Hee, Shane

    2017-03-28

    The purpose of this study was to investigate the permeation of the low-volatile solvent limonene through different disposable, unlined, unsupported, nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). This study utilized a moving and static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). The average post-permeation thicknesses (before reconditioning) for all gloves for both the moving and static hand were more than 10% of the pre-permeation ones (P≤0.05), although this was not so on reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm²/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation and the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving.
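The internal-standard quantitation and a steady-state rate calculation can be sketched as below; the peak areas, relative response factor, exposed glove area, and sampling interval are illustrative numbers, not the study's data:

```python
def analyte_amount(area_analyte, area_istd, amount_istd, rrf):
    """Internal-standard quantitation:
    amount = (A_analyte / A_istd) * amount_istd / RRF.
    The study's internal standard is 4-bromophenol; the relative
    response factor (RRF) here is an assumed calibration value."""
    return (area_analyte / area_istd) * amount_istd / rrf

def permeation_rate(mass_ug, area_cm2, minutes):
    """Steady-state permeation rate in ug/cm^2/min for one interval."""
    return mass_ug / (area_cm2 * minutes)

# Illustrative GC-MS peak areas and a 5-minute sampling interval.
m = analyte_amount(area_analyte=1.5e6, area_istd=1.0e6,
                   amount_istd=10.0, rrf=1.2)          # micrograms
rate = permeation_rate(m, area_cm2=5.1, minutes=5.0)
print(round(m, 2), round(rate, 3))  # → 12.5 0.49
```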

  18. Permeation of limonene through disposable nitrile gloves using a dextrous robot hand

    Science.gov (United States)

    Banaee, Sean; S Que Hee, Shane

    2017-01-01

    Objectives: The purpose of this study was to investigate the permeation of the low-volatile solvent limonene through different disposable, unlined, unsupported, nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). Methods: This study utilized a moving and static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). Results: The average post-permeation thicknesses (before reconditioning) for all gloves for both the moving and static hand were more than 10% of the pre-permeation ones (P≤0.05), although this was not so on reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm2/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Conclusions: Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation and the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving. PMID:28111415

  19. Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation

    NARCIS (Netherlands)

    Barth, Ruud; Hemming, Jochen; Henten, van E.J.

    2016-01-01

    A modular software framework design that allows flexible implementation of eye-in-hand sensing and motion control for agricultural robotics in dense vegetation is reported. Harvesting robots in cultivars with dense vegetation require multiple viewpoints and on-line trajectory adjustments in order

  20. Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

    OpenAIRE

    Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W

    2014-01-01

    Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and...

  1. Internet remote control interface for a multipurpose robotic arm

    Directory of Open Access Journals (Sweden)

    Matthew W. Dunnigan

    2008-11-01

    Full Text Available This paper presents an Internet remote control interface for a MITSUBISHI PA10-6CE manipulator, established for the ROBOT museum exhibition during spring and summer 2004. The robotic manipulator is part of the Intelligent Robotic Systems Laboratory at Heriot-Watt University, which was established to work on dynamic and kinematic aspects of manipulator control in the presence of environmental disturbances. The laboratory has been enriched by a simple vision system consisting of three web cameras that broadcast live images of the robots over the Internet. The interface comprises a TCP/IP server, providing command parsing and execution using the open controller architecture of the manipulator, and a client Java applet website providing a simple robot control interface.

  2. Novel Approach to Control of Robotic Hand Using Flex Sensors

    Directory of Open Access Journals (Sweden)

    Sandesh R.S

    2014-05-01

    Full Text Available This paper discusses a novel design approach to the control of a robotic hand using flex sensors, realized as a biomechatronic multi-fingered robotic hand. The robotic hand consists of a base unit, upper arm, lower arm, palm and five fingers. The aim is to develop an anthropomorphic five-fingered robotic hand. The proposed design uses 5 micro DC motors with 9 degrees of freedom (DOF); each finger is controlled independently. Three further motors are used for control of the wrist, elbow and base movement. The DC motor is studied using its transfer function model for constant excitation, and motor performance was analysed in the MATLAB simulation environment. The whole system is driven by flex sensors placed on gloves worn on the human hand, so that the glove mimics the motion of a real human hand. An 89v51 microcontroller was used for all the controlling actions, along with an RF transmitter/receiver. The performance of the system has been evaluated experimentally.
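    The constant-excitation DC motor study described in this record can be sketched numerically with a simple Euler integration of the standard armature model; all motor parameters below are illustrative assumptions, not values from the paper:

    ```python
    # Euler-integrated DC motor model under constant excitation. Parameters
    # (R, L, Kt, Ke, J, b) are illustrative assumptions, not the paper's values.
    def simulate_dc_motor(V, t_end=3.0, dt=1e-4,
                          R=1.0, L=0.5, Kt=0.01, Ke=0.01, J=0.01, b=0.1):
        """Return the angular velocity (rad/s) after t_end seconds."""
        i = 0.0  # armature current (A)
        w = 0.0  # angular velocity (rad/s)
        for _ in range(int(t_end / dt)):
            di = (V - R * i - Ke * w) / L   # electrical: L di/dt = V - R i - Ke w
            dw = (Kt * i - b * w) / J       # mechanical: J dw/dt = Kt i - b w
            i += di * dt
            w += dw * dt
        return w

    # For constant excitation the steady-state speed is V*Kt / (R*b + Kt*Ke).
    ```

    The same first-order equations underlie the transfer-function form G(s) = Kt / ((Ls + R)(Js + b) + Kt*Ke) that a MATLAB study of this kind would use.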

  3. Biomimetic actuator and sensor for robot hand

    International Nuclear Information System (INIS)

    Kim, Baekchul; Chung, Jinah; Cho, Hanjoung; Shin, Seunghoon; Lee, Hyoungsuk; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, Jachoon

    2012-01-01

    To manufacture a robot hand that essentially mimics the functions of a human hand, it is necessary to develop flexible actuators and sensors. In this study, we propose the design, manufacture, and performance verification of flexible actuators and sensors based on Electro Active Polymer (EAP). EAP is fabricated as a type of film, and it moves with changes in the voltage because of contraction and expansion in the polymer film. Furthermore, if a force is applied to an EAP film, its thickness and effective area change, and therefore, the capacitance also changes. By using this mechanism, we produce capacitive actuators and sensors. In this study, we propose an EAP-based capacitive sensor and evaluate its use as a robot hand sensor.

  4. Biomimetic actuator and sensor for robot hand

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Baekchul; Chung, Jinah; Cho, Hanjoung; Shin, Seunghoon; Lee, Hyoungsuk; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, Jachoon [Sungkyunkwan Univ., Seoul (Korea, Republic of)

    2012-12-15

    To manufacture a robot hand that essentially mimics the functions of a human hand, it is necessary to develop flexible actuators and sensors. In this study, we propose the design, manufacture, and performance verification of flexible actuators and sensors based on Electro Active Polymer (EAP). EAP is fabricated as a type of film, and it moves with changes in the voltage because of contraction and expansion in the polymer film. Furthermore, if a force is applied to an EAP film, its thickness and effective area change, and therefore, the capacitance also changes. By using this mechanism, we produce capacitive actuators and sensors. In this study, we propose an EAP-based capacitive sensor and evaluate its use as a robot hand sensor.

  5. The Making of a 3D-Printed, Cable-Driven, Single-Model, Lightweight Humanoid Robotic Hand

    Directory of Open Access Journals (Sweden)

    Li Tian

    2017-12-01

    Full Text Available Dexterous robotic hands (Cummings, 1996) can greatly enhance the functionality of humanoid robots, but making such hands with not only a human-like appearance but also the capability of performing the natural movements of social robots is a challenging problem. The first challenge is to create the hand's articulated structure and the second is to actuate it to move like a human hand. A robotic hand for a humanoid robot should look and behave human-like, while also being light and cheap enough for widespread use. We start by studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands that can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints into our hand model. Compared to other robotic hands, our design saves the time required for assembly and adjustment, which makes our robotic hand ready to use right after 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D-printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, similar to a real human hand, and 6 degrees of freedom (DOFs). It is actuated by only six small actuators. The wrist connecting part is also integrated into the hand model and can be customized for different robots such as the Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot's sleeve, and the whole robotic hand platform does not add extra load to her arm, as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand, which is 348 g. The paper also shows our test results with and without silicone artificial hand skin, and on the Nadine robot.

  6. Referral of sensation to an advanced humanoid robotic hand prosthesis.

    Science.gov (United States)

    Rosén, Birgitta; Ehrsson, H Henrik; Antfolk, Christian; Cipriani, Christian; Sebelius, Fredrik; Lundborg, Göran

    2009-01-01

    Hand prostheses that are currently available on the market are used by amputees to only a limited extent, partly because of lack of sensory feedback from the artificial hand. We report a pilot study that showed how amputees can experience a robot-like advanced hand prosthesis as part of their own body. We induced a perceptual illusion by which touch applied to the stump of the arm was experienced from the artificial hand. This illusion was elicited by applying synchronous tactile stimulation to the hidden amputation stump and the robotic hand prosthesis in full view. In five people who had had upper limb amputations this stimulation caused referral touch sensation from the stump to the artificial hand, and the prosthesis was experienced more like a real hand. We also showed that this illusion can work when the amputee controls the movements of the artificial hand by recordings of the arm muscle activity with electromyograms. These observations indicate that the previously described "rubber hand illusion" is also valid for an advanced hand prosthesis, even when it has a robotic-like appearance.

  7. Kinematics/statics analysis of a novel serial-parallel robotic arm with hand

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yi; Dai, Zhuohong; Ye, Nijia; Wang, Peng [Yanshan University, Hebei (China)

    2015-10-15

    A robotic arm with a fingered hand generally has multiple functions for completing various complicated operations. A novel serial-parallel robotic arm with a hand is proposed and its kinematics and statics are studied systematically. A 3D prototype of the serial-parallel robotic arm with a hand is constructed and analyzed by simulation. The serial-parallel robotic arm with a hand is composed of an upper 3RPS parallel manipulator, a lower 3SPR parallel manipulator and a hand with three finger mechanisms. Its kinematics formulae for solving the displacement, velocity, and acceleration are derived. Its statics formula for solving the active/constrained forces is derived. Its reachable workspace and orientation workspace are constructed and analyzed. Finally, an analytic example is given for solving the kinematics and statics of the serial-parallel robotic arm with a hand, and the analytic solutions are verified by a simulation mechanism.

  8. Kinematics/statics analysis of a novel serial-parallel robotic arm with hand

    International Nuclear Information System (INIS)

    Lu, Yi; Dai, Zhuohong; Ye, Nijia; Wang, Peng

    2015-01-01

    A robotic arm with a fingered hand generally has multiple functions for completing various complicated operations. A novel serial-parallel robotic arm with a hand is proposed and its kinematics and statics are studied systematically. A 3D prototype of the serial-parallel robotic arm with a hand is constructed and analyzed by simulation. The serial-parallel robotic arm with a hand is composed of an upper 3RPS parallel manipulator, a lower 3SPR parallel manipulator and a hand with three finger mechanisms. Its kinematics formulae for solving the displacement, velocity, and acceleration are derived. Its statics formula for solving the active/constrained forces is derived. Its reachable workspace and orientation workspace are constructed and analyzed. Finally, an analytic example is given for solving the kinematics and statics of the serial-parallel robotic arm with a hand, and the analytic solutions are verified by a simulation mechanism.

  9. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of a gesture recognition module and a robot behaviour control module. For gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain a neutral hand pose before the training and testing of the CNN. For robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the gesture recognition module can be easily used in the XABSL programming platform to implement robot pick-and-place tasks. An experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, an experiment showed that a cube-stacking task was easily programmed by human demonstration using the XABSL platform.
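    The GMM skin-filtering step described in this record can be sketched as follows; the two-component diagonal-covariance model and all its parameters are illustrative assumptions, not the trained model from the paper:

    ```python
    import math

    # Toy two-component diagonal-covariance GMM over RGB pixels used as a skin
    # filter. Weights, means, and variances are illustrative, not trained values.
    SKIN_GMM = [
        # (weight, mean_rgb, var_rgb)
        (0.6, (200.0, 140.0, 120.0), (900.0, 900.0, 900.0)),
        (0.4, (150.0, 100.0, 80.0), (1600.0, 1600.0, 1600.0)),
    ]

    def skin_likelihood(pixel):
        """Mixture likelihood of an (r, g, b) pixel under the skin GMM."""
        total = 0.0
        for weight, mu, var in SKIN_GMM:
            p = 1.0
            for x, m, v in zip(pixel, mu, var):
                p *= math.exp(-((x - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)
            total += weight * p
        return total

    def skin_mask(image, threshold=1e-9):
        # Keep pixels whose GMM likelihood exceeds the threshold; zero the rest.
        return [[px if skin_likelihood(px) > threshold else (0, 0, 0) for px in row]
                for row in image]
    ```

    In the paper's pipeline the surviving pixels would then be cropped and pose-normalized before being fed to the CNN.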

  10. EthoHand: A dexterous robotic hand with ball-joint thumb enables complex in-hand object manipulation

    OpenAIRE

    Konnaris, C; Gavriel, C; Thomik, AAC; Aldo Faisal, A

    2016-01-01

    Our dexterous hand is a fundamental human feature that distinguishes us from other animals by enabling us to go beyond grasping to sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom, under-actuated by seven motors, with the ability to replicate human hand movements in a naturalistic manner, including in-hand object manipulation. Therefore, we focused on the development of a no...

  11. Augmented robotic device for EVA hand manoeuvres

    Science.gov (United States)

    Matheson, Eloise; Brooker, Graham

    2012-12-01

    During extravehicular activities (EVAs), pressurised space suits can lead to difficulties in performing hand manoeuvres and fatigue. This is often the cause of EVAs being terminated early, or taking longer to complete. Assistive robotic gloves can be used to augment the natural motion of a human hand, meaning work can be carried out more efficiently with less stress to the astronaut. Lightweight and low profile solutions must be found in order for the assistive robotic glove to be easily integrated with a space suit pressure garment. Pneumatic muscle actuators combined with force sensors are one such solution. These actuators are extremely light, yet can output high forces using pressurised gases as the actuation drive. Their movement is omnidirectional, so when combined with a flexible exoskeleton that itself provides a degree of freedom of movement, individual fingers can be controlled during flexion and extension. This setup allows actuators and other hardware to be stored remotely on the user's body, resulting in the least possible mass being supported by the hand. Two prototype gloves have been developed at the University of Sydney; prototype I using a fibreglass exoskeleton to provide flexion force, and prototype II using torsion springs to achieve the same result. The gloves have been designed to increase the ease of human movements, rather than to add unnatural ability to the hand. A state space control algorithm has been developed to ensure that human initiated movements are recognised, and calibration methods have been implemented to accommodate the different characteristics of each wearer's hands. For this calibration technique, it was necessary to take into account the natural tremors of the human hand which may have otherwise initiated unexpected control signals. Prototype I was able to actuate the user's hand in 1 degree of freedom (DOF) from full flexion to partial extension, and prototype II actuated a user's finger in 2 DOF with forces achieved
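    The calibration concern raised in this record, keeping natural hand tremor (roughly 8-12 Hz) from triggering unexpected control signals, can be sketched with a first-order low-pass filter ahead of an intent threshold; the cutoff, threshold, and sample rate below are illustrative assumptions, not the glove's actual state-space controller:

    ```python
    import math

    # Sketch: a deliberate flexion produces a sustained sensor level, while
    # physiological tremor (~8-12 Hz) averages out under a low-pass filter.
    def low_pass(signal, dt, cutoff_hz):
        """First-order (exponential) low-pass filter of a sampled signal."""
        alpha = dt / (dt + 1.0 / (2.0 * math.pi * cutoff_hz))
        out, y = [], signal[0]
        for x in signal:
            y += alpha * (x - y)
            out.append(y)
        return out

    def intent_detected(signal, dt=0.01, cutoff_hz=2.0, threshold=0.5):
        # Trigger actuation only when the filtered level clears the threshold.
        return low_pass(signal, dt, cutoff_hz)[-1] > threshold
    ```

    A 2 Hz cutoff attenuates a 10 Hz tremor component by roughly a factor of five while passing slow, intentional motions almost unchanged.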

  12. Human-like Compliance for Dexterous Robot Hands

    Science.gov (United States)

    Jau, Bruno M.

    1995-01-01

    This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the functionality of the human muscle's secondary function, which is to control the joint's stiffness: AEC is implemented through servo controlling the joint drive train's stiffness. The control strategy, controlling compliant joints in teleoperation, is described. It enables automatic hybrid position and force control through utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable for autonomous robot control as well. Active compliance enables dual arm manipulations, human-like soft grasping by the robot hand, and opens the way to many new robotics applications.

  13. A three-finger multisensory hand for dexterous space robotic tasks

    Science.gov (United States)

    Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo

    1994-01-01

    The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies for RV&D and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.

  14. Development and pilot testing of HEXORR: Hand EXOskeleton Rehabilitation Robot

    Directory of Open Access Journals (Sweden)

    Godfrey Sasha B

    2010-07-01

    Full Text Available Abstract Background Following acute therapeutic interventions, the majority of stroke survivors are left with a poorly functioning hemiparetic hand. Rehabilitation robotics has shown promise in providing patients with intensive therapy leading to functional gains. Because of the hand's crucial role in performing activities of daily living, attention to hand therapy has recently increased. Methods This paper introduces a newly developed Hand Exoskeleton Rehabilitation Robot (HEXORR). This device has been designed to provide full range of motion (ROM) for all of the hand's digits. The thumb actuator allows for variable thumb plane of motion to incorporate different degrees of extension/flexion and abduction/adduction. Compensation algorithms have been developed to improve the exoskeleton's backdrivability by counteracting gravity, stiction and kinetic friction. We have also designed a force assistance mode that provides extension assistance based on each individual's needs. A pilot study was conducted on 9 unimpaired and 5 chronic stroke subjects to investigate the device's ability to allow physiologically accurate hand movements throughout the full ROM. The study also tested the efficacy of the force assistance mode with the goal of increasing stroke subjects' active ROM while still requiring active extension torque on the part of the subject. Results For 12 of the hand digits' 15 joints in neurologically normal subjects, there were no significant ROM differences (P > 0.05) between active movements performed inside and outside of HEXORR. Interjoint coordination was examined in the 1st and 3rd digits, and no differences were found between inside and outside of the device (P > 0.05). Stroke subjects were capable of performing free hand movements inside of the exoskeleton and the force assistance mode was successful in increasing active ROM by 43 ± 5% (P < 0.001) and 24 ± 6% (P = 0.041) for the fingers and thumb, respectively. Conclusions Our pilot study shows that this device is capable of moving the hand's digits through

  15. Development and pilot testing of HEXORR: Hand EXOskeleton Rehabilitation Robot

    Science.gov (United States)

    2010-01-01

    Background Following acute therapeutic interventions, the majority of stroke survivors are left with a poorly functioning hemiparetic hand. Rehabilitation robotics has shown promise in providing patients with intensive therapy leading to functional gains. Because of the hand's crucial role in performing activities of daily living, attention to hand therapy has recently increased. Methods This paper introduces a newly developed Hand Exoskeleton Rehabilitation Robot (HEXORR). This device has been designed to provide full range of motion (ROM) for all of the hand's digits. The thumb actuator allows for variable thumb plane of motion to incorporate different degrees of extension/flexion and abduction/adduction. Compensation algorithms have been developed to improve the exoskeleton's backdrivability by counteracting gravity, stiction and kinetic friction. We have also designed a force assistance mode that provides extension assistance based on each individual's needs. A pilot study was conducted on 9 unimpaired and 5 chronic stroke subjects to investigate the device's ability to allow physiologically accurate hand movements throughout the full ROM. The study also tested the efficacy of the force assistance mode with the goal of increasing stroke subjects' active ROM while still requiring active extension torque on the part of the subject. Results For 12 of the hand digits' 15 joints in neurologically normal subjects, there were no significant ROM differences (P > 0.05) between active movements performed inside and outside of HEXORR. Interjoint coordination was examined in the 1st and 3rd digits, and no differences were found between inside and outside of the device (P > 0.05). Stroke subjects were capable of performing free hand movements inside of the exoskeleton and the force assistance mode was successful in increasing active ROM by 43 ± 5% (P < 0.001) and 24 ± 6% (P = 0.041) for the fingers and thumb, respectively. Conclusions Our pilot study shows that this device

  16. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    Science.gov (United States)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprising semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure obstacles are avoided and that operators are free for higher-level tasks: each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps users pinpoint source information or supports the operator in the goals of the mission. The paper reports a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control modes (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also presented.
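    The "user selects a target, the robot generates a collision-free path" behaviour described in this record can be sketched with breadth-first search on an occupancy grid; the paper's planner is more advanced, and the grid, start, and goal below are purely illustrative:

    ```python
    from collections import deque

    # BFS path planner on an occupancy grid (0 = free, 1 = obstacle).
    # Returns the shortest 4-connected path from start to goal, or None.
    def plan_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}       # visited set doubling as parent pointers
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:   # walk parent pointers back to start
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None  # goal unreachable
    ```

    In a supervisory interface of this kind, the operator only picks the goal cell; the planner guarantees the returned path never crosses an occupied cell.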

  17. Measuring empathy for human and robot hand pain using electroencephalography.

    Science.gov (United States)

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations, such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not painful robot stimuli, although this ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to human others. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

  18. A multichannel-near-infrared-spectroscopy-triggered robotic hand rehabilitation system for stroke patients.

    Science.gov (United States)

    Lee, Jongseung; Mukae, Nobutaka; Arata, Jumpei; Iwata, Hiroyuki; Iramina, Keiji; Iihara, Koji; Hashizume, Makoto

    2017-07-01

    There is a demand for a new neurorehabilitation modality with a brain-computer interface for stroke patients with insufficient or no remaining hand motor function. We previously developed a robotic hand rehabilitation system triggered by multichannel near-infrared spectroscopy (NIRS) to address this demand. In a preliminary prototype system, a robotic hand orthosis, providing one degree-of-freedom motion for a hand's closing and opening, is triggered by a wireless command from a NIRS system, capturing a subject's motor cortex activation. To examine the feasibility of the prototype, we conducted a preliminary test involving six neurologically intact participants. The test comprised a series of evaluations for two aspects of neurorehabilitation training in a real-time manner: classification accuracy and execution time. The effects of classification-related factors, namely the algorithm, signal type, and number of NIRS channels, were investigated. In the comparison of algorithms, linear discrimination analysis performed better than the support vector machine in terms of both accuracy and training time. The oxyhemoglobin versus deoxyhemoglobin comparison revealed that the two concentrations almost equally contribute to the hand motion estimation. The relationship between the number of NIRS channels and accuracy indicated that a certain number of channels are needed and suggested a need for a method of selecting informative channels. The computation time of 5.84 ms was acceptable for our purpose. Overall, the preliminary prototype showed sufficient feasibility for further development and clinical testing with stroke patients.
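    The linear discriminant analysis trigger compared in this record can be sketched for a two-class (rest vs. movement) case using class means and a pooled diagonal covariance; the feature vectors and classes below are illustrative, not the paper's NIRS pipeline:

    ```python
    # Minimal LDA-style classifier: fit class means and a pooled diagonal
    # covariance, then return a linear decision function. Data is illustrative.
    def fit_lda(X0, X1):
        def mean(X):
            n = len(X)
            return [sum(x[d] for x in X) / n for d in range(len(X[0]))]

        m0, m1 = mean(X0), mean(X1)
        dims = len(m0)
        var = [0.0] * dims
        for X, m in ((X0, m0), (X1, m1)):
            for x in X:
                for d in range(dims):
                    var[d] += (x[d] - m[d]) ** 2
        n = len(X0) + len(X1) - 2
        var = [v / n + 1e-9 for v in var]  # pooled diagonal covariance

        # Linear discriminant: w = Sigma^-1 (m1 - m0); b puts the boundary midway.
        w = [(m1[d] - m0[d]) / var[d] for d in range(dims)]
        b = -0.5 * sum(w[d] * (m0[d] + m1[d]) for d in range(dims))
        return lambda x: 1 if sum(w[d] * x[d] for d in range(dims)) + b > 0 else 0
    ```

    Because fitting reduces to means and variances, training is fast, which is one reason LDA can beat an SVM on training time in a real-time trigger like this.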

  19. A pilot study of robotic-assisted exercise for hand weakness after stroke.

    Science.gov (United States)

    Stein, Joel; Bishop, Joel; Gillen, Glen; Helbok, Raimund

    2011-01-01

    Upper limb paresis is a major source of disability in stroke survivors, and robot-aided exercise therapy is a promising approach to enhance motor abilities. Few devices have been available to provide robotic therapy to the fingers and hand. We report an open-label pilot study of 12 individuals with chronic moderate hemiparesis after stroke who underwent a six-week training program using a hand robotic device. Participants received a total of 18 hours of robotic therapy. Improvements were found in multiple measures of motor performance, including the Upper Extremity Fugl-Meyer, the Motor Activity Log, the Manual Ability Measure-36, and the Jebsen Hand Function Test. All subjects tolerated the treatment well and no complications were observed. We conclude that robotic therapy for hand paresis after stroke is safe and feasible, and that further studies of efficacy are justified by these preliminary results. © 2011 IEEE

  20. Hand robotics rehabilitation: feasibility and preliminary results of a robotic treatment in patients with hemiparesis.

    Science.gov (United States)

    Sale, Patrizio; Lombardi, Valentina; Franceschini, Marco

    2012-01-01

    Background. There is as yet no strong clinical evidence for the use of hand robot-assisted therapy in stroke patients. This preliminary observational study aimed to evaluate the efficacy of intensive robot-assisted therapy for hand function recovery in the early phase after stroke onset. Methods. Seven acute ischemic stroke patients at their first-ever stroke were enrolled. Treatment was performed using the Amadeo robotic system (Tyromotion GmbH, Graz, Austria). Each participant received, in addition to the standard inpatient rehabilitative treatment, 20 sessions of robotic treatment over 4 consecutive weeks (5 days/week). Each session lasted 40 minutes. The exercises were carried out as follows: passive modality (5 minutes), passive/plus modality (5 minutes), assisted therapy (10 minutes), and balloon (10 minutes). The following impairment and functional evaluations were performed at the beginning (T0), after 10 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Scale (FM), Medical Research Council Scale for Muscle Strength of the hand flexor and extensor muscles (MRC), Motricity Index (MI), and modified Ashworth Scale for wrist and hand muscles (AS). Hand flexion and extension strength measured by the robot was assessed at T0 and T2, as were the Barthel Index and COMP (performance and satisfaction subscales). Results. Clinical improvements were found in all patients. No dropouts were recorded during the treatment and all subjects completed the protocol. A significant improvement was demonstrated by the Friedman test for the MRC (P hand motor recovery in acute stroke patients. The simplicity of the treatment, the lack of side effects, and the first positive results in acute stroke patients support the recommendation to extend the clinical trial of this treatment, in association with physiotherapy and/or occupational therapy.

  1. A Cross-Platform Tactile Capabilities Interface for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Jie eMa

    2016-04-01

    Full Text Available This article presents the core elements of a cross-platform tactile capabilities interface (TCI) for humanoid arms. The aim of the interface is to reduce the cost of developing humanoid robot capabilities by supporting reuse through cross-platform deployment. The article presents a comparative analysis of existing robot middleware frameworks, as well as the technical details of the TCI framework, which builds on the existing YARP platform. The TCI framework currently includes robot arm actuators with robot skin sensors. It presents such hardware in a platform-independent manner, making it possible to write robot control software that can be executed on different robots through the TCI framework. The TCI framework supports multiple humanoid platforms, and this article also presents a case study of a cross-platform implementation of a set of tactile protective withdrawal reflexes that have been realised on both the Nao and iCub humanoid robot platforms using the same high-level source code.

  2. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  3. Design Of A Low Cost Anthropomorphic Robot Hand For Industrial Applications

    Science.gov (United States)

    Allen, P.; Raleigh, B.

    2009-11-01

    Autonomous grasping systems using anthropomorphic robotic end effectors have many applications, and the potential of such devices has inspired researchers to develop many types of grasping systems over the past 30 years. Their research has yielded significant advances in end effector dexterity and functionality. However, due to the cost and complexity associated with such devices, their role has been largely confined to that of research tools in laboratories. Industry, by contrast, has largely opted for simple, single-task devices. This paper presents a novel low cost anthropomorphic robotic end effector, and in particular the design characteristics that make it more applicable to industrial application. The design brief was (i) to be broadly similar to the human hand in terms of size and performance, (ii) to be low cost (less than €5000 for the system) and (iii) to provide sufficient performance to allow use in industrial applications. Consisting of three fingers and an opposing thumb, the robotic hand developed has a total of 12 automated degrees of freedom. Another 4 degrees of freedom can be set manually. The specific design of the fingers and thumb, together with the drive arrangement utilizing synchronous belts, yields a simplified kinematics solution for the control of movement. The modular nature of the design extends also to the palm, which can be easily modified to produce different overall work envelopes for the hand. The drive system and grasping strategies are also detailed.

  4. Development of a humanoid robot hand with coupling four-bar linkage

    Directory of Open Access Journals (Sweden)

    Xinhua Liu

    2017-01-01

    Full Text Available To improve the operating performance of robots' end-effectors, a humanoid robot hand based on a coupling four-bar linkage was designed. An improved transmission system was proposed for the base joint of the thumb, giving a far greater motion range and a more reasonable layout of the palm. Moreover, a mathematical model for kinematics simulation was presented based on the Assur linkage group theory to verify and optimize the proposed structure. To study the motion relationships between the fingers and the object in the process of grasping an object, a grasping analysis of multi-finger manipulation was presented based on contact kinematics. Finally, a prototype of the humanoid robot hand was produced by a three-dimensional printer, and a kinematics simulation example and the workspace solution of the humanoid robot hand were carried out. The results showed that the velocities of the finger joints approximately met the proportional relationship 1:1:1, which accords with the grasping law of the human hand. In addition, the large workspace, reasonable layout, and good manipulability of the humanoid robot hand were verified.
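    The 1:1:1 joint-velocity relationship reported in this record can be illustrated with planar forward kinematics of a three-phalanx finger whose joints rotate at equal rates; the link lengths below are assumed for illustration, not taken from the paper:

    ```python
    import math

    # Planar fingertip position for a three-phalanx finger whose three joints are
    # coupled to rotate equally (q1 = q2 = q3 = theta), as with a 1:1:1 linkage.
    # Link lengths (metres) are illustrative assumptions.
    def fingertip(theta, lengths=(0.045, 0.025, 0.02)):
        x = y = 0.0
        angle = 0.0
        for link in lengths:
            angle += theta          # each joint adds the same rotation
            x += link * math.cos(angle)
            y += link * math.sin(angle)
        return x, y
    ```

    Sweeping theta from 0 to pi/2 traces the curling motion a coupled four-bar finger produces from a single actuator input.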

  5. Natural control capabilities of robotic hands by hand amputated subjects.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Caputo, Barbara; Muller, Henning

    2014-01-01

    People with transradial hand amputations who own a myoelectric prosthesis currently have some control capabilities via sEMG. However, the control systems are still limited and not natural. The Ninapro project aims to help the scientific community overcome these limits through the creation of publicly available electromyography data sources for developing and testing machine learning algorithms. In this paper we describe the movement classification results obtained from three subjects with a homogeneous level of amputation, and we compare them with the results of 40 intact subjects. The number of subjects may seem small at first sight, but it is not, considering the literature of the field (which faces the difficulty of recruiting transradial hand amputated subjects). The classification is performed with four different classifiers, and the obtained balanced classification rates are up to 58.6% on 50 movements, which is an excellent result compared to the current literature. Subsequently, for each subject we find a subset of up to 9 highly independent movements (defined as movements that can be distinguished with more than 90% accuracy), which is a deeply innovative step in the literature. The natural control of a robotic hand across so many movements could lead to immediate progress in robotic hand prosthetics and could deeply change the quality of life of amputated subjects.
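To make the movement-classification idea concrete, here is a minimal sketch (not the Ninapro pipeline or any of its four classifiers) using two classic sEMG window features and a nearest-centroid rule; the window size, channel count, and class setup are all assumptions:

```python
import numpy as np

def emg_features(window):
    """Two classic sEMG features per channel: mean absolute value (MAV)
    and waveform length (WL). `window` has shape (samples, channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

class NearestCentroid:
    """Minimal nearest-centroid movement classifier."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[np.argmin(d, axis=1)]
```

A balanced classification rate, as reported in the abstract, would then be computed as the mean of per-movement accuracies rather than overall accuracy, so that rarely performed movements are not swamped.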

  6. The robot hand illusion: inducing proprioceptive drift through visuo-motor congruency.

    Science.gov (United States)

    Romano, Daniele; Caffa, Elisa; Hernandez-Arieta, Alejandro; Brugger, Peter; Maravita, Angelo

    2015-04-01

    The representation of one's own body sets the border of the self, but also shapes the space where we interact with external objects. Under particular conditions, such as in the rubber hand illusion, external objects can be incorporated into one's own body representation following congruent visuo-tactile stroking of one's own and a fake hand. This procedure induces an illusory sense of ownership of the fake hand and a shift of proprioceptive localization of one's own hand towards the fake hand. Here we investigated whether purely visuo-motor, instead of visuo-tactile, congruency between one's own hand and a detached myoelectric-controlled robotic hand can induce similar embodiment effects. We found a shift of proprioceptive hand localization toward the robot hand only following synchronized real hand/robot hand movements. Notably, no modulation of the sense of ownership was found following either synchronous- or asynchronous-movement training. Our findings suggest that visuo-motor synchrony can drive the localization of one's own body parts in space, even when somatosensory input is kept constant and the experience of body ownership is maintained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Preliminary results of BRAVO project: brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks.

    Science.gov (United States)

    Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo

    2011-01-01

    This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and recovery/restoration of motor function in patients with upper-limb sensorimotor impairment, through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis, integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the performed action, through an interpretation of the user's current gaze (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE

  8. Robotic Hand Controlling Based on Flexible Sensor

    OpenAIRE

    Bilgin, Süleyman; Üser, Yavuz; Mercan, Muhammet

    2016-01-01

    Today's technology has increased the interest in robotic systems and increased the number of studies realized in this area. The literature contains many studies on robotic systems in several fields intended to facilitate human life. In this study, a robot hand is designed to repeat finger movements depending upon flexible sensors mounted on a wearable glove. In the literature, various sensors that detect finger movement are used. The sensor that detects the angle of the fingers has b...

  9. Fusion of hard and soft control strategies for the robotic hand

    CERN Document Server

    Chen, Cheng-Hung

    2018-01-01

    Long considered the stuff of science fiction, a prosthetic hand capable of fully replicating all of that appendage's various functions is closer to becoming reality than ever before. This book provides a comprehensive report on exciting recent developments in hybrid control techniques—one of the most crucial hurdles to be overcome in creating smart prosthetic hands. Coauthored by two of the world's foremost pioneering experts in the field, Fusion of Hard and Soft Control Strategies for the Robotic Hand treats robotic hands for multiple applications. It begins with an overview of advances in main control techniques that have been made over the past decade before addressing the military context for affordable robotic hand technology with tactile and/or proprioceptive feedback for hand amputees. Kinematics, homogeneous transformations, inverse and differential kinematics, trajectory planning, and dynamic models of the two-link thumb and three-link index finger are discussed in detail. The remainder of the book is...

  10. Effects of electromyography-driven robot-aided hand training with neuromuscular electrical stimulation on hand control performance after chronic stroke.

    Science.gov (United States)

    Rong, Wei; Tong, Kai Yu; Hu, Xiao Ling; Ho, Sze Kit

    2015-03-01

    An electromyography-driven robot system integrated with neuromuscular electrical stimulation (NMES) was developed to investigate its effectiveness in post-stroke rehabilitation. The performance of this system in assisting finger flexion/extension with different assistance combinations was evaluated in five stroke subjects. Then, a pilot study with 20 sessions of training was conducted to evaluate the training's effectiveness. The results showed that combined assistance from the NMES-robot could improve finger movement accuracy, encourage muscle activation of the finger muscles and suppress excessive muscular activity in the elbow joint. When the assistance from both NMES and the robot was set to 50% of the respective maximum, finger-tracking performance was best, with the lowest root mean square error, a greater range of motion, higher voluntary muscle activation of the finger joints and lower muscle co-contraction in the finger and elbow joints. Upper-limb function improved after the 20-session training, as indicated by increased clinical scores on the Fugl-Meyer Assessment, the Action Research Arm Test and the Wolf Motor Function Test. Muscle co-contraction was reduced in the finger and elbow joints, as reflected by the Modified Ashworth Scale. The findings demonstrated that an electromyography-driven NMES-robot used in chronic stroke improved hand function and tracking performance. Further research is warranted to validate the method on a larger scale. Implications for Rehabilitation Hand robotics and neuromuscular electrical stimulation (NMES) techniques are still separate systems in current post-stroke hand rehabilitation. This is the first study to investigate the combined effects of NMES and a robot on hand rehabilitation. Finger-tracking performance was improved with the combined assistance from the EMG-driven NMES-robot hand system. The assistance from the robot could improve the finger movement accuracy and the assistance from the NMES could reduce the
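The 50%/50% assistance scheme can be sketched as a simple proportional mapping from normalized EMG effort to robot and NMES assistance levels. This is an illustrative reconstruction, not the study's controller: the maxima, units, and the linear mapping are all assumptions.

```python
def shared_assistance(emg_norm, robot_ratio=0.5, nmes_ratio=0.5,
                      robot_max=2.0, nmes_max=30.0):
    """Illustrative EMG-driven shared assistance.

    Both channels scale with the user's normalized EMG effort (0..1),
    capped at the chosen fraction of each channel's maximum (50%/50%
    performed best in the study). Returns (robot torque in N*m,
    NMES amplitude in mA); the units and maxima are assumed.
    """
    effort = min(max(emg_norm, 0.0), 1.0)  # clamp effort to [0, 1]
    return robot_ratio * robot_max * effort, nmes_ratio * nmes_max * effort
```

For example, full voluntary effort with the 50%/50% setting yields half of each channel's maximum output; zero (or negative, i.e. noise-floor) effort yields no assistance.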

  11. Design of a robotic device for assessment and rehabilitation of hand sensory function.

    Science.gov (United States)

    Lambercy, Olivier; Robles, Alejandro Juárez; Kim, Yeongmi; Gassert, Roger

    2011-01-01

    This paper presents the design and implementation of the Robotic Sensory Trainer, a robotic interface for assessment and therapy of hand sensory function. The device can provide three types of well controlled stimuli: (i) angular displacement at the metacarpophalangeal (MCP) joint using a remote-center-of-motion double-parallelogram structure, (ii) vibration stimuli at the fingertip, proximal phalange and palm, and (iii) pressure at the fingertip, while recording position, interaction force and feedback from the user over a touch screen. These stimuli offer a novel platform to investigate sensory perception in healthy subjects and patients with sensory impairments, with the potential to assess deficits and actively train detection of specific sensory cues in a standardized manner. A preliminary study with eight healthy subjects demonstrates the feasibility of using the Robotic Sensory Trainer to assess the sensory perception threshold in MCP angular position. An average just noticeable difference (JND) in the MCP joint angle of 2.46° (14.47%) was found, which is in agreement with previous perception studies. © 2011 IEEE
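A JND such as the 2.46° reported above is typically read off a psychometric curve. A minimal sketch of that step, assuming proportion-correct data collected at a few angular differences (the data and the 75% criterion below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def jnd_from_psychometric(deltas, p_correct, threshold=0.75):
    """Estimate the just-noticeable difference as the stimulus difference
    at which the proportion-correct curve crosses `threshold`.

    Uses linear interpolation; `deltas` (e.g. MCP angle differences in
    degrees) and `p_correct` must be sorted in ascending order.
    """
    return float(np.interp(threshold, p_correct, deltas))
```

A fitted sigmoid (e.g. a cumulative Gaussian) would be the more rigorous choice; linear interpolation keeps the sketch dependency-free.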

  12. Hand Robotics Rehabilitation: Feasibility and Preliminary Results of a Robotic Treatment in Patients with Hemiparesis

    Directory of Open Access Journals (Sweden)

    Patrizio Sale

    2012-01-01

    Full Text Available Background. There is no strong clinical evidence for the use of hand robot-assisted therapy in stroke patients. This preliminary observational study was aimed at evaluating the efficacy of intensive robot-assisted therapy in hand function recovery in the early phase after stroke onset. Methods. Seven acute ischemic stroke patients at their first-ever stroke were enrolled. Treatment was performed using the Amadeo robotic system (Tyromotion GmbH, Graz, Austria). Each participant received, in addition to the standard inpatient rehabilitative treatment, 20 sessions of robotic treatment over 4 consecutive weeks (5 days/week). Each session lasted 40 minutes. The exercises were carried out as follows: passive modality (5 minutes), passive/plus modality (5 minutes), assisted therapy (10 minutes), and balloon (10 minutes). The following impairment and functional evaluations, Fugl-Meyer Scale (FM), Medical Research Council Scale for Muscle Strength of the hand flexor and extensor muscles (MRC), Motricity Index (MI), and modified Ashworth Scale for wrist and hand muscles (AS), were performed at the beginning (T0), after 10 sessions (T1), and at the end of the treatment (T2). Hand flexion and extension strength, as measured by the robot, was assessed at T0 and T2. The Barthel Index and COPM (performance and satisfaction subscales) were assessed at T0 and T2. Results. Clinical improvements were found in all patients. No dropouts were recorded during the treatment and all subjects fulfilled the protocol. Evidence of a significant improvement was demonstrated by the Friedman test for the MRC (P<0.0123). Evidence of an improvement was demonstrated for AS, FM, and MI. Conclusions. This original rehabilitation treatment could contribute to increasing hand motor recovery in acute stroke patients. The simplicity of the treatment, the lack of side effects, and the first positive results in acute stroke patients support the recommendations to extend the clinical trial of this

  13. Hand Robotic Therapy in Children with Hemiparesis: A Pilot Study.

    Science.gov (United States)

    Bishop, Lauri; Gordon, Andrew M; Kim, Heakyung

    2017-01-01

    The aim of this study was to understand the impact of training with a hand robotic device on hand paresis and function in a population of children with hemiparesis. Twelve children with hemiparesis (mean age, 9 [SD, 3.64] years) completed participation in this prospective, experimental, pilot study. Participants underwent clinical assessments at baseline and again 6 weeks later with instructions to not initiate new therapies. After these assessments, participants received 6 weeks of training with a hand robotic device, consisting of 1-hour sessions, 3 times weekly. Assessments were repeated on completion of training. Results showed significant improvements after training on the Assisting Hand Assessment (mean difference, 2.0 Assisting Hand Assessment units; P = 0.011) and on the upper-extremity component of the Fugl-Meyer scale (raw score mean difference, 4.334; P = 0.001). No significant improvements between pretest and posttest were noted on the Jebsen-Taylor Test of Hand Function, the Quality of Upper Extremity Skills Test, or the Pediatric Evaluation of Disability Inventory after intervention. Total active mobility of digits and grip strength also failed to demonstrate significant changes after training. Participants tolerated training with the hand robotic device, and significant improvements in bimanual hand use, as well as impairment-based scales, were noted. Improvements were carried over into bimanual skills during play. Complete the self-assessment activity and evaluation online at http://www.physiatry.org/JournalCME CME OBJECTIVES: Upon completion of this article, the reader should be able to: (1) Understand key components of neuroplasticity; (2) Discuss the benefits of robotic therapy in the recovery of hand function in pediatric patients with hemiplegia; and (3) Appropriately incorporate robotic therapy into the treatment plan of pediatric patients with hemiplegia. Advanced ACCREDITATION: The Association of Academic Physiatrists is accredited by the

  14. A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Directory of Open Access Journals (Sweden)

    Andrea Cirillo

    2017-01-01

    Full Text Available Safety of human-robot physical interaction is enabled not only by suitable robot control strategies but also by suitable sensing technologies. For example, if distributed tactile sensors were available on the robot, they could be used not only to detect unintentional collisions, but also as a human-machine interface, enabling a new mode of social interaction with the machine. Starting from their previous work, the authors developed a conformable distributed tactile sensor that can be easily conformed to the different parts of the robot body. Its ability to estimate contact force components and to provide a tactile map with accurate spatial resolution enables the robot to handle both unintentional collisions in safe human-robot collaboration tasks and intentional touches, where the sensor is used as a human-machine interface. In this paper, the authors present the characterization of the proposed tactile sensor and show how it can also be exploited to recognize haptic tactile gestures, by tailoring recognition algorithms well known in the image-processing field to the case of tactile images. In particular, a set of haptic gestures has been defined to test three recognition algorithms on a group of 20 users. The paper demonstrates how the same sensor originally designed to manage unintentional collisions can also be successfully used as a human-machine interface.
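Treating tactile maps as images, as the paper describes, can be sketched with template matching by normalized cross-correlation; the gesture names, template shapes, and matching rule below are illustrative assumptions, not the paper's three recognition algorithms:

```python
import numpy as np

def classify_tactile_gesture(frame, templates):
    """Match a tactile map (2-D array of contact intensities) against
    per-gesture templates using normalized cross-correlation, in the
    spirit of image-processing techniques applied to tactile images.

    Returns the best-matching gesture name and its correlation score.
    """
    f = frame - frame.mean()
    f_norm = max(np.linalg.norm(f), 1e-12)
    best, best_score = None, -np.inf
    for name, tpl in templates.items():
        t = tpl - tpl.mean()
        t_norm = max(np.linalg.norm(t), 1e-12)
        score = float((f * t).sum() / (f_norm * t_norm))
        if score > best_score:
            best, best_score = name, score
    return best, best_score
```

A real pipeline would also normalize for contact position and pressure; here a perfectly matching map scores 1.0 against its own template.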

  15. Human-Manipulator Interface Using Particle Filter

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2014-01-01

    Full Text Available This paper presents a human-robot interface system which incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand, and uses the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error grows within a short period of time when the sensors fail to detect the hand motion; a methodology to correct the translation error is therefore required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.
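A bootstrap particle filter for estimating hand translation, in the spirit of the PF stage described above, can be sketched as follows; this is a simplified stand-in (the Camshift and AMT stages are omitted, and the random-walk motion model and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n_particles=500,
                    process_std=0.05, meas_std=0.1):
    """Bootstrap particle filter estimating a hand's 3-D translation
    from noisy position measurements (e.g. from a depth camera)."""
    particles = rng.normal(0.0, 1.0, size=(n_particles, 3))
    estimates = []
    for z in measurements:
        # Predict: random-walk motion model.
        particles += rng.normal(0.0, process_std, particles.shape)
        # Update: weight particles by Gaussian measurement likelihood.
        sq = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * sq / meas_std ** 2)
        w /= w.sum()
        estimates.append(particles.T @ w)  # weighted-mean estimate
        # Resample (multinomial, for brevity; systematic is preferable).
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

The abstract's point about error growth corresponds to the predict step running alone when measurements drop out: without the update/resample steps, the particle cloud diffuses and the estimate drifts.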

  16. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick, and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm.
The second study expanded on the first study by evaluating the effects of the type of

  17. Control of a Robotic Hand Using a Tongue Control System-A Prosthesis Application.

    Science.gov (United States)

    Johansen, Daniel; Cipriani, Christian; Popovic, Dejan B; Struijk, Lotte N S A

    2016-07-01

    The aim of this study was to investigate the feasibility of using an inductive tongue control system (ITCS) for controlling robotic/prosthetic hands and arms. This study presents a novel dual-modal control scheme for multigrasp robotic hands combining standard electromyogram (EMG) with the ITCS. The performance of the ITCS control scheme was evaluated in a comparative study. Ten healthy subjects used both the ITCS control scheme and a conventional EMG control scheme to complete grasping exercises with the IH1 Azzurra robotic hand implementing five grasps. Time to activate a desired function or grasp (activation time, AT) was used as the performance metric. Statistically significant differences were found when comparing the performance of the two control schemes. On average, the ITCS control scheme was 1.15 s faster than the EMG control scheme, corresponding to a 35.4% reduction in activation time. The largest difference was for grasp 5, with a mean AT reduction of 45.3% (2.38 s). The findings indicate that using the ITCS control scheme could allow for faster activation of specific grasps or functions compared with a conventional EMG control scheme. For transhumeral and especially bilateral amputees, the ITCS control scheme could have a significant impact on prosthesis control. In addition, the ITCS would provide bilateral amputees with the additional advantage of environmental and computer control, for which the ITCS was originally developed.

  18. Development of Advanced Robotic Hand System for space application

    Science.gov (United States)

    Machida, Kazuo; Akita, Kenzo; Mikami, Tatsuo; Komada, Satoru

    1994-01-01

    The Advanced Robotic Hand System (ARH) is a precise telerobotic system with a semi-dexterous hand for future space application. The ARH will be tested in space as one of the missions of the Engineering Test Satellite 7 (ETS-7), which will be launched in 1997. The objectives of the ARH development are to evaluate the capability of a possible robot hand for precise and delicate tasks and to validate the related technologies implemented in the system. The ARH is designed to be controlled both from the ground via teleoperation and by locally autonomous control. This paper presents the overall system design and the functional capabilities of the ARH, as well as its mission outline, as the preliminary design has been completed.

  19. 2D Hand Tracking Based on Flocking with Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Zihong Chen

    2014-02-01

    Full Text Available Hand gesture-based interaction provides a natural and powerful means for human-computer interaction. It is also a good interface for human-robot interaction. However, most of the existing proposals are likely to fail when they encounter skin-coloured objects, especially the face region. In this paper, we present a novel hand tracking method which can track the features of the hand based on the obstacle avoidance flocking behaviour model to overcome skin-coloured distractions. It allows features to be split into two groups under severe distractions and to merge later. The experimental results show that our method can track the hand in a cluttered background or when passing the face, while the Flocking of Features (FoF) and the Mean Shift Embedded Particle Filter (MSEPF) methods may fail. These results suggest that our method performs better than the previous methods. It may therefore help promote the use of hand gesture-based human-robot interaction.
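The flocking-with-obstacle-avoidance idea can be sketched as a per-feature update that combines attraction to a tracking target, cohesion toward the flock centroid, and repulsion from a (skin-coloured) obstacle region; the weights and radius below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def flock_step(features, target, obstacle, obstacle_radius=0.5,
               w_target=0.3, w_cohesion=0.2, w_avoid=1.0):
    """One flocking update for 2-D feature points (shape (N, 2)).

    Features drift toward the target and the flock centroid, and any
    feature inside `obstacle_radius` of the obstacle is pushed back out.
    """
    centroid = features.mean(axis=0)
    step = w_target * (target - features) + w_cohesion * (centroid - features)
    diff = features - obstacle
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    near = dist < obstacle_radius
    # Repel features that entered the obstacle region (e.g. the face).
    step += np.where(near, w_avoid * diff / np.maximum(dist, 1e-9), 0.0)
    return features + step
```

With the obstacle far away the flock simply converges on the target; when a feature drifts into the obstacle radius, the repulsion term dominates and pushes it out, which is the mechanism that keeps the tracker off skin-coloured distractors.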

  20. What if the hand piece spring disassembles during robotic radical prostatectomy?

    Science.gov (United States)

    Akbulut, Ziya; Canda, Abdullah Erdem; Atmaca, Ali Fuat; Asil, Erem; Isgoren, Egemen; Balbay, Mevlana Derya

    2011-01-01

    Robot-assisted laparoscopic radical prostatectomy (RALRP) is successfully being performed for treating prostate cancer (PCa). However, instrumentation failure associated with robotic procedures represents a unique new problem. We report the successful completion of RALRP in spite of a hand-piece spring that disassembled during the procedure. A PubMed/Medline search was made concerning robotic malfunction and robot-assisted laparoscopic radical prostatectomy to discuss our experience. We performed RALRP in a 60-year-old male patient with localized PCa. During the procedure, the spring of the hand piece disassembled, and we were not able to reassemble it. We nevertheless completed the procedure successfully without fixing the disassembled hand-piece spring: we were able to grasp tissue and needles by bringing our fingers together, and the only movement needed to release tissue or needles caught by the robotic instrument was to move the fingers apart. Although the risk of malfunction of the da Vinci Surgical System seems to be very low, it might still occur. Sometimes, simple maneuvers may compensate for the failed function, as occurred in our case. However, patients should be informed before the operation about the possibility of conversion to a laparoscopic or open procedure due to robotic malfunction.

  1. An Intelligent Inference System for Robot Hand Optimal Grasp Preshaping

    Directory of Open Access Journals (Sweden)

    Cabbar Veysel Baysal

    2010-11-01

    Full Text Available This paper presents a novel Intelligent Inference System (IIS) for determining an optimum preshape for multifingered robot hand grasping of a given object under a manipulation task. The IIS is formed as a hybrid agent architecture, through the synthesis of object properties, manipulation task characteristics, grasp space partitioning, low-level kinematical analysis, evaluation of contact wrench patterns via fuzzy approximate reasoning, and an ANN structure for incremental learning. The IIS is implemented in software with a robot hand simulation.

  2. Durable Tactile Glove for Human or Robot Hand

    Science.gov (United States)

    Butzer, Melissa; Diftler, Myron A.; Huber, Eric

    2010-01-01

    A glove containing force sensors has been built as a prototype of tactile sensor arrays to be worn on human hands and anthropomorphic robot hands. The force sensors of this glove are mounted inside, in protective pockets; as a result of this and other design features, the present glove is more durable than earlier models.

  3. Maps managing interface design for a mobile robot navigation governed by a BCI

    International Nuclear Information System (INIS)

    Auat Cheein, Fernando A; Carelli, Ricardo; Celeste, Wanderley Cardoso; Freire Bastos, Teodiano; Di Sciascio, Fernando

    2007-01-01

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs the movements of a mobile robot. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate that environment. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is launched in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While the SLAM algorithm is running, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.
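The probabilistic grid map that the SLAM module stores for later navigation is commonly maintained as a log-odds occupancy grid; a minimal sketch of that representation (the update constants below are typical values, assumed rather than taken from the paper):

```python
import numpy as np

def update_occupancy(log_odds, hit_cells, miss_cells,
                     l_hit=0.85, l_miss=-0.4):
    """Log-odds occupancy-grid update: cells where a range sensor
    detected an obstacle ('hits') become more likely occupied, cells
    the beam passed through ('misses') become more likely free."""
    log_odds = log_odds.copy()
    for r, c in hit_cells:
        log_odds[r, c] += l_hit
    for r, c in miss_cells:
        log_odds[r, c] += l_miss
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds back to occupancy probabilities in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Unobserved cells keep log-odds 0, i.e. probability 0.5, which is why such maps distinguish "unknown" from "free" and "occupied" when loaded for navigation.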

  4. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    Science.gov (United States)

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
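The AX = ZB relationship can be solved in a least-squares sense by linearizing the rotation constraint with Kronecker products. The sketch below shows one standard separable approach (rotations first, then translations) under noise-free assumptions with at least three pose pairs of distinct rotations; it is not necessarily the parameterization the paper proposes:

```python
import numpy as np

def solve_axzb(As, Bs):
    """Solve A_i X = Z B_i for 4x4 homogeneous transforms X, Z.

    Rotation part: vec(R_A R_X) = (I kron R_A) vec(R_X) and
    vec(R_Z R_B) = (R_B^T kron I) vec(R_Z) (column-major vec), so the
    stacked constraints have [vec(R_X); vec(R_Z)] in their nullspace.
    """
    I3 = np.eye(3)
    M = []
    for A, B in zip(As, Bs):
        RA, RB = A[:3, :3], B[:3, :3]
        M.append(np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)]))
    _, _, Vt = np.linalg.svd(np.vstack(M))
    v = Vt[-1]  # nullspace vector, arbitrary common scale/sign
    RX = v[:9].reshape(3, 3, order="F")
    RZ = v[9:].reshape(3, 3, order="F")

    def to_so3(R):
        # Fix the sign via det, then project onto SO(3) with an SVD.
        s = np.sign(np.linalg.det(R))
        U, _, Wt = np.linalg.svd(s * R)
        return U @ Wt

    RX, RZ = to_so3(RX), to_so3(RZ)
    # Translations: R_A t_X + t_A = R_Z t_B + t_Z
    #           =>  [R_A  -I] [t_X; t_Z] = R_Z t_B - t_A
    C, d = [], []
    for A, B in zip(As, Bs):
        C.append(np.hstack([A[:3, :3], -I3]))
        d.append(RZ @ B[:3, 3] - A[:3, 3])
    t = np.linalg.lstsq(np.vstack(C), np.concatenate(d), rcond=None)[0]
    X = np.eye(4); X[:3, :3] = RX; X[:3, 3] = t[:3]
    Z = np.eye(4); Z[:3, :3] = RZ; Z[:3, 3] = t[3:]
    return X, Z
```

Minimizing camera reprojection error directly, as the paper's title suggests, would instead optimize X and Z against image observations; the linear solution above is the kind of closed-form estimate such an optimization is typically initialized with.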

  5. Robotic hand with locking mechanism using TCP muscles for applications in prosthetic hand and humanoids

    Science.gov (United States)

    Saharan, Lokesh; Tadesse, Yonas

    2016-04-01

    This paper presents a biomimetic, lightweight, 3D-printed and customizable robotic hand with a locking mechanism, using twisted and coiled polymer (TCP) muscles based on nylon precursor fibers as artificial muscles. Previously, we presented a small-sized biomimetic hand using nylon-based artificial muscles and fishing-line muscles as actuators. The current study focuses on an adult-sized prosthetic hand with an improved design and a position/force locking system. Energy efficiency is always a concern in making compact, lightweight, durable and cost-effective devices. In the natural human hand, holding an object for a long time is tiring because energy is continuously expended to keep the fingers in certain positions. Similarly, in prosthetic hands, energy must be supplied continuously to the artificial muscles to hold an object for a certain period of time, which is not energy efficient. In this work, we describe the design of the robotic hand and locking mechanism, along with experimental results on the performance of the locking mechanism.

  6. Hand-assisted hybrid laparoscopic-robotic total proctocolectomy with ileal pouch-anal anastomosis.

    Science.gov (United States)

    Morelli, Luca; Guadagni, Simone; Mariniello, Maria Donatella; Furbetta, Niccolò; Pisano, Roberta; D'Isidoro, Cristiano; Caprili, Giovanni; Marciano, Emanuele; Di Candio, Giulio; Boggi, Ugo; Mosca, Franco

    2015-08-01

    Few studies have reported minimally invasive total proctocolectomy with ileal pouch-anal anastomosis (IPAA) for ulcerative colitis (UC) and familial adenomatous polyposis (FAP). We herein report a novel hand-assisted hybrid laparoscopic-robotic technique for patients with FAP and UC. Between February 2010 and March 2014, six patients underwent hand-assisted hybrid laparoscopic-robotic total proctocolectomy with IPAA. The abdominal colectomy was performed laparoscopically with hand assistance through a transverse suprapubic incision, also used to fashion the ileal pouch. The proctectomy was carried out with the da Vinci Surgical System. The IPAA was hand-sewn through a trans-anal approach. The procedure was complemented by a temporary diverting loop ileostomy. The mean hand-assisted laparoscopic surgery (HALS) time was 154.6 (±12.8) min, whereas the mean robotic time was 93.6 (±8.1) min. In all cases, a nerve-sparing proctectomy was performed, and no conversion to traditional laparotomy was required. The mean postoperative hospital stay was 13.2 (±7.4) days. No anastomotic leakage was observed. To date, no autonomic neurological disorders have been observed, with a mean of 5.8 (±1.3) bowel movements per day. The hand-assisted hybrid laparoscopic-robotic approach to total proctocolectomy with IPAA has not been previously described. Our report shows the feasibility of this hybrid approach, which overcomes most of the limitations of pure laparoscopic and robotic techniques. Further experience is necessary to refine the technique and fully assess its potential advantages.

  7. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery.

    Science.gov (United States)

    Travaglini, T A; Swaney, P J; Weaver, Kyle D; Webster, R J

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap.

  8. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery

    Science.gov (United States)

    Travaglini, T. A.; Swaney, P. J.; Weaver, Kyle D.; Webster, R. J.

    2016-01-01

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap. PMID:26752501

  9. Maps managing interface design for a mobile robot navigation governed by a BCI

    Energy Technology Data Exchange (ETDEWEB)

    Auat Cheein, Fernando A [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Carelli, Ricardo [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina); Celeste, Wanderley Cardoso [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Freire Bastos, Teodiano [Electrical Engineering Department, Federal University of Espirito Santo. Fernando Ferrari, 514 29075-910 Vitoria-ES (Brazil); Di Sciascio, Fernando [Institute of Automatic, National University of San Juan. San Martin, 1109 - Oeste 5400 San Juan (Argentina)

    2007-11-15

    In this paper, a maps managing interface is proposed. This interface is governed by a Brain Computer Interface (BCI), which also governs a mobile robot's movements. If the robot is inside a known environment, the user can load a map from the maps managing interface in order to navigate it. Otherwise, if the robot is in an unknown environment, a Simultaneous Localization and Mapping (SLAM) algorithm is executed in order to obtain a probabilistic grid map of that environment. That map is then loaded into the map database for future navigations. While mapping, the user has direct control of the robot's movements via the BCI. The complete system is applied to a mobile robot and can also be applied to an autonomous wheelchair, which has the same kinematics. Experimental results are also shown.
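The probabilistic grid map mentioned above is commonly maintained with log-odds occupancy updates. A minimal sketch of that standard technique follows; the sensor-model constants `l_occ` and `l_free` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_cell(logodds, hit, l_occ=0.85, l_free=-0.4):
    """Log-odds update of one grid cell from a range-sensor observation."""
    return logodds + (l_occ if hit else l_free)

def to_probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))

# Tiny map: cells along one laser ray; last cell is the hit, rest are free.
grid = np.zeros(5)                      # log-odds 0 => p = 0.5 (unknown)
for _ in range(10):                     # 10 repeated observations of the ray
    for i in range(4):
        grid[i] = update_cell(grid[i], hit=False)
    grid[4] = update_cell(grid[4], hit=True)

p = to_probability(grid)
# Free cells drift toward 0, the hit cell toward 1.
```

Repeated consistent observations sharpen each cell away from the 0.5 "unknown" prior, which is what makes the resulting grid usable for later navigation.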

  10. Analysis of relative displacement between the HX wearable robotic exoskeleton and the user's hand.

    Science.gov (United States)

    Cempini, Marco; Marzegan, Alberto; Rabuffetti, Marco; Cortese, Mario; Vitiello, Nicola; Ferrarin, Maurizio

    2014-10-18

    Advances in technology are allowing for the production of several viable wearable robotic devices to assist with activities of daily living and with rehabilitation. One of the most pressing limitations to user satisfaction is the lack of consistency in motion between the user and the robotic device. The displacement between the robot and the body segment may not correspond because of differences in skin and tissue compliance, mechanical backlash, and/or incorrect fit. This report presents the results of an analysis of relative displacement between the user's hand and a wearable exoskeleton, the HX. HX has been designed to maximize comfort, wearability and user safety, exploiting chains with multiple degrees-of-freedom with a modular architecture. These appealing features may introduce several uncertainties in the kinematic performances, especially when considering the anthropometry, morphology and degree of mobility of the human hand. The small relative displacements between the hand and the exoskeleton were measured with a video-based motion capture system, while the user executed several different grips in different exoskeleton modes. The analysis furnished quantitative results about the device performance, differentiated among device modules and test conditions. In general, the global relative displacement for the distal part of the device was in the range 0.5-1.5 mm, while within 3 mm (worse but still acceptable) for displacements nearest to the hand dorsum. Conclusions over the HX design principles have been drawn, as well as guidelines for future developments.

  11. Multichannel noninvasive human-machine interface via stretchable µm thick sEMG patches for robot manipulation

    Science.gov (United States)

    Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn

    2018-01-01

    Epidermal electronics (e-skin) emerging in recent years offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional fabrication of e-skin, based on standard microelectronic processes and a variety of transfer printing methods, nevertheless constrains the size of the devices, posing a serious challenge to collecting signals via skin, the largest organ of the human body. Herein we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture µm-thick, large-scale stretchable devices. With micron thickness, the stretchable sEMG patches show excellent conformability with human skin and consequently electrical performance comparable to conventional gel electrodes. Combined with their large-scale size, the multichannel noninvasive HMI based on stretchable µm-thick sEMG patches successfully manipulates the robot hand with eight different gestures, with precision as high as that of a conventional gel electrode array.
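Mapping multichannel sEMG to discrete gestures typically starts from per-channel features such as RMS amplitude. A hedged sketch of one simple decoding approach (nearest-centroid classification on synthetic data; the paper's actual pipeline is not specified here):

```python
import numpy as np

def rms_features(window):
    """Per-channel RMS of an sEMG window of shape (channels, samples)."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def fit_centroids(windows, labels):
    """Mean RMS feature vector per gesture label."""
    feats = np.array([rms_features(w) for w in windows])
    labels = np.asarray(labels)
    return {g: feats[labels == g].mean(axis=0) for g in np.unique(labels)}

def classify(window, centroids):
    """Assign the gesture whose centroid is nearest in feature space."""
    f = rms_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Synthetic 4-channel data: gesture 0 activates channel 0, gesture 1 channel 2.
rng = np.random.default_rng(0)
def synth(active):
    w = 0.05 * rng.standard_normal((4, 200))   # baseline noise
    w[active] += rng.standard_normal(200)       # strong activity on one channel
    return w

train = [synth(0) for _ in range(5)] + [synth(2) for _ in range(5)]
cents = fit_centroids(train, [0] * 5 + [1] * 5)
```

In a real eight-gesture system each gesture would excite a characteristic pattern across many electrodes, but the centroid idea carries over unchanged.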

  12. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available Pantomimic Gestures for Human-Robot Interaction, Michael Burke and Joan Lasenby, IEEE Transactions on Robotics. Abstract: This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  13. Mobile Mixed-Reality Interfaces That Enhance Human–Robot Interaction in Shared Spaces

    Directory of Open Access Journals (Sweden)

    Jared A. Frank

    2017-06-01

    Full Text Available Although user interfaces with gesture-based input and augmented graphics have promoted intuitive human–robot interactions (HRI), they are often implemented in remote applications on research-grade platforms requiring significant training and limiting operator mobility. This paper proposes a mobile mixed-reality interface approach to enhance HRI in shared spaces. As a user points a mobile device at the robot’s workspace, a mixed-reality environment is rendered providing a common frame of reference for the user and robot to effectively communicate spatial information for performing object manipulation tasks, improving the user’s situational awareness while interacting with augmented graphics to intuitively command the robot. An evaluation with participants is conducted to examine task performance and user experience associated with the proposed interface strategy in comparison to conventional approaches that utilize egocentric or exocentric views from cameras mounted on the robot or in the environment, respectively. Results indicate that, despite the suitability of the conventional approaches in remote applications, the proposed interface approach provides comparable task performance and user experiences in shared spaces without the need to install operator stations or vision systems on or around the robot. Moreover, the proposed interface approach provides users the flexibility to direct robots from their own visual perspective (at the expense of some physical workload) and leverages the sensing capabilities of the tablet to expand the robot’s perceptual range.

  14. Universal Robot Hand Equipped with Tactile and Joint Torque Sensors: Development and Experiments on Stiffness Control and Object Recognition

    Directory of Open Access Journals (Sweden)

    Hiroyuki NAKAMOTO

    2007-04-01

    Full Text Available Various humanoid robots have been developed, and multifunctional robot hands that can be attached to those robots, functioning like a human hand, are needed. However, a practical robot hand has not yet been developed, because there are many problems, such as the control of many degrees of freedom and the processing of enormous sensor outputs. To realize such a robot hand, we have developed a five-finger robot hand. In this paper, the detailed structure of the developed robot hand is described. The robot hand we developed has five multi-joint fingers equipped with joint torque sensors and tactile sensors. We report experimental results of stiffness control with the developed robot hand. Those results show that it is possible to change the stiffness of the joints. Moreover, we propose an object recognition method using the tactile sensors. The validity of that method is confirmed by experimental results.
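The stiffness-control experiment can be illustrated with a standard joint-space stiffness law, where the commanded torque acts like a virtual spring about the desired angle. This is a generic sketch, not the authors' controller; the gains and the damping term are illustrative assumptions.

```python
def stiffness_torque(q, q_des, k, tau_meas, damping=0.0, qdot=0.0):
    """Commanded joint torque for a virtual spring of stiffness k.

    tau_meas (from the joint torque sensor) would typically be monitored
    to detect contact or close an inner torque loop; here we simply
    return the spring-damper command.
    """
    return k * (q_des - q) - damping * qdot

# Doubling k doubles the restoring torque for the same deflection,
# i.e. the finger joint feels "stiffer" to an external push.
soft = stiffness_torque(q=0.1, q_des=0.0, k=2.0, tau_meas=0.0)
stiff = stiffness_torque(q=0.1, q_des=0.0, k=4.0, tau_meas=0.0)
```

Changing `k` online is what "changing the stiffness of the joints" amounts to at the control level.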

  15. Low-cost design and fabrication of an anthropomorphic robotic hand.

    Science.gov (United States)

    Junaid, Ali Bin; Tahir, Sanan; Rasheed, Tahir; Ahmed, Sharjeel; Sohail, Mehreen; Afzal, Muhammad Raheel; Ali, Muzaffar; Kim, Yoonsoo

    2014-10-01

    The human hand is a magnificent and challenging example for scientists and engineers trying to replicate its complex structure and functionality. This paper proposes a bio-mechatronic approach for the design of an anthropomorphic artificial hand capable of performing basic human hand motions with fundamental gripping functionality. The dexterity of the artificial hand is exhibited by imitating the natural motion of the human fingers. Imitation is produced according to the data acquired from flex sensors attached to the human fingers. In order to achieve proper gripping, closed-loop control is implemented using tactile sensors. Feedback for the closed-loop control is provided by force sensing resistors (FSRs) attached to the fingertips of the robotic hand. These sensors also enable handling of fragile objects. The mathematical model is derived using forward kinematics and simulated in MATLAB to ascertain the position of the robotic fingers in 3D space.
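The forward-kinematics step mentioned above reduces, for a single finger moving in its flexion plane, to accumulating joint angles along the phalanx chain. A sketch (link lengths are illustrative, not taken from the paper):

```python
import numpy as np

def fingertip_position(thetas, lengths):
    """Planar forward kinematics of a 3-phalanx finger.

    thetas  -- joint angles (MCP, PIP, DIP) in radians
    lengths -- phalanx lengths, same unit as the result
    """
    x = y = 0.0
    angle = 0.0
    for theta, length in zip(thetas, lengths):
        angle += theta            # absolute angles accumulate along the chain
        x += length * np.cos(angle)
        y += length * np.sin(angle)
    return x, y

# A straight finger points along x, so the coordinates are just the
# summed phalanx lengths (here in metres).
tip = fingertip_position([0.0, 0.0, 0.0], [0.04, 0.025, 0.02])
```

The full 3D model in the paper adds abduction/adduction, but the planar case captures the flexion/extension geometry the flex sensors drive.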

  16. Research on direct calibration method of eye-to-hand system of robot

    Science.gov (United States)

    Hu, Xiaoping; Xie, Ke; Peng, Tao

    2013-10-01

    In position-based visual servoing control for robots, hand-eye calibration is very important because it affects the control precision of the system. For a robot with an eye-to-hand stereovision system, this paper proposes a direct method of hand-eye calibration. The method uses the triangulation principle to solve for the coordinates of a scene point in the camera coordinate system. It calculates the estimated coordinates through the hand-eye calibration equation set, which encodes the transformation from the robot to the camera coordinate system, and then uses the error between the actual and estimated coordinates to establish the objective function. Finally, the method substitutes the parameters into the function iteratively until it converges to the optimal result. A related experiment, comparing the measured coordinates with the actual coordinates, shows the efficiency and precision of the method.
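One standard building block for this kind of calibration is the closed-form least-squares alignment of corresponding 3-D points in the robot and camera frames (the Kabsch/SVD method). This is a sketch of that generic step, not the paper's exact iterative optimization:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ≈ R @ P + t (Kabsch/SVD method).

    P, Q -- (3, N) corresponding points, e.g. in robot and camera frames.
    """
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

# Recover a known transform from noiseless synthetic correspondences.
rng = np.random.default_rng(1)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([[0.1], [0.2], [0.3]])
P = rng.standard_normal((3, 10))
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
```

With noisy triangulated points, this closed-form estimate is a natural initial guess for the iterative refinement the paper describes.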

  17. The cortical activation pattern by a rehabilitation robotic hand: a functional NIRS study.

    Science.gov (United States)

    Chang, Pyung-Hun; Lee, Seung-Hee; Gu, Gwang Min; Lee, Seung-Hyun; Jin, Sang-Hyun; Yeo, Sang Seok; Seo, Jeong Pyo; Jang, Sung Ho

    2014-01-01

    Clarification of the relationship between external stimuli and brain response has been an important topic in neuroscience and brain rehabilitation. In the current study, using functional near infrared spectroscopy (fNIRS), we attempted to investigate cortical activation patterns generated during execution of a rehabilitation robotic hand. Ten normal subjects were recruited for this study. Passive movements of the right fingers were performed using a rehabilitation robotic hand at a frequency of 0.5 Hz. We measured values of oxy-hemoglobin (HbO), deoxy-hemoglobin (HbR) and total-hemoglobin (HbT) in five regions of interest: the primary sensory-motor cortex (SM1), hand somatotopy of the contralateral SM1, supplementary motor area (SMA), premotor cortex (PMC), and prefrontal cortex (PFC). HbO and HbT values indicated significant activation in the left SM1, left SMA, left PMC, and left PFC during execution of the rehabilitation robotic hand (uncorrected, p < 0.01). By contrast, HbR value indicated significant activation only in the hand somatotopic area of the left SM1 (uncorrected, p < 0.01). Our results appear to indicate that execution of the rehabilitation robotic hand could induce cortical activation.

  18. The cortical activation pattern by a rehabilitation robotic hand : A functional NIRS study

    Directory of Open Access Journals (Sweden)

    Pyung Hun eChang

    2014-02-01

    Full Text Available Introduction: Clarification of the relationship between external stimuli and brain response has been an important topic in neuroscience and brain rehabilitation. In the current study, using functional near infrared spectroscopy (fNIRS), we attempted to investigate cortical activation patterns generated during execution of a rehabilitation robotic hand. Methods: Ten normal subjects were recruited for this study. Passive movements of the right fingers were performed using a rehabilitation robotic hand at a frequency of 0.5 Hz. We measured values of oxy-hemoglobin (HbO), deoxy-hemoglobin (HbR) and total-hemoglobin (HbT) in five regions of interest: the primary sensory-motor cortex (SM1), hand somatotopy of the contralateral SM1, supplementary motor area (SMA), premotor cortex (PMC), and prefrontal cortex (PFC). Results: HbO and HbT values indicated significant activation in the left SM1, left SMA, left PMC, and left PFC during execution of the rehabilitation robotic hand (uncorrected, p < 0.01). By contrast, the HbR value indicated significant activation only in the hand somatotopic area of the left SM1 (uncorrected, p < 0.01). Conclusions: Our results appear to indicate that execution of the rehabilitation robotic hand could induce cortical activation.

  19. Robot training for hand motor recovery in subacute stroke patients: A randomized controlled trial.

    Science.gov (United States)

    Orihuela-Espina, Felipe; Roldán, Giovana Femat; Sánchez-Villavicencio, Israel; Palafox, Lorena; Leder, Ronald; Sucar, Luis Enrique; Hernández-Franco, Jorge

    2016-01-01

    Evidence of the superiority of robot training for the hand over classical therapies in stroke patients remains controversial. During the subacute stage, hand training is likely to be the most useful. The aim was to establish whether robot active assisted therapy provides any additional motor recovery for the hand when administered during the subacute stage, under the hypothesis that robot-based therapies for hand recovery will show significant differences at subacute stages. A between-subjects randomized controlled trial was carried out on subacute stroke patients (n = 17) comparing robot active assisted therapy (RT) with classical occupational therapy (OT). Both groups received 40 sessions ensuring at least 300 repetitions per session. Treatment duration was (mean ± std) 2.18 ± 1.25 months for the control group and 2.44 ± 0.88 months for the study group. The primary outcome was change in motor dexterity assessed with the Fugl-Meyer Assessment (FMA) and the Motricity Index (MI). Both groups (OT: n = 8; RT: n = 9) exhibited significant improvements over time (non-parametric within-group Cliff's delta effect sizes: dwOT-FMA = 0.5, dwOT-MI = 0.5, dwRT-FMA = 1, dwRT-MI = 1). Regarding differences between the therapies, the Fugl-Meyer score indicated a significant advantage for hand training with the robot (FMA hand: WRS: W = 8). Hand prehension showed the expected trend in favour of RT with respect to OT but failed to reach significance (MI prehension: W = 17.5, p = 0.080). No harm occurred. Robotic therapies may be useful during the subacute stages of stroke: both endpoints (FMA hand and MI prehension) showed the expected trend with a bigger effect size for the robotic intervention. The additional benefit of the robotic therapy over the control therapy was only significant when the difference was measured with the FMA, demanding further investigation with larger samples. Implications of this study are important for decision making during therapy administration and resource allocation.
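The Cliff's delta effect size reported above is a simple non-parametric statistic: the proportion of pairs in which one sample exceeds the other, minus the reverse proportion. A minimal implementation of the general formula (not the study's analysis code):

```python
import numpy as np

def cliffs_delta(x, y):
    """Cliff's delta effect size between two samples.

    +1 means every value in x exceeds every value in y, -1 the reverse,
    and 0 means complete overlap.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x[:, None] - y[None, :]      # all pairwise differences
    return np.sign(diff).mean()

# Fully separated samples give |delta| = 1; identical samples give 0.
d_sep = cliffs_delta([5, 6, 7], [1, 2, 3])
d_same = cliffs_delta([1, 2, 3], [1, 2, 3])
```

For within-group effect sizes, x and y would be the post- and pre-treatment scores of the same group.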

  20. A General Contact Force Analysis of an Under-Actuated Finger in Robot Hand Grasping

    Directory of Open Access Journals (Sweden)

    Xuan Vinh Ha

    2016-02-01

    Full Text Available This paper develops a mathematical analysis of contact forces for the under-actuated finger of a general under-actuated robotic hand during grasping. The concept of under-actuation in robotic grasping, with fewer actuators than degrees of freedom (DOF), through the use of springs and mechanical limits, allows the hand to adjust itself to an irregularly shaped object without complex control strategies and sensors. The main concern here is the contact forces, which are important elements in grasping tasks, based on the proposed mathematical analysis of their distribution along the n-DOF under-actuated finger. The simulation results, along with those of a 3-DOF finger from an ADAMS model, show the effectiveness of the mathematical analysis method when compared with the measured results. The system can find the magnitudes of the contact forces at the contact positions between the phalanges and the object.
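The link between joint torques and contact forces runs through the contact-point Jacobian: for a static grasp, tau = J.T @ f. A planar two-phalanx sketch of that relation (the geometry and the example force are illustrative assumptions, not the paper's model):

```python
import numpy as np

def contact_jacobian(thetas, lengths, contact_link, s):
    """Planar Jacobian of a contact point located a fraction s along
    phalanx `contact_link` (0-indexed) of a serial finger."""
    abs_ang = np.cumsum(thetas[:contact_link + 1])   # absolute link angles
    eff = list(lengths[:contact_link]) + [s * lengths[contact_link]]
    J = np.zeros((2, contact_link + 1))
    for j in range(contact_link + 1):
        # column j: derivative of the contact position w.r.t. theta_j
        dx = dy = 0.0
        for k in range(j, contact_link + 1):
            dx -= eff[k] * np.sin(abs_ang[k])
            dy += eff[k] * np.cos(abs_ang[k])
        J[:, j] = (dx, dy)
    return J

def contact_force(tau, J):
    """Solve tau = J.T @ f for the planar contact force f."""
    return np.linalg.solve(J.T, tau)

# 2-phalanx finger, PIP flexed 90 degrees, contact at the fingertip.
theta = np.array([0.0, np.pi / 2])
J = contact_jacobian(theta, [0.05, 0.04], contact_link=1, s=1.0)
f_true = np.array([1.0, 2.0])   # hypothetical contact force [N]
tau = J.T @ f_true              # joint torques this force would produce
f = contact_force(tau, J)       # ...and recovered back from the torques
```

In the under-actuated case the joint torques are set by the actuator and the springs, and multiple contacts make the system larger, but the Jacobian-transpose relation is the same ingredient.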

  1. An EMG-driven exoskeleton hand robotic training device on chronic stroke subjects: task training system for stroke rehabilitation.

    Science.gov (United States)

    Ho, N S K; Tong, K Y; Hu, X L; Fung, K L; Wei, X J; Rong, W; Susanto, E A

    2011-01-01

    An exoskeleton hand robotic training device is specially designed for persons after stroke to provide training on their impaired hand by using an exoskeleton robotic hand which is actively driven by their own muscle signals. It detects the stroke person's intention using his/her surface electromyography (EMG) signals from the hemiplegic side and assists in hand opening or hand closing functional tasks. The robotic system is made up of an embedded controller and a robotic hand module which can be adjusted to fit different finger lengths. Eight chronic stroke subjects were recruited to evaluate the effects of this device. The preliminary results showed significant improvement in hand functions (ARAT) and upper limb functions (FMA) after 20 sessions of robot-assisted hand function task training. With the use of this light and portable robotic device, stroke patients can now practice the opening and closing of their hands more easily at their own will, and handle functional daily living tasks with ease. A video is included together with this paper to give a demonstration of the hand robotic system on chronic stroke subjects and it will be presented in the conference. © 2011 IEEE
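EMG-driven intent detection of this kind is often implemented as a threshold with hysteresis on a rectified, smoothed EMG envelope, so the assistance does not chatter on and off near a single level. A generic sketch (the thresholds are illustrative assumptions, not the device's parameters):

```python
import numpy as np

def emg_intent(envelope, on_thresh=0.3, off_thresh=0.15):
    """Hysteresis trigger on a rectified, smoothed EMG envelope.

    Returns a boolean per sample: True while e.g. 'hand close'
    assistance should be active. Two thresholds prevent chattering
    around a single level.
    """
    active = False
    out = []
    for v in envelope:
        if not active and v > on_thresh:
            active = True
        elif active and v < off_thresh:
            active = False
        out.append(active)
    return np.array(out)

# Envelope rises above 0.3 -> assist on; stays on until it drops below 0.15.
env = np.array([0.05, 0.2, 0.35, 0.25, 0.18, 0.12, 0.05])
state = emg_intent(env)
```

Note the sample at 0.18: it is below the on-threshold but the assistance stays engaged, which is exactly the behaviour a single threshold would fail to provide.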

  2. Control of a mobile robot through brain computer interface

    Directory of Open Access Journals (Sweden)

    Robinson Jimenez Moreno

    2015-07-01

    Full Text Available This paper presents a control interface to command the movement of a mobile robot according to signals captured from the user's brain. These signals are acquired and interpreted by an Emotiv EPOC device, a 14-electrode sensor that captures electroencephalographic (EEG) signals with high resolution, which, in turn, are sent to a computer for processing. A brain-computer interface (BCI) was developed based on the Emotiv software and SDK in order to command the mobile robot from a distance. Functionality tests were performed with the sensor to discriminate the shift intentions of a user group, as well as with a fuzzy controller to hold the direction in case of concentration loss. In conclusion, it was possible to obtain an efficient system for robot movement by brain commands.

  3. Development of five-finger robotic hand using master-slave control for hand-assisted laparoscopic surgery.

    Science.gov (United States)

    Yoshida, Koki; Yamada, Hiroshi; Kato, Ryu; Seki, Tatsuya; Yokoi, Hiroshi; Mukai, Masaya

    2016-08-01

    This study aims to develop a robotic hand as a substitute for a surgeon's hand in hand-assisted laparoscopic surgery (HALS). We determined the requirements for the proposed hand from a surgeon's motions in HALS. We identified four basic behaviors: "power grasp," "precision grasp," "open hand for exclusion," and "peace sign for extending peritoneum." The proposed hand had the minimum necessary DOFs for performing these behaviors, five fingers as in a human's hand, a palm that can be folded when a surgeon inserts the hand into the abdomen, and an arm for adjusting the hand's position. We evaluated the proposed hand based on a performance test and a physician's opinions, and we confirmed that it can grasp organs.

  4. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    Science.gov (United States)

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
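The rotation part of the AX = ZB formulation can be attacked with a simple alternating scheme in which each half-step is a closed-form orthogonal Procrustes solve. The sketch below illustrates that generic idea on synthetic rotations; it is not one of the specific methods evaluated in the paper.

```python
import numpy as np

def project_to_rotation(M):
    """Nearest rotation matrix to M (special orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))        # avoid reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def solve_axzb_rotations(As, Bs, iters=100):
    """Alternating least squares on the rotation part of A_i X = Z B_i."""
    Z = np.eye(3)
    for _ in range(iters):
        X = project_to_rotation(sum(A.T @ Z @ B for A, B in zip(As, Bs)))
        Z = project_to_rotation(sum(A @ X @ B.T for A, B in zip(As, Bs)))
    return X, Z

def residual(As, Bs, X, Z):
    return sum(np.linalg.norm(A @ X - Z @ B) ** 2 for A, B in zip(As, Bs))

# Synthetic, noiseless problem with a known ground truth.
rng = np.random.default_rng(2)
def random_rotation():
    return project_to_rotation(rng.standard_normal((3, 3)))

X_true, Z_true = random_rotation(), random_rotation()
As = [random_rotation() for _ in range(8)]
Bs = [Z_true.T @ A @ X_true for A in As]      # so that A_i X_true = Z_true B_i
X_est, Z_est = solve_axzb_rotations(As, Bs)
```

Because each half-step is optimal in one unknown, the residual is non-increasing; convergence to the global solution is not guaranteed in general, which is one motivation for comparing iterative methods.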

  5. Neuro-prosthetic interplay. Comment on "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" by M. Santello et al.

    Science.gov (United States)

    Schieber, Marc H.

    2016-07-01

    Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews[1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.

  6. RoboCon: Operator interface for robotic applications. Final report: RoboCon electrical interfacing -- system architecture, and Interfacing NDDS and LabView

    Energy Technology Data Exchange (ETDEWEB)

    Schempf, H.

    1998-04-30

    The first appendix contains detailed specifications of the electrical interfacing employed in Robocon. This includes all electrical signals and power requirement descriptions up to and including the interface entry points for external robots and systems. The reader is first presented with an overview of the overall Robocon electrical system, followed by sub-sections describing each module in detail. The appendices contain listings of power requirements and the electrical connectors and cables used, followed by an overall electrical system diagram. Custom electronics employed are also described. The Network Data Delivery Service (NDDS) is a real-time dissemination communications architecture which allows nodes on a network to publish data and subscribe to data published by other nodes while remaining anonymous. The second appendix explains how to facilitate a seamless interface between NDDS and LabView and provides sample source code used to implement an NDDS consumer which writes a string to a socket.

  7. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    Science.gov (United States)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. Virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements is achieved via hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual core processor and 1 GB RAM.

  8. Building a Relationship between Robot Characteristics and Teleoperation User Interfaces.

    Science.gov (United States)

    Mortimer, Michael; Horan, Ben; Seyedmahmoudian, Mehdi

    2017-03-14

    The Robot Operating System (ROS) provides roboticists with a standardized and distributed framework for real-time communication between robotic systems using a microkernel environment. This paper looks at how ROS metadata, Unified Robot Description Format (URDF), Semantic Robot Description Format (SRDF), and its message description language, can be used to identify key robot characteristics to inform User Interface (UI) design for the teleoperation of heterogeneous robot teams. Logical relationships between UI components and robot characteristics are defined by a set of relationship rules created using relevant and available information including developer expertise and ROS metadata. This provides a significant opportunity to move towards a rule-driven approach for generating the designs of teleoperation UIs; in particular the reduction of the number of different UI configurations required to teleoperate each individual robot within a heterogeneous robot team. This approach is based on using an underlying rule set identifying robots that can be teleoperated using the same UI configuration due to having the same or similar robot characteristics. Aside from reducing the number of different UI configurations an operator needs to be familiar with, this approach also supports consistency in UI configurations when a teleoperator is periodically switching between different robots. To achieve this aim, a Matlab toolbox is developed providing users with the ability to define rules specifying the relationship between robot characteristics and UI components. Once rules are defined, selections that best describe the characteristics of the robot type within a particular heterogeneous robot team can be made. A main advantage of this approach is that rather than specifying discrete robots comprising the team, the user can specify characteristics of the team more generally allowing the system to deal with slight variations that may occur in the future. In fact, by using the

  9. Building a Relationship between Robot Characteristics and Teleoperation User Interfaces

    Directory of Open Access Journals (Sweden)

    Michael Mortimer

    2017-03-01

    Full Text Available The Robot Operating System (ROS) provides roboticists with a standardized and distributed framework for real-time communication between robotic systems using a microkernel environment. This paper looks at how ROS metadata, Unified Robot Description Format (URDF), Semantic Robot Description Format (SRDF), and its message description language, can be used to identify key robot characteristics to inform User Interface (UI) design for the teleoperation of heterogeneous robot teams. Logical relationships between UI components and robot characteristics are defined by a set of relationship rules created using relevant and available information including developer expertise and ROS metadata. This provides a significant opportunity to move towards a rule-driven approach for generating the designs of teleoperation UIs; in particular the reduction of the number of different UI configurations required to teleoperate each individual robot within a heterogeneous robot team. This approach is based on using an underlying rule set identifying robots that can be teleoperated using the same UI configuration due to having the same or similar robot characteristics. Aside from reducing the number of different UI configurations an operator needs to be familiar with, this approach also supports consistency in UI configurations when a teleoperator is periodically switching between different robots. To achieve this aim, a Matlab toolbox is developed providing users with the ability to define rules specifying the relationship between robot characteristics and UI components. Once rules are defined, selections that best describe the characteristics of the robot type within a particular heterogeneous robot team can be made. A main advantage of this approach is that rather than specifying discrete robots comprising the team, the user can specify characteristics of the team more generally allowing the system to deal with slight variations that may occur in the future. In fact, by
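The rule-driven mapping from robot characteristics to UI components described above can be sketched as a small condition-matching step. All names below (rule conditions, component names, the characteristic keys) are invented for illustration; the paper's rules live in a Matlab toolbox and are derived from URDF/SRDF metadata.

```python
# Hypothetical rule set: each rule pairs a set of required robot
# characteristics with the UI component it enables.
RULES = [
    ({"base": "differential_drive"}, "joystick_panel"),
    ({"base": "differential_drive", "camera": True}, "video_panel"),
    ({"arm_dof": 6}, "end_effector_widget"),
]

def ui_components(robot):
    """Return the UI components whose rule conditions all match the robot."""
    return [comp for cond, comp in RULES
            if all(robot.get(k) == v for k, v in cond.items())]

rover = {"base": "differential_drive", "camera": True}
arm = {"arm_dof": 6}
```

Two robots that match the same set of rules end up with the same UI configuration, which is exactly how the approach reduces the number of distinct interfaces an operator must learn.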

  10. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  11. Design of a Reconfigurable Robotic System for Flexoextension Fitted to Hand Fingers Size.

    Science.gov (United States)

    Aguilar-Pereyra, J Felipe; Castillo-Castaneda, Eduardo

    2016-01-01

    Due to the growing demand for assistance in rehabilitation therapies for hand movements, a robotic system is proposed to mobilize the hand fingers in flexion and extension exercises. The robotic system is composed of four slider-crank mechanisms that can fit the user's finger lengths from the index to the little finger through the adjustment of only one link in each mechanism. The trajectory developed by each mechanism corresponds to the natural flexoextension path of each finger. The amplitude of the rotations for the metacarpophalangeal joint (MCP) and proximal interphalangeal joint (PIP) varies from 0 to 90° and for the distal interphalangeal joint (DIP) from 0 to 60°; the joint rotations are naturally coordinated. The orientation of the four R-RRT mechanisms allows a 15° abduction movement for the index, ring, and little fingers. The kinematic analysis of this mechanism was developed in order to assure smooth displacement speed and acceleration within the desired range of motion, and the simulation results are presented. The reconfiguration of the mechanisms covers about 95% of hand sizes in a group of the Mexican adult population. The maximum trajectory tracking error is less than 3% over the full range of movement and can be compensated by the additional rotation of the finger joints without injury to the user.
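The slider-crank geometry behind each finger module has a simple closed-form kinematic relation between crank angle and slider displacement. A sketch of that textbook relation (the crank radius and rod length below are illustrative, not the device's dimensions):

```python
import numpy as np

def slider_position(theta, crank_r, rod_l):
    """Slider displacement of a slider-crank for crank angle theta (rad).

    Measured from the crank pivot along the slider axis; requires
    rod_l > crank_r so the mechanism can complete a full revolution.
    """
    return crank_r * np.cos(theta) + np.sqrt(
        rod_l ** 2 - (crank_r * np.sin(theta)) ** 2)

# The stroke between the two dead centres is twice the crank radius.
x_out = slider_position(0.0, crank_r=0.02, rod_l=0.06)     # extended
x_in = slider_position(np.pi, crank_r=0.02, rod_l=0.06)    # retracted
```

Adjusting one link length rescales this displacement profile, which is the sense in which a single-link adjustment fits the mechanism to different finger sizes.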

  12. Kinematic rate control of simulated robot hand at or near wrist singularity

    Science.gov (United States)

    Barker, K.; Houck, J. A.; Carzoo, S. W.

    1985-01-01

    A robot hand should obey movement commands from an operator or a computer program as closely as possible. However, when two of the three rotational axes of the robot wrist are colinear, the wrist loses a degree of freedom, and the usual resolved-rate equations (used to move the hand in response to an operator's inputs) are indeterminate. Furthermore, rate limiting occurs in the close vicinity of this singularity. An analysis shows that rate limiting occurs not only in the vicinity of the singularity but also substantially away from it, even when the operator commands rotational rates of the robot hand that are only a small percentage of the operational joint-rate limits. Therefore, joint angle rates are scaled when they exceed operational limits in a real-time simulation of a robot arm. Simulation results show that a small dead band avoids the wrist singularity in the resolved-rate equations but can introduce a high-frequency oscillation close to the singularity. However, when a coordinated wrist movement is used in conjunction with the resolved-rate equations, the high-frequency oscillation disappears.
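    The joint-rate scaling described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the rates and limits below are invented, and the key property is only that scaling all joints by one factor slows the motion without changing its direction.

    ```python
    def scale_joint_rates(rates, limits):
        """Uniformly scale commanded joint rates so none exceeds its limit.

        Scaling every joint by the same factor preserves the commanded
        direction of hand motion while slowing it down, which is the
        behavior described for the real-time simulation near the
        wrist singularity.
        """
        # Largest ratio of |commanded rate| to its operational limit.
        worst = max(abs(r) / lim for r, lim in zip(rates, limits))
        if worst <= 1.0:
            return list(rates)      # all joints within limits: pass through
        return [r / worst for r in rates]

    # Example: the second joint exceeds its limit by a factor of 3,
    # so every joint is slowed to one third of its commanded rate.
    scaled = scale_joint_rates([2.0, -6.0, 1.0], [1.0, 2.0, 1.0])
    ```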

  13. Soft robotic devices for hand rehabilitation and assistance: a narrative review.

    Science.gov (United States)

    Chu, Chia-Ye; Patterson, Rita M

    2018-02-17

    The debilitating effects on hand function of a number of neurologic disorders have given rise to the development of rehabilitative robotic devices aimed at restoring hand function in these patients. To combat the shortcomings of traditional rigid robotics, soft robotic devices are rapidly emerging as an alternative due to their inherent safety, less complex designs, and increased potential for portability and efficacy. While several groups have begun designing devices, few have progressed far enough to provide clinical evidence of their design's therapeutic abilities. Therefore, a global review of previously attempted devices could facilitate the development of new and improved devices as the next step toward obtaining clinical proof of the rehabilitative effects of soft robotics in hand dysfunction. A literature search was performed in SportDiscus, PubMed, Scopus, and Web of Science for articles related to the design of soft robotic devices for hand rehabilitation. A framework of the key design elements of the devices was developed to ease comparison of the various approaches to building them. This framework includes an analysis of trends in portability, safety features, user intent detection methods, actuation systems, total degrees of freedom, number of independent actuators, device weight, evaluation metrics, and modes of rehabilitation. In total, 62 articles representing 44 unique devices were identified and summarized according to this framework to compare different design aspects. By far the most common type of device used a pneumatic actuator to guide finger flexion/extension, while the remaining framework elements yielded more heterogeneous results. Consequently, those results are summarized, and the advantages and disadvantages of many design choices, as well as their rationales, are highlighted. The past 3 years have seen a rapid increase in the development of soft robotic devices for hand rehabilitation.

  14. Feasibility study of a hand guided robotic drill for cochleostomy.

    Science.gov (United States)

    Brett, Peter; Du, Xinli; Zoka-Assadi, Masoud; Coulson, Chris; Reid, Andrew; Proops, David

    2014-01-01

    The concept of a hand-guided robotic drill has been inspired by an automated, arm-supported robotic drill recently applied in clinical practice to produce cochleostomies without penetrating the endosteum, ready for inserting cochlear electrodes. The smart tactile sensing scheme within the drill enables precise control of the state of interaction between tissues and tools in real time. This paper reports development studies of the hand-guided robotic drill, in which the same consistent outcomes, the same augmentation of surgeon control and skill, and a similar reduction of induced disturbances on the hearing organ are achieved. The device operates with differing presentations of tissue resulting from variation in anatomy, and demonstrates the ability to control or avoid penetration of tissue layers as required and to respond to intended rather than involuntary motion of the surgeon operator. The advantage of a hand-guided over an arm-supported system is the flexibility it offers in adjusting the drilling trajectory. This can be important to initiate cutting on a hard convex tissue surface without slipping and then to proceed on the desired trajectory after cutting has commenced. Trials on phantoms show that drill-unit compliance is an important factor in the design.

  15. Ground Robotic Hand Applications for the Space Program study (GRASP)

    Science.gov (United States)

    Grissom, William A.; Rafla, Nader I. (Editor)

    1992-01-01

    This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.

  16. Design of a variable-stiffness robotic hand using pneumatic soft rubber actuators

    International Nuclear Information System (INIS)

    Nagase, Jun-ya; Saga, Norihiko; Wakimoto, Shuichi; Satoh, Toshiyuki; Suzumori, Koichi

    2011-01-01

    In recent years, Japanese society has been ageing, engendering a shortage of young workers. Robots are therefore expected to be useful for tasks such as day-to-day support for elderly people. In particular, robots intended for use in medical care and welfare are expected to be safe when operating in a human environment, because they often come into contact with people. Furthermore, robots must perform various tasks such as regrasping, grasping of soft objects, and tasks using frictional force. Given these demands and circumstances, a tendon-driven robot hand with a stiffness-changing finger has been developed. The finger surface stiffness can be altered by adjusting the input pressure depending on the task. Additionally, the coefficient of static friction can be altered by changing the surface stiffness, again merely by adjusting the input air pressure. This report describes the basic structure, driving mechanism, and basic properties of the proposed robot hand.

  17. End-point impedance measurements across dominant and nondominant hands and robotic assistance with directional damping.

    Science.gov (United States)

    Erden, Mustafa Suphi; Billard, Aude

    2015-06-01

    The goal of this paper is to perform end-point impedance measurements on the dominant and nondominant hands during airbrush painting and to use the results to develop a robotic assistance scheme. We study airbrush painting because it resembles manual welding, a standard industrial task, in many ways. The experiments are performed with the 7-degree-of-freedom KUKA lightweight robot arm. The robot is controlled in admittance using a force sensor attached at the end-point, so that it acts as a free mass and is passively guided by the human. For the impedance measurements, nine subjects each perform 12 repetitions of airbrush painting, drawing a straight line on a cartoon placed horizontally on a table while passively moving the airbrush mounted on the robot's end-point. We measure hand impedance during the painting task by generating sudden, brief external forces with the robot. The results show that on average the dominant hand displays larger impedance than the nondominant hand in the directions perpendicular to the painting line, with the most significant difference found in the damping values. Based on this observation, we develop a "directional damping" scheme for robotic assistance and conduct a pilot study with 12 subjects to contrast airbrush painting with and without robotic assistance. Results show significant improvement in precision with both the dominant and nondominant hands when robotic assistance is used.
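    The stiffness and damping components of end-point impedance are typically identified by regressing the measured restoring force against displacement and velocity during brief perturbations. The sketch below performs such a least-squares fit on synthetic data; the model F = kx + bv, the parameter values, and the data are illustrative, not the paper's estimation procedure.

    ```python
    import random

    def fit_impedance(xs, vs, fs):
        """Least-squares fit of F = k*x + b*v (stiffness k, damping b).

        Solves the 2x2 normal equations directly; a toy stand-in for
        perturbation-response impedance identification.
        """
        sxx = sum(x * x for x in xs)
        svv = sum(v * v for v in vs)
        sxv = sum(x * v for x, v in zip(xs, vs))
        sxf = sum(x * f for x, f in zip(xs, fs))
        svf = sum(v * f for v, f in zip(vs, fs))
        det = sxx * svv - sxv * sxv
        k = (sxf * svv - svf * sxv) / det
        b = (svf * sxx - sxf * sxv) / det
        return k, b

    # Synthetic perturbation data with known k = 500 N/m, b = 20 N*s/m
    # (invented values, noise-free for clarity).
    random.seed(0)
    xs = [random.uniform(-0.01, 0.01) for _ in range(200)]
    vs = [random.uniform(-0.1, 0.1) for _ in range(200)]
    fs = [500.0 * x + 20.0 * v for x, v in zip(xs, vs)]
    k_hat, b_hat = fit_impedance(xs, vs, fs)
    ```

    A larger fitted b in the directions perpendicular to the painting line is exactly the kind of asymmetry the directional damping scheme exploits.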

  18. Combined analysis of cortical (EEG) and nerve stump signals improves robotic hand control.

    Science.gov (United States)

    Tombini, Mario; Rigosa, Jacopo; Zappasodi, Filippo; Porcaro, Camillo; Citi, Luca; Carpaneto, Jacopo; Rossini, Paolo Maria; Micera, Silvestro

    2012-01-01

    Interfacing an amputee's upper-extremity stump nerves to control a robotic hand requires training of the individual and algorithms to process interactions between cortical and peripheral signals. The aim was to evaluate, for the first time, whether EEG-driven analysis of peripheral neural signals, recorded while an amputee practices, could improve the classification of motor commands. Four thin-film longitudinal intrafascicular electrodes (tf-LIFE-4) were implanted in the median and ulnar nerves of the stump in the distal upper arm for 4 weeks. Artificial-intelligence classifiers were implemented to analyze the LIFE signals recorded while the participant tried to perform 3 different hand and finger movements as pictures representing these tasks were randomly presented on a screen. In the final week, the participant was trained to perform the same movements with a robotic hand prosthesis through modulation of the tf-LIFE-4 signals. To improve classification performance, an event-related desynchronization/synchronization (ERD/ERS) procedure was applied to the EEG data to identify the exact timing of each motor command. Real-time control of neural (motor) output was achieved by the participant. By focusing the electroneurographic (ENG) signal analysis within an EEG-driven time window, movement classification performance improved. After training, the participant regained normal modulation of background rhythms for movement preparation (α/β band desynchronization) in the sensorimotor area contralateral to the missing limb. Moreover, coherence analysis found a restored α-band synchronization of the Rolandic area with frontal and parietal ipsilateral regions, similar to that observed in the opposite hemisphere for movement of the intact hand. Of note, phantom limb pain (PLP) resolved for several months. Combining information from both cortical (EEG) and stump nerve (ENG) signals improved classification performance compared with tf-LIFE-4 signal processing alone; training led to cortical reorganization and resolution of PLP.

  19. Does an intraneural interface short-term implant for robotic hand control modulate sensorimotor cortical integration? An EEG-TMS co-registration study on a human amputee.

    Science.gov (United States)

    Ferreri, F; Ponzo, D; Vollero, L; Guerra, A; Di Pino, G; Petrichella, S; Benvenuto, A; Tombini, M; Rossini, L; Denaro, L; Micera, S; Iannello, G; Guglielmelli, E; Denaro, V; Rossini, P M

    2014-01-01

    Following limb amputation, central and peripheral nervous system relays partially maintain their functions and can be exploited for interfacing prostheses. The aim of this study was to investigate, for the first time by means of an EEG-TMS co-registration study, whether and how a direct bidirectional connection between the brain and a hand prosthesis impacts sensorimotor cortical topography. Within an experimental protocol for robotic hand control, a 26-year-old, left-hand-amputated male was selected to have four intrafascicular electrodes (tf-LIFE-4) implanted in the median and ulnar nerves of the stump for 4 weeks. Before the tf-LIFE-4 implant (T0) and after the training period, once the electrodes had been removed (T1), the subject's cortico-cortical excitability, connectivity, and plasticity were tested via a neuronavigated EEG-TMS experiment. The statistical analysis clearly demonstrated a significant modulation (t-test, p < 0.0001) of EEG activity between 30 and 100 ms post-stimulus for stimulation of the right hemisphere. When studying individual latencies in that time range, a global amplitude modulation was found in most of the TMS-evoked potentials; in particular, the GEE analysis showed significant differences between the T0 and T1 conditions at the 30 ms (p = 0.0404), 46 ms (p < 0.0001), and 60 ms (p = 0.007) latencies. Finally, a clear local decrement in N46 amplitude over C4 was also evident. No differences between conditions were observed for stimulation of the left hemisphere. The results of this study confirm the hypothesis that a bidirectional neural interface can redirect cortical areas, deprived of their original input/output functions, toward restorative neuroplasticity. This reorganization strongly involves bi-hemispheric networks and intracortical and transcortical modulation of GABAergic inhibition.

  20. Unsteady hydrodynamic forces acting on a robotic hand and its flow field.

    Science.gov (United States)

    Takagi, Hideki; Nakashima, Motomu; Ozaki, Takashi; Matsuuchi, Kazuo

    2013-07-26

    This study aims to clarify the mechanism generating unsteady hydrodynamic forces on a hand during swimming by directly measuring the forces, pressure distribution, and flow field around the hand using a robotic arm and particle image velocimetry (PIV). The robotic arm consisted of a trunk, shoulder, upper arm, forearm, and hand, and was independently computer-controllable in five degrees of freedom. The elbow-joint angle of the robotic arm was fixed at 90°, and the arm was moved in semicircles around the shoulder joint in a plane perpendicular to the water surface. Two-component PIV was used for flow visualization around the hand. Force and pressure data for the hand were sampled at 200 Hz and stored on a PC. When the maximum resultant force acting on the hand was observed, a pair of counter-rotating vortices appeared on the dorsal surface of the hand. A vortex attached to the hand increased the flow velocity, which led to decreased surface pressure and increased hydrodynamic forces. This phenomenon is known as the unsteady mechanism of force generation. We found that the drag force was 72% greater and the lift force 4.8 times greater than the values estimated under steady flow conditions. It is therefore presumed that swimmers benefit from this unsteady hydrodynamic force.
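    The steady-flow reference values that the measured forces are compared against can be illustrated with the quasi-steady drag equation F = ½ρC_D A v². The drag coefficient and hand area below are hypothetical placeholders; only the 72% excess figure comes from the abstract.

    ```python
    RHO = 998.0  # density of water, kg/m^3

    def steady_drag(cd, area, speed):
        """Quasi-steady drag estimate: F = 0.5 * rho * Cd * A * v^2."""
        return 0.5 * RHO * cd * area * speed ** 2

    # Hypothetical hand parameters (not from the paper): Cd = 1.0,
    # projected area 0.015 m^2, hand speed 2.0 m/s.
    f_steady = steady_drag(cd=1.0, area=0.015, speed=2.0)

    # The study reports measured drag about 72% above the steady estimate,
    # attributed to the attached vortex pair.
    f_unsteady_like = 1.72 * f_steady
    ```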

  1. Grasp force sensor for robotic hands

    Science.gov (United States)

    Scheinman, Victor D. (Inventor); Bejczy, Antal K. (Inventor); Primus, Howard C. (Inventor)

    1989-01-01

    A grasp force sensor for robotic hands is disclosed. A flexible block is located in the base of each claw through which the grasp force is exerted. The block yields minute parallelogram deflection when the claws are subjected to grasping forces. A parallelogram deflection closely resembles pure translational deflection, whereby the claws remain in substantial alignment with each other during grasping. Strain gauge transducers supply signals which provide precise knowledge of and control over grasp forces.

  2. Kinematic control of robot with degenerate wrist

    Science.gov (United States)

    Barker, L. K.; Moore, M. C.

    1984-01-01

    Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.

  3. Hand-Assisted Robotic Surgery for Staging of Ovarian Cancer and Uterine Cancers With High Risk of Peritoneal Spread: A Retrospective Cohort Study.

    Science.gov (United States)

    Fornalik, Hubert; Brooks, Hannah; Moore, Elizabeth S; Flanders, Nicole L; Callahan, Michael J; Sutton, Gregory P

    2015-10-01

    This study aimed to determine surgical outcomes of hand-assisted robotic surgery (HARS) for staging of ovarian cancer and of uterine cancers with a high risk of peritoneal spread, and to compare them with laparotomy and standard robotic-assisted surgery. A retrospective cohort study of women undergoing staging for uterine and ovarian cancer between January 2011 and July 2013 at a major metropolitan teaching hospital was conducted. Patients undergoing HARS were matched with patients undergoing staging laparotomy [exploratory laparotomy (XLAP)] for the same indications and with patients undergoing traditional robotic surgery (RS) for staging of endometrioid endometrial cancer. In HARS, a longer incision is used to allow palpation of the peritoneal surfaces, exteriorization of the small bowel, examination of the mesentery, and omentectomy. One hundred five patients were analyzed (15 HARS, 45 RS, 45 XLAP). Compared with XLAP, HARS was associated with decreased blood loss (200 vs 400 mL, P = 0.011) and a shorter hospital stay (1 vs 4 days, P < 0.001). Patients who had undergone HARS had fewer major complications, but the difference did not reach statistical significance (0% vs 27%, P = 0.063). HARS was associated with higher blood loss and a longer hospital stay than robotic staging of endometrioid endometrial cancer (RS); minor wound complications were also more common (27% vs 2%, P = 0.012). Hand-assisted robotic surgery allows thorough visual and tactile assessment of the peritoneal surfaces and represents a safe alternative to laparotomy for staging of ovarian and uterine cancers with high risk of peritoneal spread. A long-term follow-up study is needed to determine the oncologic adequacy of HARS.

  4. Feasibility Study of a Hand Guided Robotic Drill for Cochleostomy

    Directory of Open Access Journals (Sweden)

    Peter Brett

    2014-01-01

    Full Text Available The concept of a hand-guided robotic drill has been inspired by an automated, arm-supported robotic drill recently applied in clinical practice to produce cochleostomies without penetrating the endosteum, ready for inserting cochlear electrodes. The smart tactile sensing scheme within the drill enables precise control of the state of interaction between tissues and tools in real time. This paper reports development studies of the hand-guided robotic drill, in which the same consistent outcomes, the same augmentation of surgeon control and skill, and a similar reduction of induced disturbances on the hearing organ are achieved. The device operates with differing presentations of tissue resulting from variation in anatomy, and demonstrates the ability to control or avoid penetration of tissue layers as required and to respond to intended rather than involuntary motion of the surgeon operator. The advantage of a hand-guided over an arm-supported system is the flexibility it offers in adjusting the drilling trajectory. This can be important to initiate cutting on a hard convex tissue surface without slipping and then to proceed on the desired trajectory after cutting has commenced. Trials on phantoms show that drill-unit compliance is an important factor in the design.

  5. Hand-held transendoscopic robotic manipulators: A transurethral laser prostate surgery case study.

    Science.gov (United States)

    Hendrick, Richard J; Mitchell, Christopher R; Herrell, S Duke; Webster, Robert J

    2015-11-01

    Natural orifice endoscopic surgery can enable incisionless approaches, but a major challenge is the lack of small and dexterous instrumentation. Surgical robots have the potential to meet this need yet often disrupt the clinical workflow. Hand-held robots that combine thin manipulators and endoscopes have the potential to address this by integrating seamlessly into the clinical workflow and enhancing dexterity. As a case study illustrating the potential of this approach, we describe a hand-held robotic system that passes two concentric tube manipulators through a 5 mm port in a rigid endoscope for transurethral laser prostate surgery. This system is intended to catalyze the use of a clinically superior, yet rarely attempted, procedure for benign prostatic hyperplasia. This paper describes system design and experiments to evaluate the surgeon's functional workspace and accuracy using the robot. Phantom and cadaver experiments demonstrate successful completion of the target procedure via prostate lobe resection.

  6. Development of an evaluation function for eye-hand coordination robotic therapy.

    Science.gov (United States)

    Pernalete, N; Tang, F; Chang, S M; Cheng, F Y; Vetter, P; Stegemann, M; Grantner, J

    2011-01-01

    This paper is the continuation of a work presented at ICORR 07, in which we discussed the possibility of improving eye-hand coordination in children diagnosed with this problem using a robotic mapping from a haptic user interface to a virtual environment. Our goal is to develop, implement, and refine a system that will assess and improve eye-hand coordination and grip strength in children diagnosed with poor graphomotor skills. A detailed analysis of patterns (e.g., labyrinths, letters, and angles) was conducted in order to select three clearly distinguishable levels of difficulty that could be included in the system and would yield the greatest benefit for assessing coordination and strength issues as well as for training. Support algorithms (position, force, velocity, inertia, and viscosity) were also developed and incorporated into the tasks to provide general computer assistance in mapping the user's movements to the computer screen without overriding the user's commands to the robotic device. To evaluate the performance (given by % accuracy and time) of the executed tasks, an evaluation function was designed based on image analysis and edge-detection algorithms. This paper presents the development of the haptic tasks, the various assistance algorithms, the description of the evaluation function, and the results of a study implemented at the Motor Development Clinic at Cal Poly Pomona. The results (accuracy and time) of this function are currently being used as inputs to an Intelligent Decision Support System (described in), which in turn suggests the next task to be executed by the subject based on his or her performance.
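    A percent-accuracy score of the kind described can be approximated by counting drawn points that fall within a pixel tolerance of the template pattern. This is a toy stand-in for the paper's image-analysis and edge-detection pipeline; the tolerance value and sample paths are invented.

    ```python
    import math

    def accuracy(drawn, template, tol=2.0):
        """Percentage of drawn points lying within `tol` pixels of the template.

        `drawn` and `template` are lists of (x, y) points; a toy proxy for
        scoring how closely a traced path follows a target pattern.
        """
        def near(p):
            return any(math.dist(p, q) <= tol for q in template)
        hits = sum(1 for p in drawn if near(p))
        return 100.0 * hits / len(drawn)

    # Template: a horizontal line; two hypothetical traces, one close
    # to the pattern and one far off it.
    template = [(float(x), 0.0) for x in range(50)]
    on_path = [(float(x), 1.0) for x in range(50)]
    off_path = [(float(x), 10.0) for x in range(50)]
    ```

    Paired with a task timer, such a score yields the (accuracy, time) inputs the decision-support system consumes.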

  7. Compensating Hand Function in Chronic Stroke Patients Through the Robotic Sixth Finger.

    Science.gov (United States)

    Salvietti, Gionata; Hussain, Irfan; Cioncoloni, David; Taddei, Sabrina; Rossi, Simone; Prattichizzo, Domenico

    2017-02-01

    A novel solution to compensate for lost hand grasping abilities is proposed for chronic stroke patients. The goal is to provide patients with a wearable robotic extra finger that can be worn on the paretic forearm by means of an elastic band. The proposed prototype, the Robotic Sixth Finger, is a modular articulated device that can adapt its structure to the shape of the grasped object. The extra finger and the paretic hand act like the two parts of a gripper cooperatively holding an object. We evaluated the feasibility of the approach with four chronic stroke patients performing a qualitative test, the Frenchay Arm Test. In this proof-of-concept study, the use of the Robotic Sixth Finger increased the patients' total score by two points on a five-point scale. The subjects were able to perform the two grasping tasks included in the test that were not possible without the robotic extra finger. Adding a robotic opposing finger is a very promising approach that can significantly improve functional compensation in chronic stroke patients during everyday activities.

  8. Surgeon Design Interface for Patient-Specific Concentric Tube Robots.

    Science.gov (United States)

    Morimoto, Tania K; Greer, Joseph D; Hsieh, Michael H; Okamura, Allison M

    2016-06-01

    Concentric tube robots have potential for use in a wide variety of surgical procedures due to their small size, dexterity, and ability to move in highly curved paths. Unlike most existing clinical robots, the design of these robots can be developed and manufactured on a patient- and procedure-specific basis. The design of concentric tube robots typically requires significant computation and optimization, and it remains unclear how the surgeon should be involved. We propose to use a virtual reality-based design environment for surgeons to easily and intuitively visualize and design a set of concentric tube robots for a specific patient and procedure. In this paper, we describe a novel patient-specific design process in the context of the virtual reality interface. We also show a resulting concentric tube robot design, created by a pediatric urologist to access a kidney stone in a pediatric patient.

  9. Design of a Reconfigurable Robotic System for Flexoextension Fitted to Hand Fingers Size

    Directory of Open Access Journals (Sweden)

    J. Felipe Aguilar-Pereyra

    2016-01-01

    Full Text Available Due to the growing demand for assistance in rehabilitation therapies for hand movements, a robotic system is proposed to mobilize the fingers in flexion and extension exercises. The robotic system is composed of four slider-crank mechanisms that can be fitted to the user's finger lengths, from the index to the little finger, by adjusting only one link in each mechanism. The trajectory developed by each mechanism corresponds to the natural flexoextension path of each finger. The rotation amplitude varies from 0 to 90° for the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints and from 0 to 60° for the distal interphalangeal (DIP) joint; the joint rotations are naturally coordinated. The orientation of the four R-RRT mechanisms allows a 15° abduction movement for the index, ring, and little fingers. A kinematic analysis of the mechanism was developed to ensure smooth displacement speed and acceleration within the desired range of motion, and simulation results are presented. The reconfiguration of the mechanisms covers about 95% of the hand sizes of a group of the Mexican adult population. The maximum trajectory-tracking error is less than 3% over the full range of movement, and it can be compensated by additional rotation of the finger joints without injury to the user.

  10. A Wearable-Based and Markerless Human-Manipulator Interface with Feedback Mechanism and Kalman Filters

    Directory of Open Access Journals (Sweden)

    Ping Zhang

    2015-11-01

    Full Text Available The objective of this paper is to develop a novel human-manipulator interface which incorporates wearable-based and markerless tracking to interact with the continuous movements of a human operator's hand. Unlike traditional approaches, which usually include contacting devices or physical markers to track the human-limb movements, this interface enables registration of natural movement through a wireless wearable watch and a leap motion sensor. Due to sensor error and tracking failure, the measurements are not made with sufficient accuracy. Two Kalman filters are employed to compensate the noisy and incomplete measurements in real time. Furthermore, due to perceptive limitations and abnormal state signals, the operator is unable to achieve high precision and efficiency in robot manipulation; an adaptive multispace transformation method (AMT is therefore introduced, which serves as a secondary treatment. In addition, in order to allow two-way human-robot interaction, the proposed method provides a vibration feedback mechanism triggered by the wearable watch to call the operator's attention to robot collision incidents or moments where the operator's hand is in a transboundary state. This improves teleoperation.
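    A minimal one-dimensional constant-velocity Kalman filter illustrates how noisy and incomplete tracking measurements can be compensated: `None` entries model tracking dropouts, through which the filter simply predicts. The process and measurement noise values below are assumptions, and the actual interface fuses wearable-watch and Leap Motion data in more dimensions than this sketch.

    ```python
    def kalman_track(zs, dt=1.0, q=1e-3, r=0.05):
        """1-D constant-velocity Kalman filter over position measurements zs.

        State is (position, velocity). For z = None (a dropout) only the
        predict step runs, so the estimate coasts on the last velocity.
        q and r are illustrative process/measurement noise levels.
        """
        x, v = 0.0, 0.0                       # state estimate
        P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
        out = []
        for z in zs:
            # Predict: x' = x + dt*v, P' = F P F^T + Q (Q = diag(q, q)).
            x += dt * v
            P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
                  P[0][1] + dt * P[1][1]],
                 [P[1][0] + dt * P[1][1],
                  P[1][1] + q]]
            if z is not None:                 # update only when measured
                s = P[0][0] + r               # innovation covariance
                k0, k1 = P[0][0] / s, P[1][0] / s
                y = z - x                     # innovation
                x += k0 * y
                v += k1 * y
                P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                     [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
            out.append(x)
        return out
    ```

    Fed a steadily moving hand coordinate with a dropout in the middle, the filter coasts through the gap and stays locked onto the motion.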

  11. A Magnetic Resonance Compatible Soft Wearable Robotic Glove for Hand Rehabilitation and Brain Imaging.

    Science.gov (United States)

    Hong Kai Yap; Kamaldin, Nazir; Jeong Hoon Lim; Nasrallah, Fatima A; Goh, James Cho Hong; Chen-Hua Yeow

    2017-06-01

    In this paper, we present the design, fabrication, and evaluation of a soft wearable robotic glove that can be used with functional magnetic resonance imaging (fMRI) during hand rehabilitation and task-specific training. The soft wearable robotic glove, called MR-Glove, consists of two major components: (a) a set of soft pneumatic actuators and (b) a glove. The soft pneumatic actuators, which are made of silicone elastomers, generate bending motion and actuate the finger joints upon pressurization. The device is MR-compatible, as it contains no ferromagnetic materials and operates pneumatically. Our results show that the device did not cause artifacts in fMRI images during hand rehabilitation and task-specific exercises. This study demonstrates the possibility of using fMRI and an MR-compatible soft wearable robotic device to study brain activity and motor performance during hand rehabilitation, and to unravel the functional effects of rehabilitation robotics on brain stimulation.

  12. Advanced Myoelectric Control for Robotic Hand-Assisted Training: Outcome from a Stroke Patient.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Li, Sheng; Zhou, Ping

    2017-01-01

    A hand exoskeleton driven by myoelectric pattern recognition was designed for stroke rehabilitation. It detects and recognizes the user's motion intent based on electromyography (EMG) signals, and then helps the user to accomplish hand motions in real time. The hand exoskeleton can perform six kinds of motions: whole-hand closing/opening, tripod pinch/opening, and the "gun" sign/opening. A 52-year-old woman, 8 months post-stroke, made 20 two-hour visits over 10 weeks to participate in robot-assisted hand training. Though she was unable to move the fingers of her right hand before the training, EMG activity could be detected on her right forearm. In each visit, she completed four 10-minute robot-assisted training sessions, in which she repeated the aforementioned six motion patterns assisted by the intent-driven hand exoskeleton. After the training, her grip force increased from 1.5 to 2.7 kg, her pinch force increased from 1.5 to 2.5 kg, her Box and Block test score increased from 3 to 7, her Fugl-Meyer (Part C) score increased from 0 to 7, and her hand function improved from Stage 1 to Stage 2 on the Chedoke-McMaster assessment. The results demonstrate the feasibility of robot-assisted training driven by myoelectric pattern recognition after stroke.
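    Myoelectric pattern recognition of the kind described can be sketched as windowed RMS features fed to a classifier. The nearest-centroid classifier, two-channel layout, channel gains, and synthetic signals below are illustrative simplifications, not the system's actual recognizer.

    ```python
    import math
    import random

    def rms(window):
        """Root-mean-square amplitude of one EMG channel window."""
        return math.sqrt(sum(s * s for s in window) / len(window))

    def classify(feats, centroids):
        """Nearest-centroid motion label for a feature vector.

        A deliberately simple stand-in for the trained pattern-recognition
        classifier in the abstract.
        """
        return min(centroids, key=lambda label: math.dist(feats, centroids[label]))

    def synth(ch_gains, n=200):
        """Synthetic 2-channel EMG: zero-mean noise scaled per channel."""
        return [[g * random.gauss(0.0, 1.0) for _ in range(n)] for g in ch_gains]

    # Two hypothetical motions over two channels: "open" activates ch0,
    # "close" activates ch1 (gains are invented).
    random.seed(1)
    train = {"open": synth([1.0, 0.2]), "close": synth([0.2, 1.0])}
    centroids = {m: [rms(ch) for ch in chans] for m, chans in train.items()}

    test_feats = [rms(ch) for ch in synth([1.0, 0.2])]   # an "open"-like window
    predicted = classify(test_feats, centroids)
    ```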

  13. Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand.

    Science.gov (United States)

    Sato, K; Kamiyama, K; Kawakami, N; Tachi, S

    2010-01-01

    It is believed that haptic sensors measuring the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. We therefore developed a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that measures the distribution of force vectors, or surface traction fields. The simple structure of the GelForce enabled us to develop a compact finger-shaped version for the robotic hand. A GelForce developed on the basis of elastic theory calculates surface traction fields using a conversion equation. However, this conversion equation cannot be solved analytically when the elastic body of the sensor has a complicated shape, such as that of a finger. We therefore propose an observational method and construct a prototype of the finger-shaped GelForce. Using this prototype, we evaluate the sensor's basic performance and then conduct a field test by performing grasping operations with a robotic hand. The results show that, with the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.

  14. Human-machine interfaces based on EMG and EEG applied to robotic systems

    Directory of Open Access Journals (Sweden)

    Sarcinelli-Filho Mario

    2008-03-01

    Full Text Available Abstract Background Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which require few hardware and software resources, so that they are, computationally and financially speaking, low-cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested in a mobile robot, while the EEG-based HMI was tested in a mobile robot and a robotic manipulator as well. Results Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to accomplish ten eye blinks with each eye in order to test the eye-blink detection algorithm. An average success rate of about 95%, reached by individuals with the ability to blink both eyes, allowed us to conclude that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered from meningitis or epilepsy) to test the system. All of them managed to deal with the HMI in only one training session. Most of them learnt how to use it in less than 15 minutes. The minimum and maximum training times observed were 3 and 50 minutes, respectively. Conclusion These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming at using it to help individuals with severe motor dysfunctions.

  15. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants through speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system that integrates a data glove with a space suit, allowing the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
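A generic particle swarm optimizer of the kind used to tune the SVM can be sketched as follows. This is a textbook global-best PSO, not the paper's implementation: the inertia and acceleration coefficients are conventional defaults, and a simple bowl-shaped function stands in for the SVM cross-validation error over the (C, gamma) search space.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer. In the paper's
    setting the objective would be SVM cross-validation error over
    (C, gamma); here it is any callable over a box-bounded space."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = lo + (hi - lo) * rng.random((n_particles, dim))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in objective: a quadratic bowl whose minimum plays the role of
# the best-performing (C, gamma) pair.
best, err = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2,
                bounds=[(0.01, 10.0), (0.001, 1.0)])
```

In the actual system, `objective` would train and cross-validate an SVM gesture classifier at each candidate parameter pair, which is why the swarm size and iteration budget are kept modest.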

  16. Evolution of robotic nephrectomy for living donation: from hand-assisted to totally robotic technique.

    Science.gov (United States)

    Giacomoni, Alessandro; Di Sandro, Stefano; Lauterio, Andrea; Concone, Giacomo; Mangoni, Iacopo; Mihaylov, Plamen; Tripepi, Matteo; De Carlis, Luciano

    2014-09-01

    The application of robotic-assisted surgery offers EndoWrist instruments and 3-D visualization of the operative field, which are improvements over traditional laparoscopy. The results of the few studies published so far have shown that living donor nephrectomy using the robot-assisted technique is safe, feasible, and offers advantages to patients. Since November 2009, 16 patients have undergone robotic-assisted living donor nephrectomy at our Institute. Patients were divided into two groups according to the surgical technique adopted for the procedure: Group A, hand-assisted robotic nephrectomy (eight patients); Group B, totally robotic nephrectomy (eight patients). Intra-operative bleeding was similar in the two groups (90 vs 100 mL for Groups A and B, respectively). Median warm ischemia time was significantly shorter in Group A (2.3 vs 5.1 min for Groups A and B, respectively; P-value = 0.05). Switching to the open procedure was never required. Median operative time was not significantly longer in Group A than in Group B (275 vs 250 min, respectively). Robotic-assisted living donor kidney recovery is a safe and effective procedure. Considering the overall technical, clinical, and feasibility aspects of living kidney donation, we believe that the robotic-assisted technique is the method of choice for the surgeon's comfort and the donor's safety. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Modeling and evaluation of hand-eye coordination of surgical robotic system on task performance.

    Science.gov (United States)

    Gao, Yuanqian; Wang, Shuxin; Li, Jianmin; Li, Aimin; Liu, Hongbin; Xing, Yuan

    2017-12-01

    Robotic-assisted minimally invasive surgery changes the direct hand-eye coordination of traditional surgery to indirect instrument-camera coordination, which affects ergonomics, operation performance, and safety. A camera, two instruments, and a target, as the descriptors, are used to construct the workspace correspondence and geometrical relationships in a surgical operation. A parametric model with a set of parameters is proposed to describe the hand-eye coordination of the surgical robot. From the results of two tasks, optimal values and acceptable ranges of these parameters are identified. A 90° viewing angle had the longest completion time; a 60° instrument elevation angle and a 0° deflection angle gave better performance; and there was no significant difference among manipulation angles or observing distances in task performance. This hand-eye coordination model provides evidence for robotic design, surgeon training, and robotic initialization to achieve dexterous and safe manipulation in surgery. Copyright © 2017 John Wiley & Sons, Ltd.

  18. New trends in medical and service robots human centered analysis, control and design

    CERN Document Server

    Chevallereau, Christine; Pisla, Doina; Bleuler, Hannes; Rodić, Aleksandar

    2016-01-01

    Medical and service robotics integrates several disciplines and technologies such as mechanisms, mechatronics, biomechanics, humanoid robotics, exoskeletons, and anthropomorphic hands. This book presents the most recent advances in medical and service robotics, with a stress on human aspects. It collects the selected peer-reviewed papers of the Fourth International Workshop on Medical and Service Robots, held in Nantes, France in 2015, covering topics on: exoskeletons, anthropomorphic hands, therapeutic robots and rehabilitation, cognitive robots, humanoid and service robots, assistive robots and elderly assistance, surgical robots, human-robot interfaces, BMI and BCI, haptic devices and design for medical and assistive robotics. This book offers a valuable addition to existing literature.

  19. Dynamics, control and sensor issues pertinent to robotic hands for the EVA retriever system

    Science.gov (United States)

    Mclauchlan, Robert A.

    1987-01-01

    Basic dynamics, sensor, control, and related artificial intelligence issues pertinent to smart robotic hands for the Extra Vehicular Activity (EVA) Retriever system are summarized and discussed. These smart hands are to be used as end effectors on arms attached to manned maneuvering units (MMU). The Retriever robotic systems comprised of MMU, arm and smart hands, are being developed to aid crewmen in the performance of routine EVA tasks including tool and object retrieval. The ultimate goal is to enhance the effectiveness of EVA crewmen.

  20. Fully embedded myoelectric control for a wearable robotic hand orthosis.

    Science.gov (United States)

    Ryser, Franziska; Butzer, Tobias; Held, Jeremia P; Lambercy, Olivier; Gassert, Roger

    2017-07-01

    To prevent learned non-use of the affected hand in chronic stroke survivors, rehabilitative training should be continued after discharge from the hospital. Robotic hand orthoses are a promising approach for home rehabilitation, and when combined with intuitive control based on electromyography, the therapy outcome can be improved. However, such systems often require extensive cabling, experience in electrode placement, and connection to external computers. This paper presents the framework for a stand-alone, fully wearable, real-time myoelectric intention-detection system based on the Myo armband. The hardware and software for real-time gesture classification were developed and combined with a routine to train and customize the classifier, leading to a unique ease of use. The system, including training of the classifier, can be set up in less than one minute. Results demonstrated that: (1) the proposed algorithm can classify five gestures with an accuracy of 98%; (2) the final system can classify three gestures online with an accuracy of 94.3%; and, in a preliminary test, (3) it can classify three gestures from data acquired from mildly to severely impaired stroke survivors with an accuracy of over 78.8%. These results highlight the potential of the presented system for electromyography-based intention detection for stroke survivors and, with the integration of the system into a robotic hand orthosis, the potential for a wearable platform for all-day robot-assisted home rehabilitation.
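Streaming myoelectric classifiers of this kind are commonly stabilized by majority voting over the last few window-level predictions before a gesture is reported to the orthosis. A small sketch of that smoothing step follows; the window count and agreement threshold are illustrative assumptions, not values from the paper.

```python
from collections import Counter, deque

class GestureSmoother:
    """Majority vote over the last k window-level predictions, a common
    way to stabilize a streaming gesture classifier before its output
    drives an orthosis."""
    def __init__(self, k=5, min_agree=3):
        self.buf = deque(maxlen=k)   # most recent window predictions
        self.min_agree = min_agree   # votes needed to switch gesture
        self.current = "rest"        # reported gesture

    def update(self, prediction):
        """Feed one raw window prediction; return the smoothed gesture."""
        self.buf.append(prediction)
        label, votes = Counter(self.buf).most_common(1)[0]
        if votes >= self.min_agree:
            self.current = label
        return self.current

smoother = GestureSmoother()
stream = ["open", "open", "close", "open", "open", "close", "close",
          "close", "close"]
outputs = [smoother.update(p) for p in stream]
```

The smoother trades a few windows of extra latency for robustness: a single misclassified window (the early "close" above) cannot flip the reported gesture.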

  1. Kinect technology for hand tracking control of surgical robots: technical and surgical skill comparison to current robotic masters.

    Science.gov (United States)

    Kim, Yonjae; Leonard, Simon; Shademan, Azad; Krieger, Axel; Kim, Peter C W

    2014-06-01

    Current surgical robots are controlled by a mechanical master located away from the patient, tracking the surgeon's hands by wire and pulleys or mechanical linkage. Contactless hand tracking for surgical robot control is an attractive alternative, because it can be executed with minimal footprint at the patient's bedside without impairing sterility, while eliminating the current disassociation between surgeon and patient. We compared the technical and technologic feasibility of contactless hand tracking to the current clinical standard master controllers. A hand-tracking system (Kinect™-based 3Gear), a wire-based mechanical master (Mantis Duo), and a clinical mechanical linkage master (da Vinci) were evaluated for technical parameters with strong clinical relevance: system latency, static noise, robot slave tremor, and controller range. Five experienced surgeons performed a skill comparison study, evaluating the three different master controllers for efficiency and accuracy in peg transfer and pointing tasks. da Vinci had the lowest latency of 89 ms, followed by Mantis with 374 ms and 3Gear with 576 ms. Mantis and da Vinci produced zero static error; 3Gear produced an average static error of 0.49 mm. The tremor of the robot used by the 3Gear and Mantis systems had a radius of 1.7 mm, compared with 0.5 mm for da Vinci. The three master controllers all had similar range. The surgeons took 1.98 times longer to complete the peg transfer task with the 3Gear system compared with Mantis, and 2.72 times longer with Mantis compared with da Vinci (p value 2.1e-9). For the pointer task, surgeons were most accurate with da Vinci, with an average error of 0.72 mm compared with Mantis's 1.61 mm and 3Gear's 2.41 mm (p value 0.00078). Contactless hand-tracking technology as a surgical master can execute simple surgical tasks. Although the traditional master controllers outperformed it, contactless hand-tracking is a first-generation technology, and its clinical potential is promising.

  2. Motion control for a walking companion robot with a novel human–robot interface

    Directory of Open Access Journals (Sweden)

    Yunqi Lv

    2016-09-01

    Full Text Available A walking companion robot for rehabilitation from dyskinesia of the lower limbs is presented in this article. A new human-robot interface (HRI) is designed which adopts a one-axis force sensor and a potentiometer connector to detect the motion of the user. To track the displacement and angle between the user and the robot precisely in real time, the common motions are classified into two elemental motion states. With a method for distinguishing these motion states, a classification scheme for motion control is adopted. A mathematical model-based control method is first introduced and the corresponding control systems are built. Owing to the unavoidable deviation of the mathematical model-based control method, a force control method is then proposed and its control systems are built. The corresponding simulations demonstrate the efficiency of the two proposed control methods. The experimental data and robot paths verify the two control methods and indicate that the force control method can better satisfy the user's requirements.

  3. Brain computer interface for operating a robot

    Science.gov (United States)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software-based system that translates the electroencephalogram (EEG) signals produced by brain activity into commands for computers and other external devices. In this paper, we present a non-invasive BCI system that reads EEG signals from trained brain activity using a neuro-signal acquisition headset, translates them into computer-readable form, and uses them to control the motion of a robot. The robot performs the actions instructed to it in real time. We used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
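The mapping from detected cognitive states to robot motion can be sketched as a thresholded lookup, so that low-confidence detections stop the robot rather than move it. The state names follow the Push/Pull scheme above; the confidence threshold and command names are assumptions for illustration.

```python
# Hypothetical mapping from BCI-detected cognitive states to robot
# motion commands; anything unrecognized or uncertain maps to "stop".
COMMANDS = {"push": "forward", "pull": "backward", "neutral": "stop"}

def bci_command(state, confidence, threshold=0.9):
    """Issue a motion command only when the detector's confidence
    clears the threshold; otherwise keep the robot stopped. This is
    how high specificity can be traded for responsiveness."""
    if confidence < threshold:
        return "stop"
    return COMMANDS.get(state, "stop")
```

Raising `threshold` makes the robot move only on very confident detections (fewer false starts, more missed commands); lowering it does the opposite.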

  4. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction.

    Science.gov (United States)

    Roldán, Juan Jesús; Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A; Del Cerro, Jaime; Barrientos, Antonio

    2017-07-27

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation.

  5. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    Science.gov (United States)

    Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.

    2017-01-01

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407

  6. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    Directory of Open Access Journals (Sweden)

    Juan Jesús Roldán

    2017-07-01

    Full Text Available Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation.

  7. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    International Nuclear Information System (INIS)

    Lee, Jin Hyuck; Kim, Dae Hyun

    2015-01-01

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand, in an application designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.
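The core of such a flex-sensor glove is a per-sensor calibration from raw readings to joint angles, followed by a coarse hand-shape decision that can be mapped to robot commands. A hypothetical sketch follows; the ADC calibration points, angle range, and shape thresholds are invented for illustration and are not from the paper.

```python
# Illustrative two-point calibration per flex sensor: map raw ADC
# readings to finger-joint angles, then to a coarse hand shape.
ADC_FLAT, ADC_FIST = 180.0, 620.0   # assumed readings at flat hand / fist

def sensor_to_angle(adc, max_angle=90.0):
    """Linear interpolation between the flat-hand and closed-fist
    calibration points, clamped to the physical joint range."""
    t = (adc - ADC_FLAT) / (ADC_FIST - ADC_FLAT)
    return max(0.0, min(1.0, t)) * max_angle

def hand_shape(adc_readings):
    """Classify a coarse hand shape from five finger-flex angles."""
    angles = [sensor_to_angle(a) for a in adc_readings]
    mean = sum(angles) / len(angles)
    if mean > 60:
        return "fist", angles
    if mean < 20:
        return "open", angles
    return "partial", angles

shape, angles = hand_shape([600, 610, 590, 605, 615])
```

A controller would poll the five sensors, call `hand_shape`, and translate the resulting label (or the individual joint angles) into motion commands for the inspection robot arm.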

  8. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin Hyuck; Kim, Dae Hyun [Seoul National University of Technology, Seoul (Korea, Republic of)

    2015-02-15

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand, in an application designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.

  9. Calculator-Controlled Robots: Hands-On Mathematics and Science Discovery

    Science.gov (United States)

    Tuchscherer, Tyson

    2010-01-01

    The Calculator Controlled Robots activities are designed to engage students in hands-on inquiry-based missions. These activities address National science and technology standards, as well as specifically focusing on mathematics content and process standards. There are ten missions and three exploration extensions that provide activities for up to…

  10. Analyzing Robotic Kinematics Via Computed Simulations

    Science.gov (United States)

    Carnahan, Timothy M.

    1992-01-01

    Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.

  11. Robotic Hand-Assisted Training for Spinal Cord Injury Driven by Myoelectric Pattern Recognition: A Case Report.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Stampas, Argyrios; Zhou, Ping

    2017-10-01

    A 51-year-old man with an incomplete C6 spinal cord injury sustained 26 yrs earlier attended twenty 2-hr visits over 10 wks for robot-assisted hand training driven by myoelectric pattern recognition. In each visit, his right hand was assisted by an exoskeleton robot to perform motions, with the robot triggered by his own motion intentions. The hand robot was designed for this study and can perform six kinds of motions, including hand closing/opening; thumb, index finger, and middle finger closing/opening; and middle, ring, and little finger closing/opening. After the training, his grip force increased from 13.5 to 19.6 kg, his pinch force remained the same (5.0 kg), his Box and Block test score increased from 32 to 39, and his score on the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP) test Part 4.B increased from 22 to 24. He accomplished the tasks in GRASSP Part 4.B 28.8% faster on average. The results demonstrate the feasibility and effectiveness of robot-assisted training driven by myoelectric pattern recognition after spinal cord injury.

  12. Space robotics--DLR's telerobotic concepts, lightweight arms and articulated hands.

    Science.gov (United States)

    Hirzinger, G; Brunner, B; Landzettel, K; Sporer, N; Butterfass, J; Schedl, M

    2003-01-01

    The paper briefly outlines DLR's experience with real space robot missions (ROTEX and ETS VII). It then discusses forthcoming projects, e.g., free-flying systems in low or geostationary orbit and robot systems around the space station ISS, where the telerobotic system MARCO might represent a common baseline. Finally, it describes our efforts in developing a new generation of "mechatronic" ultra-lightweight arms with multifingered hands. The third arm generation is operable now (approaching present-day technical limits). In a similar way, DLR's four-fingered Hand II was a big step towards higher reliability and better performance. Artificial robonauts for space are a central goal now for the Europeans as well as for NASA, and the first verification tests of DLR's joint components are supposed to fly already end of 93 on the space station.

  13. Volunteers Oriented Interface Design for the Remote Navigation of Rescue Robots at Large-Scale Disaster Sites

    Science.gov (United States)

    Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi

    This paper aims to construct an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force-feedback steering wheel and a wall of six monitors; it provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Experimental results show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface achieves high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface is good at precise navigation within complex structures, while imposing little stress on operators. The two interfaces are designed to switch into each other at any time, providing a combined, efficient navigation method.

  14. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention of six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250 ms, including data acquisition, transmission, and processing. One SCI subject also participated in training sessions during his second and third visits, and both the control accuracy and efficiency tended to improve. These results show great potential for applying the advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.

  15. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    Science.gov (United States)

    Hellman, Randall B; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I; Santos, Veronica J

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. 
The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.

  16. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss

    Directory of Open Access Journals (Sweden)

    Randall B. Hellman

    2015-02-01

    Full Text Available Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the BairClaw presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain and increased prosthesis use due to improved functionality and reduced cognitive burden.

  17. An Inexpensive Method for Kinematic Calibration of a Parallel Robot by Using One Hand-Held Camera as Main Sensor

    Directory of Open Access Journals (Sweden)

    Ricardo Carelli

    2013-08-01

    Full Text Available This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. As the main sensor, a single camera is installed in the robot hand; it determines the relative position of the robot with respect to a spherical object fixed in the working area of the robot. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new group of parameters that minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are used to model the projection of the object in the image and thereby improve spatial measurements. Finally, several working tests, both static and tracking, are executed in order to verify how the behaviour of the robotic system improves when calibrated parameters are used instead of nominal parameters. Notably, the proposed method requires neither external nor expensive sensors, which makes it well suited to teaching and research activities.

  18. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  19. Clinical effects of using HEXORR (Hand Exoskeleton Rehabilitation Robot) for movement therapy in stroke rehabilitation.

    Science.gov (United States)

    Godfrey, Sasha Blue; Holley, Rahsaan J; Lum, Peter S

    2013-11-01

    The goals of this pilot study were to quantify the clinical benefits of using the Hand Exoskeleton Rehabilitation Robot for hand rehabilitation after stroke and to determine the population best served by this intervention. Nine subjects with chronic stroke (one excluded from analysis) completed 18 sessions of training with the Hand Exoskeleton Rehabilitation Robot and a preevaluation, a postevaluation, and a 90-day clinical evaluation. Overall, the subjects improved in both range of motion and clinical measures. Compared with the preevaluation, the subjects showed significant improvements in range of motion, grip strength, and the hand component of the Fugl-Meyer (mean changes, 6.60 degrees, 8.84 percentage points, and 1.86 points, respectively). A subgroup of six subjects exhibited lower tone and received a higher dosage of training. These subjects had significant gains in grip strength, the hand component of the Fugl-Meyer, and the Action Research Arm Test (mean changes, 8.42 percentage points, 2.17 points, and 2.33 points, respectively). Future work is needed to better manage higher levels of hypertonia and provide more support to subjects with higher impairment levels; however, the current results support further study into the Hand Exoskeleton Rehabilitation Robot treatment.

  20. Variable Thumb Moment Arm Modeling and Thumb-Tip Force Production of a Human-Like Robotic Hand.

    Science.gov (United States)

    Niehues, Taylor D; Deshpande, Ashish D

    2017-10-01

    The anatomically correct testbed (ACT) hand mechanically simulates the musculoskeletal structure of the fingers and thumb of the human hand. In this work, we analyze the muscle moment arms (MAs) and thumb-tip force vectors in the ACT thumb in order to compare the ACT thumb's mechanical structure to the human thumb. Motion data are used to determine joint angle-dependent MA models, and thumb-tip three-dimensional (3D) force vectors are experimentally analyzed when forces are applied to individual muscles. Results are presented for both a nominal ACT thumb model designed to match human MAs and an adjusted model that more closely replicates human-like thumb-tip forces. The results confirm that the ACT thumb is capable of faithfully representing human musculoskeletal structure and muscle functionality. Using the ACT hand as a physical simulation platform allows us to gain a better understanding of the underlying biomechanical and neuromuscular properties of the human hand to ultimately inform the design and control of robotic and prosthetic hands.
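
    Joint angle-dependent moment-arm (MA) models like those described above are commonly obtained with the tendon-excursion method, where the moment arm equals the negative derivative of tendon length with respect to joint angle, MA(θ) = -dL/dθ. A minimal numeric sketch of that relationship, using a purely hypothetical tendon-length function rather than the ACT thumb's actual geometry:

```python
import math

def tendon_length(theta):
    """Toy tendon-length model L(theta) in meters: a constant path length,
    a term for wrapping a 5 mm pulley, and a small posture-dependent term.
    Purely illustrative, not the ACT thumb's measured geometry."""
    return 0.10 - 0.005 * theta + 0.001 * math.sin(theta)

def moment_arm(theta, h=1e-6):
    """Tendon-excursion estimate of the moment arm: MA = -dL/dtheta,
    computed here by central differences."""
    return -(tendon_length(theta + h) - tendon_length(theta - h)) / (2 * h)
```

For this model the moment arm is 0.005 - 0.001·cos(θ) m, so it varies with posture in the same way the angle-dependent MA models in the abstract do.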

  1. Analysis of Inverse Kinematics of an Anthropomorphic Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pramod Kumar Parida

    2013-03-01

    Full Text Available In this paper, a new method for solving the inverse kinematics of the fingers of an anthropomorphic hand is proposed. Solving the inverse kinematic equations is a complex problem; the complexity comes from the nonlinearity of the mapping between joint space and Cartesian space and from the existence of multiple solutions. This is a typical problem in robotics that must be solved in order to control the fingers of an anthropomorphic robotic hand so that it can perform its designated tasks. For more complex structures operating in three-dimensional space, deducing a mathematical solution for the inverse kinematics may prove challenging. In this paper, using the ability of ANFIS (Adaptive Neuro-Fuzzy Inference System) to learn from training data, it is possible to create an ANFIS network, an implementation of a representative fuzzy inference system, with only a limited mathematical representation of the system. The main advantages of this method over others are its easy implementation, shorter computation time, and better response with acceptable error.
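
    The multiple-solution property mentioned in the abstract is visible even for a planar two-link finger, where closed-form inverse kinematics yields two joint configurations (flexed "up" and "down") for a single fingertip position. A self-contained analytic sketch of that baseline case (the link lengths and target are arbitrary, and this is not the ANFIS method itself):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link finger with link lengths l1, l2.
    Returns BOTH joint-angle solutions (q1, q2) in radians, illustrating
    the multiple-solution nature of inverse kinematics."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    solutions = []
    for sign in (+1, -1):          # elbow-up and elbow-down branches
        s2 = sign * math.sqrt(1 - c2 * c2)
        q2 = math.atan2(s2, c2)
        q1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        solutions.append((q1, q2))
    return solutions

def fk(q1, q2, l1, l2):
    """Forward kinematics, used to verify each IK branch."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
```

Both branches map back to the same fingertip position under forward kinematics; with more joints and 3D motion no such closed form may exist, which is what motivates learned approximators such as ANFIS.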

  2. Concept development of a tendon arm manipulator and anthropomorphic robotic hand

    Science.gov (United States)

    Tolman, C. T.

    1987-01-01

    In-house research and development efforts at AMETEK/ORED leading toward next-generation robotic manipulator arm and end-effector technology are summarized. Manipulator arm development has been directed toward a multiple-degree-of-freedom, flexible, tendon-driven concept referred to here as a Tendon Arm Manipulator (TAM). End-effector development has been directed toward a three-fingered, dextrous, tendon-driven, anthropomorphic configuration referred to as an Anthropomorphic Robotic Hand (ARH). Key technology issues are identified for both concepts.

  3. Physiological and subjective evaluation of a human-robot object hand-over task.

    Science.gov (United States)

    Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël

    2011-11-01

    In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough: it is also necessary to guarantee ergonomic robot motions. We have therefore developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer that explicitly takes into account the legibility, safety and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration and "relative ease" of the object hand-over task, and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. We therefore selected three measurements based on galvanic skin conductance response, deltoid muscle activity and ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was used. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort. After each robot gesture the participants were asked to rate it on a three-dimensional subjective scale. The subjective data were in favor of our reference motion. Finally, the three motions elicited different physiological and ocular responses that could be used to partially discriminate between them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.

  4. Fine finger motor skill training with exoskeleton robotic hand in chronic stroke: stroke rehabilitation.

    Science.gov (United States)

    Ockenfeld, Corinna; Tong, Raymond K Y; Susanto, Evan A; Ho, Sze-Kit; Hu, Xiao-ling

    2013-06-01

    Background and Purpose. Stroke survivors often show limited recovery of the hand function needed to perform delicate motions, such as full-hand grasping, finger pinching and individual finger movement. The purpose of this study is to describe the implementation of an exoskeleton robotic hand together with fine finger motor skill training in 2 chronic stroke patients. Case Descriptions. Two post-stroke patients participated in a 20-session training program integrating 10 minutes of physical therapy, 20 minutes of robotic hand training and 15 minutes of functional training tasks with delicate objects (card, pen and coin). These two patients (A and B) had had a cerebrovascular accident 6 months and 11 months, respectively, before enrolling in this study. Outcomes. The results showed that both patients improved on the Fugl-Meyer assessment (FM) and the Action Research Arm Test (ARAT). The patients had better isolation of individual finger flexion and extension, based on the reduced muscle co-contraction in the electromyographic (EMG) signals and on finger extension force, after 20 sessions of training. Discussion. This preliminary study showed that focusing on fine finger motor skills together with the exoskeleton robotic hand could improve motor recovery of the fingers and hand function in the upper extremity, as shown by the ARAT. Future randomized controlled trials are needed to evaluate the clinical effectiveness.

  5. Control of a Supernumerary Robotic Hand by Foot: An Experimental Study in Virtual Reality.

    Science.gov (United States)

    Abdi, Elahe; Burdet, Etienne; Bouri, Mohamed; Bleuler, Hannes

    2015-01-01

    In the operational theater, the surgical team could greatly benefit from a robotic supplementary hand under the surgeon's full control. The surgeon may thus become more autonomous; this may reduce communication errors with the assistants, and the extra hand could take over difficult tasks such as holding tools without tremor. In this paper, we therefore examine the possibility of controlling a third robotic hand with the movements of one foot. Three experiments in virtual reality were designed to assess the feasibility of this control strategy, the learning curve of the subjects in different tasks, and the coordination of foot movements with the two natural hands. Results show that the limbs are moved simultaneously, in parallel rather than serially. Participants' performance improved within a few minutes of practice, without any specific difficulty in completing the tasks. Subjective assessment by the subjects indicated that controlling a third hand by foot was easy and required only negligible physical and mental effort. The sense of ownership was reported to improve through the experiments. The mental burden was not directly related to the level of motion required by a task, but depended on the type of activity and on practice. The most difficult task was moving the two hands and the foot in opposite directions. These results suggest that a combination of practice and appropriate tasks can enhance the learning process for controlling a robotic hand by foot.

  6. Control of a Supernumerary Robotic Hand by Foot: An Experimental Study in Virtual Reality.

    Directory of Open Access Journals (Sweden)

    Elahe Abdi

    Full Text Available In the operational theater, the surgical team could greatly benefit from a robotic supplementary hand under the surgeon's full control. The surgeon may thus become more autonomous; this may reduce communication errors with the assistants, and the extra hand could take over difficult tasks such as holding tools without tremor. In this paper, we therefore examine the possibility of controlling a third robotic hand with the movements of one foot. Three experiments in virtual reality were designed to assess the feasibility of this control strategy, the learning curve of the subjects in different tasks, and the coordination of foot movements with the two natural hands. Results show that the limbs are moved simultaneously, in parallel rather than serially. Participants' performance improved within a few minutes of practice, without any specific difficulty in completing the tasks. Subjective assessment by the subjects indicated that controlling a third hand by foot was easy and required only negligible physical and mental effort. The sense of ownership was reported to improve through the experiments. The mental burden was not directly related to the level of motion required by a task, but depended on the type of activity and on practice. The most difficult task was moving the two hands and the foot in opposite directions. These results suggest that a combination of practice and appropriate tasks can enhance the learning process for controlling a robotic hand by foot.

  7. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones.

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-08-20

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user's daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.
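
    As an illustration of the kind of supervised classification described above, here is a toy nearest-centroid classifier over two trace features. The features (mean horizontal touch position and signed trace curvature of thumb swipes) and the classifier itself are simplified, hypothetical stand-ins for the paper's refined feature set and trained models:

```python
# Toy left/right operating-hand classifier from touchscreen-trace features.
# Feature vectors are (mean_x, signed_curvature); both the features and the
# nearest-centroid model are illustrative assumptions, not the paper's system.

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def train(samples):
    """samples: {label: [feature vectors]} -> {label: centroid}."""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, feat):
    """Assign feat to the label with the nearest centroid."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], feat))

# Synthetic training data: left-hand thumb traces start near the left edge
# and curve one way; right-hand traces mirror them.
training = {
    "left":  [(0.20, -0.8), (0.25, -0.6), (0.30, -0.7)],
    "right": [(0.80,  0.7), (0.75,  0.6), (0.70,  0.8)],
}
model = train(training)
```

A real system would extract many more trace features and use a stronger classifier, but the decide-per-trace structure is the same.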

  8. Constraint Study for a Hand Exoskeleton: Human Hand Kinematics and Dynamics

    Directory of Open Access Journals (Sweden)

    Fai Chen Chen

    2013-01-01

    Full Text Available In the last few years, the number of projects studying the human hand from the robotic point of view has increased rapidly, due to growing interest in academic and industrial applications. Nevertheless, the complexity of the human hand, given its large number of degrees of freedom (DoF) within a significantly reduced space, requires exhaustive analysis before any applications are proposed. The aim of this paper is to provide a complete summary of the kinematic and dynamic characteristics of the human hand as a preliminary step towards the development of hand devices such as prosthetic/robotic hands and exoskeletons imitating the human hand's shape and functionality. A collection of data and constraints relevant to hand movements is presented, and the direct and inverse kinematics, as well as the dynamics, are solved for all the fingers; anthropometric data and the dynamic equations allow simulations to be performed to understand the behavior of the fingers.

  9. Framework for Developing a Multimodal Programming Interface Used on Industrial Robots

    Directory of Open Access Journals (Sweden)

    Bogdan Mocan

    2014-12-01

    Full Text Available The approach proposed within this paper shifts the focus from the coordinate-based programming of an industrial robot, which currently dominates the field, to an object-based programming scheme. The general framework proposed in this paper is designed to perform natural language understanding, gesture integration and semantic analysis, facilitating the development of a multimodal robot programming interface that supports intuitive programming.

  10. Space suit glove design with advanced metacarpal phalangeal joints and robotic hand evaluation.

    Science.gov (United States)

    Southern, Theodore; Roberts, Dustyn P; Moiseev, Nikolay; Ross, Amy; Kim, Joo H

    2013-06-01

    One area of space suits that is ripe for innovation is the glove. Existing models allow for some fine motor control, but the power grip--the act of grasping a bar--is cumbersome due to high torque requirements at the knuckle or metacarpal phalangeal joint (MCP). This area in particular is also a major source of complaints of pain and injury as reported by astronauts. This paper explores a novel fabrication and patterning technique that allows for more freedom of movement and less pain at this crucial joint in the manned space suit glove. The improvements are evaluated through unmanned testing, manned testing while depressurized in a vacuum glove box, and pressurized testing with a robotic hand. MCP joint flex score improved from 6 to 6.75 (out of 10) in the final glove relative to the baseline glove, and torque required for flexion decreased an average of 17% across all fingers. Qualitative assessments during unpressurized and depressurized manned testing also indicated the final glove was more comfortable than the baseline glove. The quantitative results from both human subject questionnaires and robotic torque evaluation suggest that the final iteration of the glove design enables flexion at the MCP joint with less torque and more comfort than the baseline glove.

  11. Method of Grasping Control by Computing Internal and External Impedances for Two Robot Fingers, and Its Application to Admittance Control of a Robot Hand-Arm System

    Directory of Open Access Journals (Sweden)

    Jian Huang

    2015-08-01

    Full Text Available Impedance control is an important technology used in the grasping control of a robot hand. Numerous studies related to grasping algorithms have been reported in recent years, with the contact force between robot fingers and the object to be grasped being primarily discussed in most cases. Generally, a coupling effect occurs between the internal loop of the grasping operation and the external loop of the interaction with the environment when a multi-fingered robot hand is used to complete a contact task. Therefore, a robot hand cannot hold an object using a large external force to complete a wide range of tasks by applying the conventional method. In this paper, the coupling of the internal/external forces occurring in grasping operations using multiple fingers is analysed. Then, improved impedance control based on the previous method is proposed as an effective tool to solve the problem of grasping failure caused by single-finger contact. Furthermore, a method for applying the improved grasping algorithm to the admittance control of a robot hand-arm system is also proposed. The proposed method divides the impedance effect into the grasping control of the hand and the cooperative control of the arm, so that expanding the task space and increasing the flexibility of impedance adjustment can be achieved. Experiments were conducted to demonstrate the effectiveness of the proposed method.
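
    The basic idea behind the impedance loops discussed above can be sketched for a single degree of freedom: the controller renders a virtual spring-damper that pulls the fingertip toward a target grasp position. The gains, mass, and explicit-Euler integration below are illustrative choices only, not the paper's hand-arm controller:

```python
# Minimal 1-DoF impedance-control sketch: a fingertip of mass m is driven
# toward target position x_d by the impedance law F = K*(x_d - x) - B*v.
# K, B, m and the integration scheme are illustrative assumptions.

def simulate(x0, x_d, K=50.0, B=14.0, m=1.0, dt=0.001, steps=5000):
    """Integrate the closed-loop fingertip dynamics with explicit Euler
    and return the final (position, velocity)."""
    x, v = x0, 0.0
    for _ in range(steps):
        f = K * (x_d - x) - B * v   # virtual spring-damper (impedance law)
        a = f / m                   # Newton's second law
        v += a * dt
        x += v * dt
    return x, v
```

With these gains the virtual spring-damper is close to critically damped, so the fingertip settles at the target without oscillation; in the paper's setting the same impedance effect is split between the hand's grasping loop and the arm's cooperative loop.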

  12. Robotic hand with modular extensions

    Science.gov (United States)

    Salisbury, Curt Michael; Quigley, Morgan

    2015-01-20

    A robotic device is described herein. The robotic device includes a frame that comprises a plurality of receiving regions that are configured to receive a respective plurality of modular robotic extensions. The modular robotic extensions are removably attachable to the frame at the respective receiving regions by way of respective mechanical fuses. Each mechanical fuse is configured to trip when a respective modular robotic extension experiences a predefined load condition, such that the respective modular robotic extension detaches from the frame when the load condition is met.

  13. Hands in space: gesture interaction with augmented-reality interfaces.

    Science.gov (United States)

    Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai

    2014-01-01

    Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.

  14. Access to hands-on mathematics measurement activities using robots controlled via speech generating devices: three case studies.

    Science.gov (United States)

    Adams, Kim; Cook, Al

    2014-07-01

    To examine how using a robot controlled via a speech generating device (SGD) influences the ways students with physical and communication limitations can demonstrate their knowledge in math measurement activities. Three children with severe physical disabilities and complex communication needs used the robot and SGD system to perform four math measurement lessons in comparing, sorting and ordering objects. The performance of the participants was measured and the process of using the system was described in terms of manipulation and communication events. Stakeholder opinions were solicited regarding robot use. Robot use revealed some gaps in the procedural knowledge of the participants. Access to both the robot and SGD was shown to provide several benefits. Stakeholders thought the intervention was important and feasible for a classroom environment. The participants were able to participate actively in the hands-on and communicative measurement activities and thus meet the demands of current math instruction methods. Current mathematics pedagogy encourages doing hands-on activities while communicating about concepts. Adapted Lego robots enabled children with severe physical disabilities to perform hands-on length measurement activities. Controlling the robots from speech generating devices (SGD) enabled the children, who also had complex communication needs, to reflect and report on results during the activities. By using the robots combined with SGDs, children both exhibited their knowledge of and experienced the concepts of mathematical measurements.

  15. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    Science.gov (United States)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  16. Versatile robotic interface to evaluate, enable and train locomotion and balance after neuromotor disorders.

    Science.gov (United States)

    Dominici, Nadia; Keller, Urs; Vallery, Heike; Friedli, Lucia; van den Brand, Rubia; Starkey, Michelle L; Musienko, Pavel; Riener, Robert; Courtine, Grégoire

    2012-07-01

    Central nervous system (CNS) disorders distinctly impair locomotor pattern generation and balance, but technical limitations prevent independent assessment and rehabilitation of these subfunctions. Here we introduce a versatile robotic interface to evaluate, enable and train pattern generation and balance independently during natural walking behaviors in rats. In evaluation mode, the robotic interface affords detailed assessments of pattern generation and dynamic equilibrium after spinal cord injury (SCI) and stroke. In enabling mode, the robot acts as a propulsive or postural neuroprosthesis that instantly promotes unexpected locomotor capacities including overground walking after complete SCI, stair climbing following partial SCI and precise paw placement shortly after stroke. In training mode, robot-enabled rehabilitation, epidural electrical stimulation and monoamine agonists reestablish weight-supported locomotion, coordinated steering and balance in rats with a paralyzing SCI. This new robotic technology and associated concepts have broad implications for both assessing and restoring motor functions after CNS disorders, both in animals and in humans.

  17. Human versus Robot: A Propensity-Matched Analysis of the Accuracy of Free Hand versus Robotic Guidance for Placement of S2 Alar-Iliac (S2AI) Screws.

    Science.gov (United States)

    Shillingford, Jamal N; Laratta, Joseph L; Park, Paul J; Lombardi, Joseph M; Tuchman, Alexander; Saifi, Comron S; Lehman, Ronald A; Lenke, Lawrence G

    2018-04-18

    Retrospective matched cohort analysis. To compare the accuracy of S2 alar-iliac (S2AI) screw placement by robotic guidance versus free hand technique. Spinopelvic fixation utilizing S2AI screws provides optimal fixation across the lumbosacral junction, allowing for solid fusion, especially in long segment fusion constructs. Traditionally, S2AI screw placement has required fluoroscopic guidance for accurate screw placement. Herein, we present the first series comparing a free hand and a robotic-guided technique for S2AI screw placement. Sixty-eight consecutive patients who underwent S2AI screw placement by either a free hand or robotic technique between 2015 and 2016 were reviewed. Propensity score-matching was utilized to control for preoperative characteristic imbalances. Screw position and accuracy were evaluated using three-dimensional manipulation of CT reconstructions from intraoperative O-arm imaging. A total of 51 patients (105 screws) were matched, 28 (59 screws) in the free hand group (FHG) and 23 (46 screws) in the robot group (RG). The mean ages in the FHG and RG were 57.9 ± 14.6 years and 61.6 ± 12.0 years (P = 0.342), respectively. The average caudal angle in the sagittal plane was significantly larger in the RG (31.0 ± 10.0° vs. 25.7 ± 8.8°, P = 0.005). There was no difference between the FHG and RG in the horizontal angle, measured in the axial plane using the posterior superior iliac spine (PSIS) as a reference (41.1 ± 8.1° vs. 42.8 ± 6.6°, P = 0.225), or in the S2AI to S1 screw angle (9.4 ± 7.0° vs. 11.3 ± 9.9°, P = 0.256), respectively. There was no difference in the overall accuracy between FHG and RG (94.9% vs. 97.8%, P = 0.630). Additionally, there

  18. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones

    Directory of Open Access Journals (Sweden)

    Hansong Guo

    2016-08-01

    Full Text Available As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR), respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.

  19. Pressure Sensor: State of the Art, Design, and Application for Robotic Hand

    Directory of Open Access Journals (Sweden)

    Ahmed M. Almassri

    2015-01-01

    Full Text Available We survey the state of the art in force sensors for the design and application of robotic hands. Most of the force sensors are examined in the context of tactile sensing. Over the past decade, many papers have discussed sensor technologies and transducer methods based on microelectromechanical systems (MEMS) and silicon, used to improve the accuracy and measurement performance of tactile sensing, especially for robotic hand applications. We found that transducer types such as piezoresistive elements and materials such as polymers are used to improve sensing sensitivity for future grasping mechanisms. Growth in such applications is expected to extend to high-risk tasks that demand very precise sensing. The field shows considerable potential and attracts significant research attention.

  20. Cognitive Human-Machine Interface Applied in Remote Support for Industrial Robot Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Kosicki

    2013-10-01

    Full Text Available An attempt is currently being made to introduce industrial robots widely to Small-Medium Enterprises (SMEs). Since these enterprises usually operate too few robot units to justify specialized departments for robot maintenance, they must be provided with inexpensive and immediate remote support. This paper evaluates whether such support can be provided by means of Cognitive Info-communication, communication in which human cognitive capabilities are extended irrespective of geographical distance. The evaluation is carried out with the aid of an experimental system consisting of physically separated local and remote rooms: a six-degree-of-freedom NACHI SH133-03 industrial robot is situated in the local room, while the operator, who supervises the robot by means of an audio-visual Cognitive Human-Machine Interface, is situated in the remote room. The results of simple experiments show that Cognitive Info-communication is not only an efficient means of providing remote support, but probably also a powerful tool for enhancing interaction with any data-rich environment that requires good conceptual understanding of the system's state and careful attention management. Furthermore, the paper discusses data presentation and reduction methods for data-rich environments, and introduces the concepts of Naturally Acquired Data and Cognitive Human-Machine Interfaces.

  1. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  2. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    Full Text Available In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
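    The feature step named in this record, expressing joint rotations as quaternions before classification, can be sketched as follows. The Euler-angle convention is an assumption for illustration; the OpenNI skeleton API exposes joint orientations in its own format.

```python
import math

# Minimal sketch of a quaternion feature: convert a joint rotation given as
# intrinsic XYZ Euler angles (radians) into a unit quaternion (w, x, y, z).
# The angle convention is an assumed stand-in, not OpenNI's actual output.

def euler_to_quaternion(roll, pitch, yaw):
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

q = euler_to_quaternion(0.0, 0.0, math.pi / 2)  # pure 90-degree yaw
print([round(v, 3) for v in q])  # -> [0.707, 0.0, 0.0, 0.707]
```

    Quaternions avoid the gimbal-lock and wrap-around discontinuities of raw angles, which is one reason they make robust inputs for a neural-network classifier.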

  3. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones †

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-01-01

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy. PMID:27556461

  4. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns from the data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns with reasonable accuracy.
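    The chain-code idea referenced here turns a continuous trajectory into a discrete symbol stream that an HMM can consume. The sketch below is a plain 8-direction chain code; the paper's *modified* variant is not specified in this record, so this standard construction is only an illustration.

```python
import math

# Plain 8-direction chain code of a 2D fingertip trajectory: each segment
# between consecutive samples is quantized to one of 8 compass directions
# (0=E, 1=NE, 2=N, ..., 7=SE). The resulting symbol stream is the kind of
# discrete observation sequence an HMM is trained on.

def chain_code(points):
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

path = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)]
print(chain_code(path))  # -> [0, 1, 2, 3]
```

    Quantizing to directions makes the representation invariant to absolute position and, with a suitable modification, to scale, which is why chain codes pair well with HMM-based gesture models.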

  5. Haptic-based neurorehabilitation in poststroke patients: a feasibility prospective multicentre trial for robotics hand rehabilitation.

    Science.gov (United States)

    Turolla, Andrea; Daud Albasini, Omar A; Oboe, Roberto; Agostini, Michela; Tonin, Paolo; Paolucci, Stefano; Sandrini, Giorgio; Venneri, Annalena; Piron, Lamberto

    2013-01-01

    Background. Haptic robots allow the exploitation of known motor learning mechanisms, representing a valuable option for motor treatment after stroke. The aim of this feasibility multicentre study was to test the clinical efficacy of a haptic prototype, for the recovery of hand function after stroke. Methods. A prospective pilot clinical trial was planned on 15 consecutive patients enrolled in 3 rehabilitation centres in Italy. All the framework features of the haptic robot (e.g., control loop, external communication, and graphic rendering for virtual reality) were implemented into a real-time MATLAB/Simulink environment, controlling a five-bar linkage able to provide forces up to 20 [N] at the end effector, used for finger and hand rehabilitation therapies. Clinical (i.e., Fugl-Meyer upper extremity scale; nine-hole pegboard test) and kinematics (i.e., time; velocity; jerk metric; normalized jerk of standard movements) outcomes were assessed before and after treatment to detect changes in patients' motor performance. Reorganization of cortical activation was detected in one patient by fMRI. Results and Conclusions. All patients showed significant improvements in both clinical and kinematic outcomes. Additionally, fMRI results suggest that the proposed approach may promote a better cortical activation in the brain.

  6. Haptic-Based Neurorehabilitation in Poststroke Patients: A Feasibility Prospective Multicentre Trial for Robotics Hand Rehabilitation

    Directory of Open Access Journals (Sweden)

    Andrea Turolla

    2013-01-01

    Full Text Available Background. Haptic robots allow the exploitation of known motor learning mechanisms, representing a valuable option for motor treatment after stroke. The aim of this feasibility multicentre study was to test the clinical efficacy of a haptic prototype, for the recovery of hand function after stroke. Methods. A prospective pilot clinical trial was planned on 15 consecutive patients enrolled in 3 rehabilitation centres in Italy. All the framework features of the haptic robot (e.g., control loop, external communication, and graphic rendering for virtual reality) were implemented into a real-time MATLAB/Simulink environment, controlling a five-bar linkage able to provide forces up to 20 [N] at the end effector, used for finger and hand rehabilitation therapies. Clinical (i.e., Fugl-Meyer upper extremity scale; nine-hole pegboard test) and kinematics (i.e., time; velocity; jerk metric; normalized jerk of standard movements) outcomes were assessed before and after treatment to detect changes in patients' motor performance. Reorganization of cortical activation was detected in one patient by fMRI. Results and Conclusions. All patients showed significant improvements in both clinical and kinematic outcomes. Additionally, fMRI results suggest that the proposed approach may promote a better cortical activation in the brain.

  7. Brain Machine Interfaces for Robotic Control in Space Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR will study the application of a brain machine interface (BMI) to enable crew to remotely operate and monitor robots from inside a flight vehicle, habitat...

  8. Robotic finger perturbation training improves finger postural steadiness and hand dexterity.

    Science.gov (United States)

    Yoshitake, Yasuhide; Ikeda, Atsutoshi; Shinohara, Minoru

    2018-02-01

    The purpose of the study was to understand the effect of robotic finger perturbation training on steadiness in finger posture and hand dexterity in healthy young adults. A mobile robotic finger training system was designed to have the functions of high-speed mechanical response, two degrees of freedom, and adjustable loading amplitude and direction. Healthy young adults were assigned to one of the three groups: random perturbation training (RPT), constant force training (CFT), and control. Subjects in RPT and CFT performed steady posture training with their index finger using the robot in different modes: random force in RPT and constant force in CFT. After the 2-week intervention period, fluctuations of the index finger posture decreased only in RPT during steady position-matching tasks with an inertial load. Purdue pegboard test score improved also in RPT only. The relative change in finger postural fluctuations was negatively correlated with the relative change in the number of completed pegs in the pegboard test in RPT. The results indicate that finger posture training with random mechanical perturbations of varying amplitudes and directions of force is effective in improving finger postural steadiness and hand dexterity in healthy young adults. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. iSpike: a spiking neural interface for the iCub robot

    International Nuclear Information System (INIS)

    Gamez, D; Fidjeland, A K; Lazdins, E

    2012-01-01

    This paper presents iSpike: a C++ library that interfaces between spiking neural network simulators and the iCub humanoid robot. It uses a biologically inspired approach to convert the robot’s sensory information into spikes that are passed to the neural network simulator, and it decodes output spikes from the network into motor signals that are sent to control the robot. Applications of iSpike range from embodied models of the brain to the development of intelligent robots using biologically inspired spiking neural networks. iSpike is an open source library that is available for free download under the terms of the GPL. (paper)
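    The encode/decode round trip described above can be illustrated with a rate-coding toy: a bounded sensor reading becomes a spike count per time window, and the decoder maps spike counts back to a motor signal. iSpike itself is a C++ library; the window length and scaling here are arbitrary assumptions, not its actual parameters.

```python
# Illustrative rate coding in the spirit of iSpike's sensory encoding and
# motor decoding. Window length (100 ms) and peak rate (100 Hz) are assumed
# values chosen so an input in [0, 1] maps to 0..10 evenly spaced spikes.

def encode(value, max_rate=100, window_ms=100):
    """Map a sensor value in [0, 1] to spike times (ms) within one window."""
    n_spikes = round(value * max_rate * window_ms / 1000.0)
    step = window_ms / n_spikes if n_spikes else 0
    return [i * step for i in range(n_spikes)]

def decode(spike_times, max_rate=100, window_ms=100):
    """Inverse mapping: spike count in the window back to a value in [0, 1]."""
    return len(spike_times) / (max_rate * window_ms / 1000.0)

spikes = encode(0.8)
print(len(spikes), decode(spikes))  # -> 8 0.8
```

    Real spiking interfaces typically use stochastic (e.g. Poisson) spike trains and finer codes; the deterministic version above only shows the shape of the conversion in each direction.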

  10. Java interface for asserting interactive telerobotic control

    Science.gov (United States)

    DePasquale, Peter; Lewis, John; Stein, Matthew R.

    1997-12-01

    Many current web-based telerobotic interfaces use HyperText Markup Language (HTML) forms to assert user control on a robot. While acceptable for some tasks, a Java interface can provide better client-server interaction. The Puma Paint project is a joint effort between the Department of Computing Sciences at Villanova University and the Department of Mechanical and Materials Engineering at Wilkes University. The project utilizes a Java applet to control a Unimation Puma 1760 robot during the task of painting on a canvas. The interface allows the user to control the paint strokes as well as the pressure of a brush on the canvas and how deep the brush is dipped into a paint jar. To provide immediate feedback, a virtual canvas models the effects of the controls as the artist paints. Live color video feedback is provided, allowing the user to view the actual results of the robot's motions. Unlike the step-at-a-time model of many web forms, the application permits the user to assert interactive control. The greater the complexity of the interaction between the robot and its environment, the greater the need for high quality information presentation to the user. The use of Java allows the sophistication of the user interface to be raised to the level required for satisfactory control. This paper describes the Puma Paint project, including the interface and communications model. It also examines the challenges of using the Internet as the medium of communications and the challenges of encoding free ranging motions for transmission from the client to the robot.

  11. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition, 3D-MoS (Three Dimensional Move on Sensing), based on visual servoing technology was designed, and the on-board hand-eye-dual-cameras robot system has been developed to reduce the risk of radiation exposure during decontamination processes by the filter press machine that solidifies and reduces the volume of contaminated soil. The features of 3D-MoS include: (1) both hand-eye cameras capture images of the target object near the intersection of the lenses' centerlines; (2) observing at this intersection lets both cameras see the target object almost at the center of their images; and (3) this reduces the effect of lens aberration and improves the accuracy of three-dimensional position detection. In this study, an accuracy validation test of interdigitation of the robot's hand into the filter cloth rod of the filter press was performed; this task is crucial for the robot to remove the contaminated cloth from the filter press machine automatically and to prevent workers from being exposed to radiation. The following results were derived: (1) the 3D-MoS-controlled robot could recognize the rod at arbitrary positions within the designated space, and all insertion tests were carried out successfully; and (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 [mm], with a standard deviation of 0.6 [mm] or less. (author)
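    The dual-camera geometry behind this record can be sketched with a standard construction: each camera defines a ray toward the target, and the 3D position is estimated at (or near) the rays' intersection, here the midpoint of the shortest segment between the two rays. This closed-form midpoint is textbook triangulation, not the 3D-MoS algorithm itself, and the camera positions are made-up numbers.

```python
# Midpoint of the closest points between two 3D rays p1 + t*d1 and p2 + s*d2.
# Standard two-view triangulation sketch; coordinates are illustrative.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def ray_midpoint(p1, d1, p2, d2):
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * v for p, v in zip(p1, d1))
    q2 = tuple(p + s * v for p, v in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras 200 mm apart, both sighting the point (0, 0, 500):
print(ray_midpoint((-100, 0, 0), (100, 0, 500), (100, 0, 0), (-100, 0, 500)))
```

    Sighting the target near the intersection of the optical axes, as the abstract notes, keeps it near the image centers where lens distortion is smallest, which is what makes this triangulation accurate in practice.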

  12. Mentoring console improves collaboration and teaching in surgical robotics.

    Science.gov (United States)

    Hanly, Eric J; Miller, Brian E; Kumar, Rajesh; Hasser, Christopher J; Coste-Maniere, Eve; Talamini, Mark A; Aurora, Alexander A; Schenkman, Noah S; Marohn, Michael R

    2006-10-01

    One of the most significant limitations of surgical robots has been their inability to allow multiple surgeons and surgeons-in-training to engage in collaborative control of robotic surgical instruments. We report the initial experience with a novel two-headed da Vinci surgical robot that has two collaborative modes: the "swap" mode allows two surgeons to simultaneously operate and actively swap control of the robot's four arms, and the "nudge" mode allows them to share control of two of the robot's arms. The utility of the mentoring console operating in its two collaborative modes was evaluated through a combination of dry laboratory exercises and animal laboratory surgery. The results from surgeon-resident collaborative performance of complex three-handed surgical tasks were compared to results from single-surgeon and single-resident performance. Statistical significance was determined using Student's t-test. Collaborative surgeon-resident swap control reduced the time to completion of complex three-handed surgical tasks by 25% compared to single-surgeon operation of a four-armed da Vinci. The nudge mode was particularly useful for guiding a resident's hands during crucially precise steps of an operation (such as proper placement of stitches). The da Vinci mentoring console greatly facilitates surgeon collaboration during robotic surgery and improves the performance of complex surgical tasks. The mentoring console has the potential to improve resident participation in surgical robotics cases, enhance resident education in surgical training programs engaged in surgical robotics, and improve patient safety during robotic surgery.

  13. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    Science.gov (United States)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.
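    The blacklisting step described above amounts to aggregating functional-test outcomes per site and flagging sites below a success threshold. The 80% cutoff, the input format, and the site names in this sketch are assumptions, not GangaRobot's actual policy or data model.

```python
# Toy aggregation of functional-test results into a site blacklist.
# Threshold and (site, passed) tuple format are illustrative assumptions.

def blacklist(results, min_success=0.8):
    """results: iterable of (site, passed) pairs -> sorted failing sites."""
    stats = {}
    for site, passed in results:
        ok, total = stats.get(site, (0, 0))
        stats[site] = (ok + bool(passed), total + 1)
    return sorted(site for site, (ok, total) in stats.items()
                  if ok / total < min_success)

runs = [("SiteA", True), ("SiteA", True), ("SiteX", False),
        ("SiteX", True), ("SiteX", False), ("SiteY", True)]
print(blacklist(runs))  # -> ['SiteX']
```

    A production system would also weight recent results more heavily and distinguish systematic from temporary failures, as the abstract notes, before exporting the status to SSB and SAM.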

  14. Design for a three-fingered hand. [robotic and prosthetic applications

    Science.gov (United States)

    Crossley, F. R. E.

    1977-01-01

    This paper describes the construction of a prototype mechanical hand or 'end effector' for use on a remotely controlled robot, but with possible application as a prosthetic device. An analysis of hand motions is reported, from which it is concluded that the two most important manipulations (apart from grasps) are to be able to pick up a tool and draw it into a nested grip against the palm, and to be able to hold a pistol-grip tool such as an electric drill and pull the trigger. One of our models was tested and found capable of both these operations.

  15. A bio-inspired design of a hand robotic exoskeleton for rehabilitation

    Science.gov (United States)

    Ong, Aira Patrice R.; Bugtai, Nilo T.

    2018-02-01

    This paper presents the methodology for the design of a five-degree-of-freedom wearable robotic exoskeleton for hand rehabilitation. The design is inspired by the biological structure and mechanism of the human hand. One of the distinct features of the device is the cable-driven actuation, which provides the flexion and extension motion. A prototype of the orthotic device has been developed to prove the model of the system and has been tested on a 3D-printed mechanical hand. The result showed that the proposed device was consistent with the requirements of bionics and was able to demonstrate the flexion and extension of the system.

  16. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of hand postures acquired through a video camera allow control of the robotic arm's activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined, and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for those activities where the movements of the robotic arm were not previously scheduled, making it easier to train the robot than by using a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  17. Robot services for elderly with cognitive impairment: testing usability of graphical user interfaces.

    Science.gov (United States)

    Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S

    2013-01-01

    Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for elderly people with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could have an influence on task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks with more errors than cognitively healthy elderly participants, while younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggested that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.

  18. Space station automation and robotics study. Operator-systems interface

    Science.gov (United States)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  19. Design of Piano -playing Robotic Hand

    OpenAIRE

    Lin Jen-Chang; Hsin-Cheng Li; Kuo-Cheng Huang; Shu-Wei Lin

    2013-01-01

    Unlike the market slowdown of industrial robots, service & entertainment robots have been highly regarded by most robotics research and market research agencies. In this study we developed a music-playing robot (which can also work as a service robot) for public performance. The research is mainly focused on the mechanical and electrical control of the piano-playing robot, the exploration of correlations among music theory, rhythm and piano keys, and eventually the research on playing skill of...

  20. Electromyographic Grasp Recognition for a Five Fingered Robotic Hand

    Directory of Open Access Journals (Sweden)

    Nayan M. Kakoty

    2012-09-01

    Full Text Available This paper presents classification of grasp types based on surface electromyographic signals. Classification is through a radial basis function kernel support vector machine using the sum of wavelet decomposition coefficients of the EMG signals. In a study involving six subjects, we achieved an average recognition rate of 86%. The electromyographic grasp recognition together with an 8-bit microcontroller has been employed to control a five-fingered robotic hand to emulate six grasp types used in 70% of daily living activities.
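    The feature described in this record, sums of wavelet decomposition coefficients of an EMG window, can be sketched with a simple multi-level Haar-style transform. The wavelet family, decomposition depth, and window below are assumed stand-ins; the record does not specify them, and the actual classifier is an RBF-kernel SVM trained on such features.

```python
# Sketch of wavelet-sum features for an EMG window, using an averaging
# Haar-style transform (not necessarily the paper's wavelet or depth).

def haar_dwt(signal):
    """One level of an averaging Haar transform: (approximation, detail)."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def wavelet_sum_features(signal, levels=3):
    """Sum of detail coefficients per level, plus the final approximation sum."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)
        feats.append(sum(detail))
    feats.append(sum(signal))
    return feats

emg_window = [0.0, 1.0, 0.0, -1.0, 2.0, 0.0, -2.0, 0.0]
print(wavelet_sum_features(emg_window))  # -> [0.0, 1.5, 0.0, 0.0]
```

    Collapsing each sub-band to a single sum gives a very short feature vector per channel, which is what makes the downstream SVM cheap enough to drive a hand from an 8-bit microcontroller's host-side pipeline.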

  1. A Methodology for the Design of Robotic Hands with Multiple Fingers

    Directory of Open Access Journals (Sweden)

    Jorge Eduardo Parada Puig

    2008-11-01

    Full Text Available This paper presents a methodology that has been applied to the design process of anthropomorphic hands with multiple fingers. Biomechanical characteristics of the human hand have been analysed, and ergonomic and anthropometric aspects have been used as fundamental references for obtaining grasping mechanisms. A kinematic analysis has been proposed to define the requirements for designing grasping functions. The selection of materials and actuators has been discussed too. This topic has been based on previous experiences with prototypes that have been developed at the Laboratory of Robotics and Mechatronics (LARM) of the University of Cassino. An example of the application of the proposed method has been presented for the design of a first prototype of the LARM Hand.

  2. Design and Development of a Hand Exoskeleton Robot for Active and Passive Rehabilitation

    Directory of Open Access Journals (Sweden)

    Oscar Sandoval-Gonzalez

    2016-04-01

    Full Text Available The present work, which describes the mechatronic design and development of a novel rehabilitation robotic exoskeleton hand, aims to provide a solution for neuromusculoskeletal rehabilitation. The exoskeleton offers a full range of motion for all hand phalanges and was specifically designed to carry out position and force-position control for passive and active rehabilitation routines. System integration and preliminary clinical tests are also presented.

  3. Robotic microlaryngeal phonosurgery: Testing of a "steady-hand" microsurgery platform.

    Science.gov (United States)

    Akst, Lee M; Olds, Kevin C; Balicki, Marcin; Chalasani, Preetham; Taylor, Russell H

    2018-01-01

    To evaluate gains in microlaryngeal precision achieved by using a novel robotic "steady hand" microsurgery platform in performing simulated phonosurgical tasks. Crossover comparative study of surgical performance and descriptive analysis of surgeon feedback. A novel robotic ear, nose, and throat microsurgery system (REMS) was tested in simulated phonosurgery. Participants navigated a 0.4-mm-wide microlaryngeal needle through spirals of varying widths, both with and without robotic assistance. Fail time (time the needle contacted spiral edges) was measured, and statistical comparison was performed. Participants were surveyed to provide subjective feedback on the REMS. Nine participants performed the task at three spiral widths, yielding 27 paired testing conditions. In 24 of 27 conditions, robot-assisted performance was better than unassisted; five trials were errorless, all achieved with the robot. Paired analysis of all conditions revealed fail time of 0.769 ± 0.568 seconds manually, improving to 0.284 ± 0.584 seconds with the robot (P = .003). Analysis of individual spiral sizes showed statistically better performance with the REMS at spiral widths of 2 mm (0.156 ± 0.226 seconds vs. 0.549 ± 0.545 seconds, P = .019) and 1.5 mm (0.075 ± 0.099 seconds vs. 0.890 ± 0.518 seconds, P = .002). At 1.2 mm, all nine participants together showed similar performance with and without robotic assistance (0.621 ± 0.923 seconds vs. 0.868 ± 0.634 seconds, P = .52), though subgroup analysis of five surgeons most familiar with microlaryngoscopy showed statistically better performance with the robot (0.204 ± 0.164 seconds vs. 0.664 ± 0.354 seconds, P = .036). The REMS is a novel platform with potential applications in microlaryngeal phonosurgery. Further feasibility studies and preclinical testing should be pursued as a bridge to eventual clinical use. NA. Laryngoscope, 128:126-132, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  4. "I Want My Robot to Look for Food": Comparing Kindergartner's Programming Comprehension Using Tangible, Graphic, and Hybrid User Interfaces

    Science.gov (United States)

    Strawhacker, Amanda; Bers, Marina U.

    2015-01-01

    In recent years, educational robotics has become an increasingly popular research area. However, limited studies have focused on differentiated learning outcomes based on type of programming interface. This study aims to explore how successfully young children master foundational programming concepts based on the robotics user interface (tangible,…

  5. Modeling and Design of an Electro-Rheological Fluid Based Haptic System for Tele-Operation of Space Robots

    Science.gov (United States)

    Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph

    2000-01-01

    For many years, the robotic community sought to develop robots that can eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that humans can perform significantly better but that, due to associated hazards, distance, physical limitations and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually allow only a limited operator workspace. In this paper a novel haptic interface is presented that enables human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), whose viscosity changes under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user through this ERF device, in which a change in the system viscosity occurs in proportion to the force to be transmitted. In this paper, we will present the results of our modeling, simulation, and initial testing of such an

  6. Gesture Commanding of a Robot with EVA Gloves

    Data.gov (United States)

    National Aeronautics and Space Administration — Gesture commands allow a human operator to directly interact with a robot without the use of intermediary hand controllers. There are two main types of hand gesture...

  7. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    Directory of Open Access Journals (Sweden)

    Parth Gargava

    2017-08-01

    Full Text Available A Brain Computer Interface (BCI) is developed to navigate a micro-controller based robot using Emotiv sensors. The BCI system has a pipeline of five stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It is intended to serve as a prototype mobility aid for neurological patients who are unable to control their muscular movements. All stages of the pipeline are designed to process bodily actions like eye blinks into navigation commands for the robot. The prototype relies on feature learning and classification techniques using a support vector machine. The suggested pipeline ensures successful navigation of the robot in four directions in real time with an accuracy of 93 percent.
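    The classification stage of such a pipeline can be sketched as follows. The paper uses a support vector machine; a nearest-centroid classifier stands in here as a minimal dependency-free sketch, and the feature values and command labels are invented for illustration:

```python
def nearest_centroid_train(samples):
    """samples: {command label: list of feature vectors}."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n
                            for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, x):
    """Return the command whose centroid is closest to feature vector x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], x))

# Toy features, e.g. (blink rate, mean band power) per command.
train = {
    "forward": [(0.1, 0.2), (0.2, 0.1)],
    "left":    [(0.9, 0.8), (0.8, 0.9)],
}
model = nearest_centroid_train(train)
cmd = classify(model, (0.85, 0.9))
```

    In the real system an SVM (e.g. scikit-learn's `SVC`) would replace the centroid rule, but the train/predict structure of the stage is the same.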

  8. In Silico Investigation of a Surgical Interface for Remote Control of Modular Miniature Robots in Minimally Invasive Surgery

    Directory of Open Access Journals (Sweden)

    Apollon Zygomalas

    2014-01-01

    Full Text Available Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a large-scale representation of the respective modular mini-robot, and is therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures.

  9. A novel computerized surgeon-machine interface for robot-assisted laser phonomicrosurgery.

    Science.gov (United States)

    Mattos, Leonardo S; Deshpande, Nikhil; Barresi, Giacinto; Guastini, Luca; Peretti, Giorgio

    2014-08-01

    To introduce a novel computerized surgical system for improved usability, intuitiveness, accuracy, and controllability in robot-assisted laser phonomicrosurgery. Pilot technology assessment. The novel system was developed involving a newly designed motorized laser micromanipulator, a touch-screen display, and a graphics stylus. The system allows the control of a CO2 laser through interaction between the stylus and the live video of the surgical area. This empowers the stylus with the ability to have actual effect on the surgical site. Surgical enhancements afforded by this system were established through a pilot technology assessment using randomized trials comparing its performance with a state-of-the-art laser microsurgery system. Resident surgeons and medical students were chosen as subjects in performing sets of trajectory-following exercises. Image processing-based techniques were used for an objective performance assessment. A System Usability Scale-based questionnaire was used for the qualitative assessment. The computerized interface demonstrated superiority in usability, accuracy, and controllability over the state-of-the-art system. Significant ease of use and learning experienced by the subjects were demonstrated by the usability score assigned to the two compared interfaces: computerized interface = 83.96% versus state-of-the-art = 68.02%. The objective analysis showed a significant enhancement in accuracy and controllability: computerized interface = 90.02% versus state-of-the-art = 75.59%. The novel system significantly enhances the accuracy, usability, and controllability in laser phonomicrosurgery. The design provides an opportunity to improve the ergonomics and safety of current surgical setups. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
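    The usability scores quoted above come from a System Usability Scale questionnaire. Assuming the standard 10-item SUS form, the scoring rule can be sketched as:

```python
def sus_score(responses):
    """System Usability Scale: 10 Likert items, each rated 1-5.
    Odd items (positively worded) contribute (r - 1); even items
    (negatively worded) contribute (5 - r); the sum is scaled to 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

    For example, answering 5 on every odd item and 1 on every even item yields the maximum score of 100.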

  10. Monitor, the prelude to robotics

    International Nuclear Information System (INIS)

    Grisham, D.L.; Lambert, J.E.

    1985-01-01

    Robots and teleoperator systems will play an important role in future energy systems regardless of the particular energy source. Present remote handling systems were developed for radioactive environments; however, future sources, such as fusion reactors, solar concentrators, and wind generators will also produce environments too hostile for practical "hands on" maintenance. Teleoperator systems developed at the Clinton P. Anderson Meson Physics Facility (LAMPF) are a logical prelude to performing remote operations with robots. The "Monitor" remote handling systems represented state-of-the-art mechanical hardware and operating techniques - the only elements missing are suitable computer and software interfaces

  11. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Full Text Available Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques with a novel combination of the fast level set method and a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and a safe manipulation capability that preserves the physical integrity of the object.
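    The dynamic time warping step, which compares contour signatures of different lengths, can be sketched as follows; the contour values are invented for illustration:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    via the classic O(n*m) cumulative-cost recurrence."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same contour signature sampled at two different lengths still
# matches closely, which is the property the tracker relies on.
short = [0.0, 1.0, 2.0, 1.0, 0.0]
long_ = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0]
```

    This length-invariance is why DTW suits a contour whose sampled length varies as the object deforms.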

  12. Individual finger synchronized robot-assisted hand rehabilitation in subacute to chronic stroke: a prospective randomized clinical trial of efficacy.

    Science.gov (United States)

    Hwang, Chang Ho; Seong, Jin Wan; Son, Dae-Sik

    2012-08-01

    To evaluate individual finger synchronized robot-assisted hand rehabilitation in stroke patients. Prospective parallel group randomized controlled clinical trial. The study recruited patients who were ≥18 years old, more than three months post stroke, showed limited index finger movement and had weakened and impaired hand function. Patients with severe sensory loss, spasticity, apraxia, aphasia, disabling hand disease, impaired consciousness or depression were excluded. Patients received either four weeks (20 sessions) of active robot-assisted intervention (the FTI (full-term intervention) group, 9 patients) or two weeks (10 sessions) of early passive therapy followed by two weeks (10 sessions) of active robot-assisted intervention (the HTI (half-term intervention) group, 8 patients). Patients underwent arm function assessments prior to therapy (baseline), and at 2, 4 and 8 weeks after starting therapy. Compared to baseline, both the FTI and HTI groups showed improved results for the Jebsen Taylor test, the wrist and hand subportion of the Fugl-Meyer arm motor scale, active movement of the 2nd metacarpophalangeal joint, grasping, and pinching power (P < 0.05). After eight weeks the FTI group showed greater improvement than the HTI group in the Jebsen Taylor test (vs. 46.4 ± 37.4) and the wrist and hand subportion of the Fugl-Meyer arm motor scale (4.3 ± 1.9 vs. 3.4 ± 2.5). A four-week rehabilitation using a novel robot that provides individual finger synchronization resulted in a dose-dependent improvement in hand function in subacute to chronic stroke patients.

  13. Working hard to make a simple definition of synergies. Comment on: "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" by Marco Santello et al.

    Science.gov (United States)

    Alessandro, Cristiano; Oliveira Barroso, Filipe; Tresch, Matthew

    2016-07-01

    The paper "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" [1] presents a comprehensive review of the work carried out as part of the EU-funded project "The Hand Embodied". The work uses the concept of "synergy" to study the neuromuscular control of the human hand and to design novel robotics systems. The project has been very productive and has made important contributions. We are therefore confident that it will lead to further advancements and experiments in the future.

  14. Robotic surgery in gynecology

    Directory of Open Access Journals (Sweden)

    Jean eBouquet De Jolinière

    2016-05-01

    Full Text Available Abstract Minimally invasive surgery (MIS) can be considered the greatest surgical innovation of the past thirty years. It revolutionized surgical practice with well-proven advantages over traditional open surgery: reduced surgical trauma and incision-related complications, such as surgical-site infections, postoperative pain and hernia, reduced hospital stay, and improved cosmetic outcome. Nonetheless, proficiency in MIS can be technically challenging, as conventional laparoscopy is associated with several limitations, such as the two-dimensional (2D) monitor's reduction of depth perception, camera instability, limited range of motion and steep learning curves. The surgeon has low force feedback, which allows simple gestures, respect for tissues and more effective treatment of complications. Since the 1980s several computer science and robotics projects have been set up to overcome the difficulties encountered with conventional laparoscopy, to augment the surgeon's skills, achieve accuracy and high precision during complex surgery and facilitate the widespread adoption of MIS. Surgical instruments are guided by haptic interfaces that replicate and filter hand movements. Robotically assisted technology offers advantages that include improved three-dimensional stereoscopic vision, wristed instruments that improve dexterity, and tremor-canceling software that improves surgical precision.

  15. Brain-Computer Interface-based robotic end effector system for wrist and hand rehabilitation: results of a three-armed randomized controlled trial for chronic stroke

    Directory of Open Access Journals (Sweden)

    Kai Keng eAng

    2014-07-01

    Full Text Available The objective of this study was to investigate the efficacy of an Electroencephalography (EEG)-based Motor Imagery (MI) Brain-Computer Interface (BCI) coupled with a Haptic Knob (HK) robot for arm rehabilitation in stroke patients. In this three-arm, single-blind, randomized controlled trial; 21 chronic hemiplegic stroke patients (Fugl-Meyer Motor Assessment (FMMA) score 10-50), recruited after pre-screening for MI BCI ability, were randomly allocated to BCI-HK, HK or Standard Arm Therapy (SAT) groups. All groups received 18 sessions of intervention over 6 weeks, 3 sessions per week, 90 minutes per session. The BCI-HK group received 1 hour of BCI coupled with HK intervention, and the HK group received 1 hour of HK intervention per session. Both BCI-HK and HK groups received 120 trials of robot-assisted hand grasping and knob manipulation followed by 30 minutes of therapist-assisted arm mobilization. The SAT group received 1.5 hours of therapist-assisted arm mobilization and forearm pronation-supination movements incorporating wrist control and grasp-release functions. In all, 14 males, 7 females, mean age 54.2 years, mean stroke duration 385.1 days, with baseline FMMA score 27.0 were recruited. The primary outcome measure was upper-extremity FMMA scores measured mid-intervention at week 3, end-intervention at week 6, and follow-up at weeks 12 and 24. Seven, 8 and 7 subjects underwent BCI-HK, HK and SAT interventions respectively. FMMA score improved in all groups, but no intergroup differences were found at any time points. Significantly larger motor gains were observed in the BCI-HK group compared to the SAT group at weeks 3, 12 and 24, but motor gains in the HK group did not differ from the SAT group at any time point. In conclusion, BCI-HK is effective, safe, and may have the potential for enhancing motor recovery in chronic stroke when combined with therapist-assisted arm mobilization.

  16. Brain-computer interface-based robotic end effector system for wrist and hand rehabilitation: results of a three-armed randomized controlled trial for chronic stroke.

    Science.gov (United States)

    Ang, Kai Keng; Guan, Cuntai; Phua, Kok Soon; Wang, Chuanchu; Zhou, Longjiang; Tang, Ka Yin; Ephraim Joseph, Gopal J; Kuah, Christopher Wee Keong; Chua, Karen Sui Geok

    2014-01-01

    The objective of this study was to investigate the efficacy of an Electroencephalography (EEG)-based Motor Imagery (MI) Brain-Computer Interface (BCI) coupled with a Haptic Knob (HK) robot for arm rehabilitation in stroke patients. In this three-arm, single-blind, randomized controlled trial; 21 chronic hemiplegic stroke patients (Fugl-Meyer Motor Assessment (FMMA) score 10-50), recruited after pre-screening for MI BCI ability, were randomly allocated to BCI-HK, HK or Standard Arm Therapy (SAT) groups. All groups received 18 sessions of intervention over 6 weeks, 3 sessions per week, 90 min per session. The BCI-HK group received 1 h of BCI coupled with HK intervention, and the HK group received 1 h of HK intervention per session. Both BCI-HK and HK groups received 120 trials of robot-assisted hand grasping and knob manipulation followed by 30 min of therapist-assisted arm mobilization. The SAT group received 1.5 h of therapist-assisted arm mobilization and forearm pronation-supination movements incorporating wrist control and grasp-release functions. In all, 14 males, 7 females, mean age 54.2 years, mean stroke duration 385.1 days, with baseline FMMA score 27.0 were recruited. The primary outcome measure was upper extremity FMMA scores measured mid-intervention at week 3, end-intervention at week 6, and follow-up at weeks 12 and 24. Seven, 8 and 7 subjects underwent BCI-HK, HK and SAT interventions respectively. FMMA score improved in all groups, but no intergroup differences were found at any time points. Significantly larger motor gains were observed in the BCI-HK group compared to the SAT group at weeks 3, 12, and 24, but motor gains in the HK group did not differ from the SAT group at any time point. In conclusion, BCI-HK is effective, safe, and may have the potential for enhancing motor recovery in chronic stroke when combined with therapist-assisted arm mobilization.

  17. Advanced robotics for medical rehabilitation current state of the art and recent advances

    CERN Document Server

    Xie, Shane

    2016-01-01

    Focussing on the key technologies in developing robots for a wide range of medical rehabilitation activities – which will include robotics basics, modelling and control, biomechanics modelling, rehabilitation strategies, robot assistance, clinical setup/implementation as well as neural and muscular interfaces for rehabilitation robot control – this book is split into two parts; a review of the current state of the art, and recent advances in robotics for medical rehabilitation. Both parts will include five sections for the five key areas in rehabilitation robotics: (i) the upper limb; (ii) lower limb for gait rehabilitation; (iii) hand, finger and wrist; (iv) ankle for strains and sprains; and (v) the use of EEG and EMG to create interfaces between the neurological and muscular functions of the patients and the rehabilitation robots. Each chapter provides a description of the design of the device, the control system used, and the implementation and testing to show how it fulfils the needs of that specific ...

  18. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    Science.gov (United States)

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  19. Controlling the autonomy of a reconnaissance robot

    Science.gov (United States)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual controls, obstacle avoidance, wall and corridor following, waypoint and planned travelling. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
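    The dynamic swapping between control modes described above can be sketched as a small state machine. The class and safeguard rule below are a hypothetical reduction for illustration, not the HARPIC architecture itself:

```python
class ModeController:
    """Toy mode switcher: the operator requests a mode, but a nearby
    obstacle forces the safeguarded mode regardless of the request."""

    MODES = ("manual", "safeguarded", "behavior-based")

    def __init__(self):
        self.mode = "manual"

    def request(self, mode, obstacle_near=False):
        if mode not in self.MODES:
            raise ValueError(mode)
        self.mode = "safeguarded" if obstacle_near else mode
        return self.mode
```

    The real architecture also runs map building and localization concurrently; here only the mode-arbitration idea is shown.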

  20. Humans can integrate force feedback to toes in their sensorimotor control of a robotic hand.

    Science.gov (United States)

    Panarese, Alessandro; Edin, Benoni B; Vecchi, Fabrizio; Carrozza, Maria C; Johansson, Roland S

    2009-12-01

    Tactile sensory feedback is essential for dexterous object manipulation. Users of hand myoelectric prostheses without tactile feedback must depend essentially on vision to control their device. Indeed, improved tactile feedback is one of their main priorities. Previous research has provided evidence that conveying tactile feedback can improve prostheses control, although additional effort is required to solve problems related to pattern recognition learning, unpleasant sensations, sensory adaptation, and low spatiotemporal resolution. Still, these studies have mainly focused on providing stimulation to hairy skin regions close to the amputation site, i.e., usually to the upper arm. Here, we explored the possibility to provide tactile feedback to the glabrous skin of toes, which have mechanical and neurophysiological properties similar to the fingertips. We explored this paradigm in a grasp-and-lift task, in which healthy participants controlled two opposing digits of a robotic hand by changing the spacing of their index finger and thumb. The normal forces applied by the robotic fingertips to a test object were fed back to the right big and second toe. We show that within a few lifting trials, all the participants incorporated the force feedback received by the foot in their sensorimotor control of the robotic hand.

  1. Design and implementation of a dexterous anthropomorphic robotic typing (DART) hand

    International Nuclear Information System (INIS)

    Thayer, Nicholas; Priya, Shashank

    2011-01-01

    This paper focuses on the design and implementation of a biomimetic dexterous humanoid hand. Several design rules are proposed to retain human form and functionality in a robotic hand while overcoming the difficulty of actuation within a confined geometry. Size and weight have been optimized in order to achieve human-like performance with the prime objective of typing on a computer keyboard. Each finger has four joints and three degrees of freedom (DOF) while the thumb has an additional degree of freedom necessary for manipulating small objects. The hand consists of 16 servo motors dedicated to finger motion and three motors for wrist motion. A closed-loop kinematic control scheme utilizing the Denavit–Hartenberg convention for spatial joint positioning was implemented. Servo motors housed in the forearm act as an origin for wires that travel to their insertion points in the hand. The dexterity of the DART hand was measured by quantifying functionality and typing speed on a standard keyboard. The typing speed of a single DART hand was found to be 20 words min⁻¹. In comparison, the average human has a typing speed of 33 words min⁻¹ with two hands.
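    A kinematic scheme built on the Denavit–Hartenberg convention composes one 4x4 transform per joint. A minimal sketch follows, with made-up link parameters rather than the DART hand's actual geometry:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4, row-major):
    theta/d are the joint angle and offset, a/alpha the link length
    and twist."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two-joint finger with invented link lengths: the fingertip pose is the
# product of the per-joint DH transforms.
T01 = dh_matrix(math.pi / 2, 0.0, 0.04, 0.0)
T12 = dh_matrix(math.pi / 4, 0.0, 0.03, 0.0)
T02 = mat_mul(T01, T12)
```

    The last column of the chained product gives the fingertip position used by the closed-loop controller to place a finger over a key.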

  2. Robotic Eye-in-hand Calibration in an Uncalibrated Environment

    Directory of Open Access Journals (Sweden)

    Sebastian Van Delden

    2008-12-01

    Full Text Available The optical flow of high interest points in images of an uncalibrated scene is used to recover the camera orientation of an eye-in-hand robotic manipulator. The system is completely automated, iteratively performing a sequence of rotations and translations until the camera frame is aligned with the manipulator's world frame. The manipulator must be able to translate and rotate its end-effector with respect to its world frame. The system is implemented and being tested on a Stäubli RX60 manipulator using an off-the-shelf Logitech USB camera.
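    The recovery of camera orientation from optical flow can be sketched for the simplest case: a pure end-effector translation along the world x-axis, whose mean image flow reveals the camera's in-plane rotation offset. The flow values are invented for illustration:

```python
import math

def camera_offset_angle(flow):
    """Angle between a commanded world-frame translation (along +x)
    and the mean image flow of the tracked interest points.
    flow: list of (dx, dy) displacements in image coordinates."""
    fx = sum(v[0] for v in flow) / len(flow)
    fy = sum(v[1] for v in flow) / len(flow)
    return math.atan2(fy, fx)

# If a pure +x end-effector translation produces diagonal image flow,
# the camera frame is rotated by that angle about the optical axis,
# so the calibration loop rotates the camera by -angle and re-measures.
flow = [(0.7, 0.7), (0.72, 0.69), (0.69, 0.71)]
angle = camera_offset_angle(flow)
```

    The iterative procedure in the paper repeats such translate-measure-rotate steps until the residual angle falls below a threshold.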

  3. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview.

    Science.gov (United States)

    Atzori, Manfredo; Müller, Henning

    2015-01-01

    Hand amputation can dramatically affect the capabilities of a person. Cortical reorganization occurs in the brain, but the motor and somatosensorial cortex can interact with the remnant muscles of the missing hand even many years after the amputation, leading to the possibility to restore the capabilities of hand amputees through myoelectric prostheses. Myoelectric hand prostheses with many degrees of freedom are commercially available and recent advances in rehabilitation robotics suggest that their natural control can be performed in real life. The first commercial products exploiting pattern recognition to recognize the movements have recently been released, however the most common control systems are still usually unnatural and must be learned through long training. Dexterous and naturally controlled robotic prostheses can become reality in the everyday life of amputees but the path still requires many steps. This mini-review aims to improve the situation by giving an overview of the advancements in the commercial and scientific domains in order to outline the current and future chances in this field and to foster the integration between market and scientific research.

  4. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview

    Directory of Open Access Journals (Sweden)

    Manfredo eAtzori

    2015-11-01

    Full Text Available Hand amputation can dramatically affect the capabilities of a person. Cortical reorganization occurs in the brain, but the motor and somatosensorial cortex can interact with the remnant muscles of the missing hand even many years after the amputation, leading to the possibility to restore the capabilities of hand amputees through myoelectric prostheses. Myoelectric hand prostheses with many degrees of freedom are commercially available and recent advances in rehabilitation robotics suggest that their natural control can be performed in real life. The first commercial products exploiting pattern recognition to recognize the movements have recently been released, however the most common control systems are still usually unnatural and must be learned through long training. Dexterous and naturally controlled robotic prostheses can become reality in the everyday life of amputees but the path still requires many steps. This mini-review aims to improve the situation by giving an overview of the advancements in the commercial and scientific domains in order to outline the current and future chances in this field and to foster the integration between market and scientific research.

  5. The SmartHand transradial prosthesis

    Directory of Open Access Journals (Sweden)

    Carrozza Maria Chiara

    2011-05-01

    Full Text Available Abstract Background Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand. Methods SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. Results SmartHand is a five fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count and independently point the index. The weight (530 g and speed (closing time: 1.5 seconds are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects. Conclusions Due to its unique embedded features and human-size, the SmartHand holds the promise to be experimentally fitted on transradial

  6. Effects of prosthesis use on the capability to control myoelectric robotic prosthetic hands.

    Science.gov (United States)

    Atzori, Manfredo; Hager, Anne-Gabrielle Mittaz; Elsig, Simone; Giatsidis, Giorgio; Bassetto, Franco; Muller, Henning

    2015-08-01

    The natural control of robotic prosthetic hands with non-invasive techniques is still a challenge: myoelectric prostheses currently give some control capabilities; the application of pattern recognition techniques is promising and has recently started to be applied in practice, but many questions in the field are still open. In particular, the effects of clinical factors on movement classification accuracy and on the capability to control myoelectric prosthetic hands are analyzed in very few studies. The effect of regularly using prostheses on movement classification accuracy has been studied previously, showing differences between users of myoelectric and cosmetic prostheses. In this paper we compare users of myoelectric and body-powered prostheses and intact subjects. 36 machine-learning methods are applied to 6 amputees and 40 intact subjects performing 40 movements. Then, statistical analyses are performed in order to highlight significant differences between the groups of subjects. The statistical analyses do not show significant differences between the two groups of amputees, while significant differences are obtained between amputees and intact subjects. These results constitute new information in the field and suggest new interpretations of previous hypotheses, thus adding valuable information towards the natural control of robotic prosthetic hands.
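    A between-group comparison of classification accuracies can be sketched with a two-sample test. Welch's t statistic is used here as a generic stand-in for the paper's unspecified statistical analyses, and the accuracy samples are invented:

```python
import math

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Illustrative per-subject classification accuracies, not the paper's data.
amputees = [0.55, 0.60, 0.58, 0.52, 0.57, 0.61]
intact = [0.75, 0.80, 0.78, 0.72, 0.77]
t = welch_t(intact, amputees)
```

    A large |t| for intact vs. amputee samples, and a small |t| between the two amputee groups, is the shape of the result the abstract reports.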

  7. Experimental setup for evaluating an adaptive user interface for teleoperation control

    Science.gov (United States)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interactions with a machine is the control interface, which can single-handedly define user satisfaction and the efficiency of performing a task. This paper elaborates on the implementation of an experimental setup to study an adaptive algorithm that can help the user better teleoperate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results that were used to validate the adaptive interface to a differential drive robot from two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
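    The genetic-algorithm search over mapping parameters can be sketched as a toy example: a small population of candidate gain vectors evolves toward a target mapping. The target gains, fitness function, and GA settings below are assumptions for illustration, not the paper's actual encoding.

    ```python
    import random

    random.seed(1)
    TARGET = [1.0, -0.5, 0.3, 0.8]  # hypothetical "intuitive" mapping gains

    def fitness(genome):
        # Negative squared error to the target mapping (to be maximized).
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0, sigma) for g in genome]

    def crossover(a, b):
        # Uniform crossover: pick each gene from either parent.
        return [random.choice(pair) for pair in zip(a, b)]

    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(30)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]  # elitist selection
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(20)
        ]

    best = max(pop, key=fitness)
    print("best gains:", [round(g, 2) for g in best])
    ```

    In the actual system, fitness would be derived from user performance during teleoperation rather than from a known target.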

  8. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    Full Text Available We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and a compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, the camera sensor, and wireless communication have been implemented. Progressively increasing the error levels for an application program allows for testing and improving its robustness and fault-tolerance.
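    The progressive-error-level idea can be sketched in a few lines: a simulated infrared distance sensor with an adjustable error model (Gaussian noise plus occasional dropouts) stress-tests a naive stop-before-wall controller. The error magnitudes and the median-of-three filtering are illustrative assumptions, not taken from the EyeBot simulator.

    ```python
    import random

    random.seed(2)

    def noisy_reading(true_dist, error_level):
        # Dropout (out-of-range reading) and Gaussian noise, both scaled
        # by the error level, loosely mimicking a real IR sensor.
        if random.random() < 0.05 * error_level:
            return 999.0
        return true_dist + random.gauss(0, 2.0 * error_level)

    def stops_in_time(error_level, threshold=20.0):
        dist = 100.0
        while dist > 0:
            # Median of 3 readings: a simple fault-tolerance measure.
            r = sorted(noisy_reading(dist, error_level) for _ in range(3))[1]
            if r < threshold:
                return True   # stopped before the wall
            dist -= 5.0
        return False          # drove into the wall

    rates = {}
    for level in (0, 1, 2, 4):
        rates[level] = sum(stops_in_time(level) for _ in range(200)) / 200
        print(f"error level {level}: stop success rate {rates[level]:.2f}")
    ```

    Raising the error level until the controller starts failing reveals how much filtering or redundancy the application program needs.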

  9. A cargo-sorting DNA robot.

    Science.gov (United States)

    Thubagere, Anupama J; Li, Wei; Johnson, Robert F; Chen, Zibo; Doroudi, Shayan; Lee, Yae Lim; Izatt, Gregory; Wittman, Sarah; Srinivas, Niranjan; Woods, Damien; Winfree, Erik; Qian, Lulu

    2017-09-15

    Two critical challenges in the design and synthesis of molecular robots are modularity and algorithm simplicity. We demonstrate three modular building blocks for a DNA robot that performs cargo sorting at the molecular level. A simple algorithm encoding recognition between cargos and their destinations allows for a simple robot design: a single-stranded DNA with one leg and two foot domains for walking, and one arm and one hand domain for picking up and dropping off cargos. The robot explores a two-dimensional testing ground on the surface of DNA origami, picks up multiple cargos of two types that are initially at unordered locations, and delivers them to specified destinations until all molecules are sorted into two distinct piles. The robot is designed to perform a random walk without any energy supply. Exploiting this feature, a single robot can repeatedly sort multiple cargos. Localization on DNA origami allows for distinct cargo-sorting tasks to take place simultaneously in one test tube or for multiple robots to collectively perform the same task. Copyright © 2017, American Association for the Advancement of Science.
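    The cargo-sorting behaviour described here (random walk, pick up on contact, drop off at the matching destination) can be mimicked by a toy grid simulation. The grid size, cargo count, and start position are invented; the real robot operates on DNA origami with strand-displacement chemistry, not on a lattice.

    ```python
    import random

    random.seed(3)
    SIZE = 8
    dests = {"A": (0, 0), "B": (SIZE - 1, SIZE - 1)}
    # Random cargo positions and types (collisions simply yield fewer cargos).
    cargos = {(random.randrange(SIZE), random.randrange(SIZE)): random.choice("AB")
              for _ in range(6)}
    cargos = {p: t for p, t in cargos.items() if p not in dests.values()}
    n_cargo = len(cargos)
    piles = {"A": 0, "B": 0}
    pos, carrying = (SIZE // 2, SIZE // 2), None

    steps = 0
    while cargos or carrying:
        steps += 1
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        pos = (min(max(pos[0] + dx, 0), SIZE - 1),
               min(max(pos[1] + dy, 0), SIZE - 1))
        if carrying is None and pos in cargos:
            carrying = cargos.pop(pos)          # pick up on contact
        elif carrying is not None and pos == dests[carrying]:
            piles[carrying] += 1                # drop off at matching pile
            carrying = None

    print(f"sorted {sum(piles.values())} cargos into {piles} after {steps} steps")
    ```

    Because an unbiased walk on a finite grid eventually visits every site, the sort always completes without any directed energy supply, echoing the paper's key design point.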

  10. An integrated movement capture and control platform applied towards autonomous movements of surgical robots.

    Science.gov (United States)

    Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D

    2009-01-01

    Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.

  11. Graphical user interface for a robotic workstation in a surgical environment.

    Science.gov (United States)

    Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A

    2016-08-01

    Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motion endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, it is necessary to provide the ability to change control parameters live in the system as well as the ability to connect additional ancillary systems. Additionally, the surgeon should always be able to get an overview of the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.

  12. Design of a 3-DOF Parallel Hand-Controller

    Directory of Open Access Journals (Sweden)

    Chengcheng Zhu

    2017-01-01

    Full Text Available Hand-controllers, as human-machine-interface (HMI) devices, can transfer the position information of the operator's hands into the virtual environment to control the target objects or a real robot directly. At the same time, haptic information from the virtual environment or from the sensors on the real robot can be displayed to the operator. Feedback forces help the operator perceive haptic information more faithfully. A parallel hand-controller is designed in this paper. It is simplified from the traditional delta haptic device: the swing arms of conventional delta devices are replaced with slider rail modules, and the base consists of two hexagons and several links. Because linear sliding modules are used instead of swing arms, arc movement is replaced by linear movement; this simplified motion reduces the computation required for the forward position solution and the inverse force solution. The kinematics, statics, and dynamics are analyzed in this paper. Moreover, two demonstration applications are developed to verify the performance of the designed hand-controller.

  13. Methodology for designing and manufacturing complex biologically inspired soft robotic fluidic actuators: prosthetic hand case study.

    Science.gov (United States)

    Thompson-Bean, E; Das, R; McDaid, A

    2016-10-31

    We present a novel methodology for the design and manufacture of complex biologically inspired soft robotic fluidic actuators. The methodology is applied to the design and manufacture of a prosthetic for the hand. Real human hands are scanned to produce a 3D model of a finger, and pneumatic networks are implemented within it to produce a biomimetic bending motion. The finger is then partitioned into material sections, and a genetic algorithm based optimization, using finite element analysis, is employed to discover the optimal material for each section. This is based on two biomimetic performance criteria. Two sets of optimizations using two material sets are performed. Promising optimized material arrangements are fabricated using two techniques to validate the optimization routine, and the fabricated and simulated results are compared. We find that the optimization is successful in producing biomimetic soft robotic fingers and that fabrication of the fingers is possible. Limitations and paths for development are discussed. This methodology can be applied for other fluidic soft robotic devices.

  14. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    Science.gov (United States)

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder, using neural signals to both (i) select the active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
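    Bit rates like the 2.4 bits/s reported here are commonly computed with the Wolpaw information-transfer-rate formula. The sketch below applies it to the record's figures (8 targets, 93% correct); the per-selection durations are assumptions chosen only to show how accuracy per trial converts to bits per second — the paper's actual trial timing may differ.

    ```python
    import math

    def wolpaw_bits_per_trial(n_targets, accuracy):
        """Wolpaw ITR: bits conveyed by one selection among n_targets."""
        if accuracy <= 1 / n_targets:
            return 0.0  # no better than chance
        n, p = n_targets, accuracy
        return (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))

    bits = wolpaw_bits_per_trial(8, 0.93)
    print(f"{bits:.2f} bits per selection")
    for duration in (0.8, 1.0, 1.2):  # hypothetical seconds per selection
        print(f"  at {duration:.1f} s/trial: {bits / duration:.2f} bits/s")
    ```

    At 93% accuracy over 8 targets this gives roughly 2.4 bits per selection, consistent with the reported rate if selections take about one second.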

  15. Architecture and prototype of human-machine interface with mobile robotic device

    International Nuclear Information System (INIS)

    Dyumin, A.A.; Sorokoumov, P.S.; Chepin, E.V.; Urvanov, G.A.

    2013-01-01

    The possibility of controlling a mobile robotic device (MRD) is analyzed and a prototype control system is described. It is established that, for controlling an MRD, it is expedient to use a brain-computer interface. A system for interpreting information obtained from the operator's brain has been developed and used in the proposed prototype control system

  16. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
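    The "simple analytic geometry" idea behind single-camera position estimation can be sketched as follows: back-project a clicked pixel through a pinhole camera model and intersect the resulting ray with the ground plane. The intrinsics, camera height, and tilt below are invented for illustration; the paper obtains them by calibration.

    ```python
    import math

    fx = fy = 600.0          # focal lengths in pixels (hypothetical)
    cx, cy = 320.0, 240.0    # principal point (hypothetical)
    cam_height = 0.5         # camera 0.5 m above the ground plane
    tilt = math.radians(45)  # pitched down 45 degrees from horizontal

    def pixel_to_ground(u, v):
        """Intersect the back-projected ray of pixel (u, v) with z = 0."""
        # Normalized ray in camera coordinates (x right, y down, z forward).
        x, y = (u - cx) / fx, (v - cy) / fy
        # World-frame ray direction for a camera tilted about its x-axis
        # (world: X right, Y forward, Z up).
        dz = -(y * math.cos(tilt) + math.sin(tilt))  # world-up component
        if dz >= 0:
            return None  # ray never reaches the ground
        s = cam_height / -dz
        X = s * x
        Y = s * (math.cos(tilt) - y * math.sin(tilt))
        return (X, Y)

    p = pixel_to_ground(320.0, 240.0)  # click at the principal point
    print(f"ground point: ({p[0]:.3f}, {p[1]:.3f}) m")
    ```

    With these numbers the optical axis hits the ground 0.5 m ahead of the camera, which is a quick sanity check on the geometry (45-degree tilt from 0.5 m height).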

  17. Line-feature-based calibration method of structured light plane parameters for robot hand-eye system

    Science.gov (United States)

    Qi, Yuhan; Jing, Fengshui; Tan, Min

    2013-03-01

    For monocular structured-light vision measurement, it is essential to calibrate the structured light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of structured light plane parameters for a robot hand-eye system is proposed. Structured light stripes are selected as the calibrating primitive elements, and the robot moves from one calibrating position to another under a constraint such that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and then the structured light plane can be determined. As the robot's motion may affect the effectiveness of calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments were implemented, and the results reveal that the calibration accuracy can meet the precision requirement of robot thick-plate welding. Finally, analysis and discussion are provided to illustrate that the method has high efficiency, fit for industrial in-situ calibration.
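    The geometric core of the method is that two misaligned (non-parallel) stripe lines lying in the light plane determine that plane: its normal is the cross product of the two line directions. The sketch below uses invented line coordinates, not real calibration data.

    ```python
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Two coplanar stripe lines in camera coordinates, each given as
    # (point on line, direction vector). Values are illustrative.
    p1, d1 = (0.0, 0.0, 1.0), (1.0, 0.0, 0.2)
    p2, d2 = (1.0, 0.0, 1.2), (1.0, 0.5, 0.45)

    normal = cross(d1, d2)      # plane normal from the two line directions
    offset = -dot(normal, p1)   # plane equation: normal . X + offset = 0

    # Sanity check: points along the second line also satisfy the plane.
    for t in (-1.0, 0.0, 2.0):
        q = tuple(p + t * d for p, d in zip(p2, d2))
        print(f"residual at t={t}: {dot(normal, q) + offset:.6f}")
    ```

    In the actual calibration, the two lines are first recovered in the camera frame from stripe images taken at two constrained robot poses; the plane fit then proceeds exactly as above.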

  18. A three-arm (laparoscopic, hand-assisted, and robotic) matched-case analysis of intraoperative and postoperative outcomes in minimally invasive colorectal surgery.

    Science.gov (United States)

    Patel, Chirag B; Ragupathi, Madhu; Ramos-Valadez, Diego I; Haas, Eric M

    2011-02-01

    Robotic-assisted laparoscopic surgery is an emerging modality in the field of minimally invasive colorectal surgery. However, there is a dearth of data comparing outcomes with other minimally invasive techniques. We present a 3-arm (conventional, hand-assisted, and robotic) matched-case analysis of intraoperative and short-term outcomes in patients undergoing minimally invasive colorectal procedures. Between August 2008 and October 2009, 70 robotic cases of the rectum and rectosigmoid were performed. Thirty of these were organized into triplets with conventional and hand-assisted cases based on the following 6 matching criteria: 1) surgeon; 2) sex; 3) body mass index; 4) operative procedure; 5) pathology; and 6) history of neoadjuvant therapy in malignant cases. Demographics, intraoperative parameters, and postoperative outcomes were assessed. Pathological outcomes were analyzed in malignant cases. Data were stratified by postoperative diagnosis and operative procedure. There was no significant difference in intraoperative complications, estimated blood loss (126.1 ± 98.5 mL overall), or postoperative morbidity and mortality among the groups. The robotic technique required a significantly longer operative time than both the conventional laparoscopic and the hand-assisted techniques. The robotic approach results in short-term outcomes comparable to conventional and hand-assisted laparoscopic approaches for benign and malignant diseases of the rectum and rectosigmoid. With 3-dimensional visualization, additional freedom of motion, and improved ergonomics, this enabling technology may play an important role when performing colorectal procedures involving the pelvic anatomy.

  19. Introduction to Autonomous Mobile Robotics Using "Lego Mindstorms" NXT

    Science.gov (United States)

    Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-01-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…

  20. Ownership and Agency of an Independent Supernumerary Hand Induced by an Imitation Brain-Computer Interface.

    Science.gov (United States)

    Bashford, Luke; Mehring, Carsten

    2016-01-01

    To study body ownership and control, illusions that elicit these feelings in non-body objects are widely used. Classically introduced with the Rubber Hand Illusion, these illusions have more recently been replicated in virtual reality and by using brain-computer interfaces. Traditionally these illusions investigate the replacement of a body part by an artificial counterpart; however, as brain-computer interface research develops, it offers the possibility to explore the case where non-body objects are controlled in addition to movements of our own limbs. We therefore propose a new illusion designed to test the feeling of ownership and control of an independent supernumerary hand. Subjects are under the impression that they control a virtual reality hand via a brain-computer interface, but in reality there is no causal connection between brain activity and virtual hand movement; instead, correct movements are observed with 80% probability. These imitation brain-computer interface trials are interspersed with movements of both of the subjects' real hands, which are in view throughout the experiment. We show that subjects develop strong feelings of ownership and control over the third hand, despite receiving only visual feedback with no causal link to the actual brain signals. Our illusion is crucially different from previously reported studies, as we demonstrate independent ownership and control of the third hand without loss of ownership in the real hands.

  1. From self-observation to imitation: visuomotor association on a robotic hand.

    Science.gov (United States)

    Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo

    2008-04-15

    Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degrees-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be, at the neuronal level, founded on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots as well as to shed new light on the cognitive processes at play in early human cognitive development.

  2. Learning in robotic manipulation: The role of dimensionality reduction in policy search methods. Comment on "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" by Marco Santello et al.

    Science.gov (United States)

    Ficuciello, Fanny; Siciliano, Bruno

    2016-07-01

    A question that often arises, among researchers working on artificial hands and robotic manipulation, concerns the real meaning of synergies. Namely, are they a realistic representation of the central nervous system control of manipulation activities at different levels and of the sensory-motor manipulation apparatus of the human being, or do they constitute just a theoretical framework exploiting analytical methods to simplify the representation of grasping and manipulation activities? Apparently, this is not a simple question to answer and, in this regard, many minds from the field of neuroscience and robotics are addressing the issue [1]. The interest of robotics is definitely oriented towards the adoption of synergies to tackle the control problem of devices with high number of degrees of freedom (DoFs) which are required to achieve motor and learning skills comparable to those of humans. The synergy concept is useful for innovative underactuated design of anthropomorphic hands [2], while the resulting dimensionality reduction simplifies the control of biomedical devices such as myoelectric hand prostheses [3]. Synergies might also be useful in conjunction with the learning process [4]. This aspect is less explored since few works on synergy-based learning have been realized in robotics. In learning new tasks through trial-and-error, physical interaction is important. On the other hand, advanced mechanical designs such as tendon-driven actuation, underactuated compliant mechanisms and hyper-redundant/continuum robots might exhibit enhanced capabilities of adapting to changing environments and learning from exploration. In particular, high DoFs and compliance increase the complexity of modelling and control of these devices. An analytical approach to manipulation planning requires a precise model of the object, an accurate description of the task, and an evaluation of the object affordance, which all make the process rather time consuming. The integration of
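    The dimensionality reduction at the heart of the synergy framework is typically principal component analysis over recorded joint-angle postures. The sketch below generates synthetic postures driven by a single invented "synergy" direction and recovers that direction by power iteration on the covariance matrix; the joint count, noise level, and synergy vector are all assumptions for illustration.

    ```python
    import random

    random.seed(4)
    DIM = 10  # hypothetical number of hand joints
    synergy = [1.0 if i % 2 == 0 else -0.5 for i in range(DIM)]

    # Synthetic postures: one scalar activation of the synergy plus noise.
    postures = []
    for _ in range(200):
        a = random.gauss(0, 1)
        postures.append([a * s + random.gauss(0, 0.1) for s in synergy])

    mean = [sum(p[i] for p in postures) / len(postures) for i in range(DIM)]
    centered = [[p[i] - mean[i] for i in range(DIM)] for p in postures]

    def cov_times(v):
        """Multiply the sample covariance matrix by v without forming it."""
        out = [0.0] * DIM
        for row in centered:
            proj = sum(r * x for r, x in zip(row, v))
            for i in range(DIM):
                out[i] += proj * row[i]
        return [o / len(centered) for o in out]

    v = [1.0] * DIM
    for _ in range(50):  # power iteration toward the first principal component
        w = cov_times(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]

    sn = sum(s * s for s in synergy) ** 0.5
    cos = abs(sum(s * x for s, x in zip(synergy, v))) / sn
    print(f"alignment of first PC with true synergy: {cos:.3f}")
    ```

    The recovered component aligns closely with the generating direction, which is the sense in which a few synergies can summarize and control a high-DoF hand.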

  3. Surgical bedside master console for neurosurgical robotic system.

    Science.gov (United States)

    Arata, Jumpei; Kenmotsu, Hajime; Takagi, Motoki; Hori, Tatsuya; Miyagi, Takahiro; Fujimoto, Hideo; Kajita, Yasukazu; Hayashi, Yuichiro; Chinzei, Kiyoyuki; Hashizume, Makoto

    2013-01-01

    We are currently developing a neurosurgical robotic system that facilitates access to residual tumors and improves brain tumor removal surgical outcomes. The system combines conventional and robotic surgery, allowing for a quick conversion between the procedures. This concept requires a new master console that can be positioned at the surgical bedside and be sterilized. The master console was developed using new technologies, such as a parallel mechanism and pneumatic sensors. The parallel mechanism is a purely passive 5-DOF (degrees of freedom) joystick based on the authors' haptic research. The parallel mechanism enables motion input of conventional brain tumor removal surgery with a compact, intuitive interface that can be used in a conventional surgical environment. In addition, the pneumatic sensors implemented on the mechanism provide an intuitive interface and electrically isolate the tool parts from the mechanism so they can be easily sterilized. The 5-DOF parallel mechanism is compact (17 cm width, 19 cm depth, and 15 cm height), provides a 50 × 50 × 50 mm and 90° workspace, and is highly backdrivable (0.27 N of resistance force representing the surgical motion). The evaluation tests revealed that the pneumatic sensors can properly measure suction strength, grasping force, and hand contact. In addition, an installability test showed that the master console can be used in a conventional surgical environment. The proposed master console design was shown to be feasible for operative neurosurgery based on comprehensive testing. This master console is currently being tested for master-slave control with a surgical robotic system.

  4. The Ninapro database: A resource for sEMG naturally controlled robotic hand prosthetics.

    Science.gov (United States)

    Atzori, Manfredo; Muller, Henning

    2015-01-01

    The dexterous natural control of robotic prosthetic hands with non-invasive techniques is still a challenge: surface electromyography gives some control capabilities, but these are limited, often not natural, and require long training times; the application of pattern recognition techniques has recently started to be applied in practice. While results in the scientific literature are promising, they have to be improved to meet real needs. The Ninapro database aims to improve the field of naturally controlled robotic hand prosthetics by permitting worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark database. Currently, the Ninapro database includes data from 67 intact subjects and 11 amputated subjects performing approximately 50 different movements. The data are aimed at permitting the study of the relationships between surface electromyography, kinematics, and dynamics. The Ninapro acquisition protocol was created in order to be easy to reproduce. Currently, the number of datasets included in the database is increasing thanks to the collaboration of several research groups.

  5. An integrated neuro-robotic interface for stroke rehabilitation using the NASA X1 powered lower limb exoskeleton.

    Science.gov (United States)

    He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L

    2014-01-01

    Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals--the first step towards the development of a brain-machine interface (BMI) system to the X1 exoskeleton--is demonstrated.

  6. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  7. Concept for a large master/slave-controlled robotic hand

    Science.gov (United States)

    Grissom, William A.; Abdallah, Mahmoud A.; White, Carl L.

    1988-01-01

    A strategy is presented for the design and construction of a large master/slave-controlled, five-finger robotic hand. Each of the five fingers will possess four independent axes each driven by a brushless DC servomotor and, thus, four degrees-of-freedom. It is proposed that commercially available components be utilized as much as possible to fabricate a working laboratory model of the device with an anticipated overall length of two-to-four feet (0.6 to 1.2 m). The fingers are to be designed so that proximity, tactile, or force/torque sensors can be imbedded in their structure. In order to provide for the simultaneous control of the twenty independent hand joints, a multilevel master/slave control strategy is proposed in which the operator wears a specially instrumented glove which produces control signals corresponding to the finger configurations and which is capable of conveying sensor feedback signals to the operator. Two dexterous hand master devices are currently commercially available for this application with both undergoing continuing development. A third approach to be investigated for the master control mode is the use of real-time image processing of a specially patterned master glove to provide the respective control signals for positioning the multiple finger joints.

  8. A Human Machine Interface for EVA

    Science.gov (United States)

    Hartmann, L.

    , the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques.
Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and

  9. Versatile robotic interface to evaluate, enable and train locomotion and balance after neuromotor disorders

    NARCIS (Netherlands)

    Dominici, Nadia; Keller, Urs; Vallery, Heike; Friedli, Lucia; van den Brand, Rubia; Starkey, Michelle L; Musienko, Pavel; Riener, Robert; Courtine, Grégoire

    Central nervous system (CNS) disorders distinctly impair locomotor pattern generation and balance, but technical limitations prevent independent assessment and rehabilitation of these subfunctions. Here we introduce a versatile robotic interface to evaluate, enable and train pattern generation and

  10. In the Hands of Service Robots

    DEFF Research Database (Denmark)

    Peronard, Jean-Paul

    the benefits of applying robots in professional service, e.g. healthcare, are extensive, the research into consumer motivation is limited. There is a need for a greater understanding of the individual differences in beliefs and perception in relation to service technology in general and in particular for service...... robots. Therefore, this article proposes a general typology of consumer attitudes and expectations towards service robots. Four types of values are identified and labelled critical, practical, affectionate, and desirable. Based on these values, four consumer types are then theoretically developed and may

  11. Potential Applications of Light Robotics in Nanomedicine

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    We have recently pioneered a new generation of 3D micro-printed light robotic structures with multi-functional biophotonics capabilities. The uniqueness of this light robotic approach is that even if a micro-biologist aims at exploring e.g. cell biology at nanoscopic scales, the main support...... of each micro-robotic structure can be 3D printed to have a size and shape that allows convenient laser manipulation in full 3D – even using relatively modest numerical aperture optics. An optical robot is typically equipped with a number of 3D printed "track-balls" that allow for real-time 3D light...... manipulation with six-degrees-of-freedom. This creates a drone-like functionality where each light-driven robot can be e.g. joystick-controlled and provide the user a feeling of stretching his/her hands directly into and interacting with the biologic micro-environment. The light-guided robots can thus act...

  12. Design and characterization of the OpenWrist: A robotic wrist exoskeleton for coordinated hand-wrist rehabilitation.

    Science.gov (United States)

    Pezent, Evan; Rose, Chad G; Deshpande, Ashish D; O'Malley, Marcia K

    2017-07-01

    Robotic devices have been clinically verified for use in long duration and high intensity rehabilitation needed for motor recovery after neurological injury. Targeted and coordinated hand and wrist therapy, often overlooked in rehabilitation robotics, is required to regain the ability to perform activities of daily living. To this end, a new coupled hand-wrist exoskeleton has been designed. This paper details the design of the wrist module and several human-related considerations made to maximize its potential as a coordinated hand-wrist device. The serial wrist mechanism has been engineered to facilitate donning and doffing for impaired subjects and to ensure compatibility with the hand module in virtual and assisted grasping tasks. Several other practical requirements have also been addressed, including device ergonomics, clinician-friendliness, and ambidextrous reconfigurability. The wrist module's capabilities as a rehabilitation device are quantified experimentally in terms of functional workspace and dynamic properties. Specifically, the device possesses favorable performance in terms of range of motion, torque output, friction, and closed-loop position bandwidth when compared with existing devices. The presented wrist module's performance and operational considerations support its use in a wide range of future clinical investigations.

  13. A Multi-purpose Rescue Vehicle and a human–robot interface architecture for remote assistance in ITER

    International Nuclear Information System (INIS)

    Soares, João; Vale, Alberto; Ventura, Rodrigo

    2015-01-01

    Highlights: • Design of an omnidirectional vehicle equipped with cameras and laser range finders. • Two robotic manipulators that slide over the vehicle's body to perform independent tasks. • Architecture to connect the control system, communication, power, navigation and HMI. • An immersive interface HMI with augmented reality features with head mounted display. - Abstract: Remote handling (RH) plays an important role in nuclear test facilities, such as in ITER, for in-vessel and ex-vessel maintenance operations. Unexpected situations may occur when RH devices fail. Since no human being is allowed on site during RH operations, a Multi-purpose Rescue Vehicle (MPRV) is required to provide support in situ. This paper proposes a design of an MPRV, i.e., a mobile platform equipped with different sensors and two manipulators with different sets of end-effectors. A human–machine interface is also proposed to remotely operate the MPRV and to carry out rescue and recovery operations.

  14. A Multi-purpose Rescue Vehicle and a human–robot interface architecture for remote assistance in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Soares, João [Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto, E-mail: avale@ipfn.tecnico.ulisboa.pt [Instituto de Plasmas e Fusão Nuclear, Instituto SuperiorTécnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ventura, Rodrigo, E-mail: rodrigo.ventura@isr.tecnico.ulisboa.pt [Laboratório de Robótica e Sistemas em Engenharia eCiência, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • Design of an omnidirectional vehicle equipped with cameras and laser range finders. • Two robotic manipulators that slide over the vehicle's body to perform independent tasks. • Architecture to connect the control system, communication, power, navigation and HMI. • An immersive interface HMI with augmented reality features with head mounted display. - Abstract: Remote handling (RH) plays an important role in nuclear test facilities, such as in ITER, for in-vessel and ex-vessel maintenance operations. Unexpected situations may occur when RH devices fail. Since no human being is allowed on site during RH operations, a Multi-purpose Rescue Vehicle (MPRV) is required to provide support in situ. This paper proposes a design of an MPRV, i.e., a mobile platform equipped with different sensors and two manipulators with different sets of end-effectors. A human–machine interface is also proposed to remotely operate the MPRV and to carry out rescue and recovery operations.

  15. ISS Robotic Student Programming

    Science.gov (United States)

    Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.

    2016-01-01

    The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.

  16. Humanoid Robotics: Real-Time Object Oriented Programming

    Science.gov (United States)

    Newton, Jason E.

    2005-01-01

    Programming of robots today is often done in a procedural fashion, where object-oriented programming is not incorporated. In order to keep a robust architecture allowing for easy expansion of capabilities and a truly modular design, object-oriented programming is required. However, concepts in object-oriented programming are not typically applied to a real-time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework abstracting control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  17. Touchfree medical interfaces.

    Science.gov (United States)

    Rossol, Nathaniel; Cheng, Irene; Rui Shen; Basu, Anup

    2014-01-01

    Real-time control of visual display systems via mid-air hand gestures offers many advantages over traditional interaction modalities. In medicine, for example, it allows a practitioner to adjust display values, e.g. contrast or zoom, on a medical visualization interface without the need to re-sterilize the interface. However, when users are holding a small tool (such as a pen, surgical needle, or computer stylus) the need to constantly put the tool down in order to make hand gesture interactions is not ideal. This work presents a novel interface that automatically adjusts for gesturing with hands and hand-held tools to precisely control medical displays. The novelty of our interface is that it uses a single set of gestures designed to be equally effective for fingers and hand-held tools without using markers. This type of interface was previously not feasible with low-resolution depth sensors such as Kinect, but is now achieved by using the recently released Leap Motion controller. Our interface is validated through a user study on a group of people given the task of adjusting parameters on a medical image.

  18. Applications of artificial intelligence to space station and automated software techniques: High level robot command language

    Science.gov (United States)

    Mckee, James W.

    1989-01-01

    The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu-driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the work space and to create a near optimum path. The system will contain the software to generate the required robot instructions.
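
    The automatic obstacle-avoiding path planning mentioned in this abstract can be illustrated with a standard grid-based A* search. This is a generic sketch; the abstract does not specify the planner actually used:

    ```python
    import heapq

    def plan_path(grid, start, goal):
        """A* search on a 4-connected occupancy grid (1 = obstacle).
        Returns a list of (row, col) cells from start to goal, or None
        if the goal is unreachable."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start, [start])]
        seen = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    heapq.heappush(frontier,
                                   (cost + 1 + h((nr, nc)), cost + 1,
                                    (nr, nc), path + [(nr, nc)]))
        return None
    ```

    With an admissible heuristic such as the Manhattan distance used here, A* returns a shortest obstacle-free path, which matches the "near optimum path" goal of the abstract.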

  19. Robotics, stem cells, and brain-computer interfaces in rehabilitation and recovery from stroke: updates and advances.

    Science.gov (United States)

    Boninger, Michael L; Wechsler, Lawrence R; Stein, Joel

    2014-11-01

    The aim of this study was to describe the current state and latest advances in robotics, stem cells, and brain-computer interfaces in rehabilitation and recovery for stroke. The authors of this summary recently reviewed this work as part of a national presentation. The article presents the information included in each area. Each area has seen great advances and challenges as products move to market and experiments are ongoing. Robotics, stem cells, and brain-computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial.

  20. Robotic Software Integration Using MARIE

    Directory of Open Access Journals (Sweden)

    Carle Côté

    2006-03-01

    Full Text Available This paper presents MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems. By using a generic communication framework, MARIE aims to create a flexible distributed component system that allows robotics developers to share software programs and algorithms, and design prototypes rapidly based on their own integration needs. The use of MARIE is illustrated with the design of a socially interactive autonomous mobile robot platform capable of map building, localization, navigation, task scheduling, sound source localization, tracking and separation, speech recognition and generation, visual tracking, message reading and graphical interaction using a touch screen interface.
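
    The component-integration idea behind such middleware can be illustrated with a minimal publish/subscribe mediator. This is a schematic Python sketch of the pattern only; MARIE itself is a C++ framework with richer adapter, splitter and mediator components:

    ```python
    class Mediator:
        """Minimal publish/subscribe mediator: components communicate
        through named topics instead of calling each other directly,
        so independently developed modules can be wired together."""

        def __init__(self):
            self._subs = {}

        def subscribe(self, topic, callback):
            """Register a callback to receive messages on a topic."""
            self._subs.setdefault(topic, []).append(callback)

        def publish(self, topic, message):
            """Deliver a message to every subscriber of the topic."""
            for cb in self._subs.get(topic, []):
                cb(message)
    ```

    A localization component might publish on a "pose" topic while navigation and logging components subscribe to it, none of them needing to know about the others.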

  1. Tactile Sensing for Dexterous Robotic Hands

    Science.gov (United States)

    Martin, Toby B.

    2000-01-01

    Robotic systems will be used as precursors to human exploration to explore the solar system and expand our knowledge of planetary surfaces. Robotic systems will also be used to build habitats and infrastructure required for human presence in space and on other planetary surfaces. Such robots will require a high level of intelligence and automation. The ability to flexibly manipulate their physical environment is one characteristic that makes humans so effective at these building and exploring tasks. The development of a generic autonomous grasping capability will greatly enhance the efficiency and ability of robotics to build, maintain and explore. Tele-operating a robot over the vast distances of space, with long communication delays, has proven to be troublesome. Having an autonomous grasping capability that can react in real-time to disturbances or adapt to generic objects, without operator intervention, will reduce the probability of mishandled tools and samples and reduce the number of re-grasp attempts due to dropping. One aspect that separates humans from machines is a rich sensor set. We have the ability to feel objects and respond to forces and textures. The development of touch or tactile sensors for use on a robot that emulates human skin and nerves is the basis for this discussion. We will discuss the use of new piezo-electric and resistive materials that have emerged on the market with the intention of developing a touch sensitive sensor. With viable tactile sensors we will be one step closer to developing an autonomous grasping capability.

  2. A Hands-Free Interface for Controlling Virtual Electric-Powered Wheelchairs

    Directory of Open Access Journals (Sweden)

    Tauseef Gulrez

    2016-03-01

    Full Text Available This paper focuses on how to provide mobility to people with motor impairments with the integration of robotics and wearable computing systems. The burden of learning to control powered mobility devices should not fall entirely on the people with disabilities. Instead, the system should be able to learn the user's movements. This requires learning the degrees of freedom of user movement, and mapping these degrees of freedom onto electric-powered wheelchair (EPW) controls. Such mapping cannot be static because in some cases users will eventually improve with practice. Our goal in this paper is to present a hands-free interface (HFI) that can be customized to the varying needs of EPW users with appropriate mapping between the users' degrees of freedom and EPW controls. EPW users with different impairment types must learn how to operate a wheelchair with their residual body motions. EPW interfaces are often customized to fit their needs. An HFI utilizes the signals generated by the user's voluntary shoulder and elbow movements and translates them into an EPW control scheme. We examine the correlation of kinematics that occur during moderately paced repetitive elbow and shoulder movements for a range of motion. The output of upper-limb movements (shoulder and elbows) was tested on six participants, and compared with an output of a precision position tracking (PPT) optical system for validation. We find strong correlations between the HFI signal counts and PPT optical system during different upper-limb movements (ranging from r = 0.86 to 0.94). We also tested the HFI performance in driving the EPW in a virtual reality environment on a spinal-cord-injured (SCI) patient. The results showed that the HFI was able to adapt and translate the residual mobility of the SCI patient into efficient control commands within a week's training. The results are encouraging for the development of more efficient HFIs, especially for wheelchair users.
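
    The reported agreement between the HFI signal counts and the optical tracker (r = 0.86 to 0.94) is a Pearson correlation over two equally sampled signals, which can be computed as below. This is a generic sketch of the statistic, not the study's processing pipeline:

    ```python
    from math import sqrt

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length
        signals: covariance normalized by both standard deviations."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    ```

    Values near +1 indicate the interface output tracks the ground-truth motion almost linearly, which is how the study validates the HFI against the PPT system.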

  3. Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting

    Science.gov (United States)

    Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.

    2017-12-01

    Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation and efficiency enhancement in agricultural processes has not been extensive yet. This research proposed a low-cost agricultural robot architecture. The robot could help farmers survey their farm area, treat the tomato plants and harvest the ripe tomatoes. Communication between farmer and robot was facilitated by a wireless link using radio waves to reach a wide area (120 m radius). The radio link was combined with Bluetooth to simplify the communication between the robot and the farmer's Android smartphone. The robot was equipped with a camera, so the farmers could survey the farm situation through a 7-inch monitor display in real time. The farmers controlled the robot and arm movement through a user interface on an Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right and turn left) and cut the spotty leaves or harvest the ripe tomatoes.
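
    A control scheme like the one described, UI icons translated into commands sent over a Bluetooth link, might be sketched as a simple lookup table. The byte values and function name here are hypothetical; the abstract does not specify the actual protocol:

    ```python
    # Hypothetical single-byte command protocol between the Android UI
    # and the robot's Bluetooth link.
    COMMANDS = {
        "forward": b"F", "reverse": b"B",
        "left": b"L", "right": b"R",
        "cut": b"C", "stop": b"S",
    }

    def encode_command(action):
        """Translate a UI action name into the byte sent over Bluetooth."""
        try:
            return COMMANDS[action]
        except KeyError:
            raise ValueError(f"unknown action: {action!r}")
    ```

    Keeping the wire protocol to single bytes keeps latency low on a slow serial link, while the smartphone UI stays free to rename or rearrange its icons.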

  4. Robotic training and kinematic analysis of arm and hand after incomplete spinal cord injury: a case study.

    Science.gov (United States)

    Kadivar, Z; Sullivan, J L; Eng, D P; Pehlivan, A U; O'Malley, M K; Yozbatiran, N; Francisco, G E

    2011-01-01

    Regaining upper extremity function is the primary concern of persons with tetraplegia caused by spinal cord injury (SCI). Robotic rehabilitation has been inadequately tested and underutilized in rehabilitation of the upper extremity in the SCI population. Given the acceptance of robotic training in stroke rehabilitation and SCI gait training, coupled with recent evidence that the spinal cord, like the brain, demonstrates plasticity that can be catalyzed by repetitive movement training such as that available with robotic devices, it is probable that robotic upper-extremity training of persons with SCI could be clinically beneficial. The primary goal of this pilot study was to test the feasibility of using a novel robotic device for the upper extremity (RiceWrist) and to evaluate robotic rehabilitation using the RiceWrist in a tetraplegic person with incomplete SCI. A 24-year-old male with incomplete SCI participated in 10 sessions of robot-assisted therapy involving intensive upper limb training. The subject successfully completed all training sessions and showed improvements in movement smoothness, as well as in hand function. Results from this study provide valuable information for further developments of robotic devices for upper limb rehabilitation in persons with SCI. © 2011 IEEE
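
    Movement smoothness, one of the outcomes reported above, is often quantified with a jerk-based metric in rehabilitation robotics. Below is a minimal sketch of one common formulation (negative dimensionless integrated squared jerk); whether the study used exactly this metric is an assumption:

    ```python
    def smoothness_jerk(velocity, dt):
        """Negative dimensionless integrated squared jerk of a 1-D
        velocity profile. Values closer to zero indicate smoother
        movement. Jerk is obtained by double finite-differencing the
        velocity samples; the result is normalized by duration and
        peak velocity to make it dimensionless."""
        acc = [(velocity[i + 1] - velocity[i]) / dt
               for i in range(len(velocity) - 1)]
        jerk = [(acc[i + 1] - acc[i]) / dt for i in range(len(acc) - 1)]
        duration = dt * (len(velocity) - 1)
        v_peak = max(abs(v) for v in velocity)
        return -sum(j * j for j in jerk) * dt * duration ** 3 / v_peak ** 2
    ```

    A smooth bell-shaped reach scores near zero, while a tremulous, segmented movement accumulates large squared jerk and scores far more negative, which is why the metric is sensitive to recovery over therapy sessions.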

  5. Fast and Efficient Radiological Interventions via a Graphical User Interface Commanded Magnetic Resonance Compatible Robotic Device

    Science.gov (United States)

    Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos

    2011-01-01

    The graphical user interface for an MR compatible robotic device has the capability of displaying oblique MR slices in 2D and a 3D virtual environment along with the representation of the robotic arm in order to swiftly complete the intervention. Using the advantages of the MR modality the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067

  6. Optimization-Based Controllers for Robotics Applications (OCRA): The Case of iCub’s Whole-Body Control

    Directory of Open Access Journals (Sweden)

    Jorhabib G. Eljaik

    2018-03-01

    Full Text Available OCRA stands for Optimization-based Control for Robotics Applications. It consists of a set of platform-independent libraries which facilitates the development of optimization-based controllers for articulated robots. Hierarchical, weighted, and hybrid control strategies can easily be implemented using these tools. The generic interfaces provided by OCRA allow different robots to use the exact same controllers. OCRA also allows users to specify high-level objectives via tasks. These tasks provide an intuitive way of generating complex behaviors and can be specified in XML format. To illustrate the use of OCRA, an implementation of interest to this research topic for the humanoid robot iCub is presented.

  7. Interface Based on Electrooculography for Velocity Control of a Robot Arm

    Directory of Open Access Journals (Sweden)

    Eduardo Iáñez

    2010-01-01

    Full Text Available This paper describes a technique based on electrooculography to control a robot arm. This technique detects the movement of the eyes, measuring the difference of potential between the cornea and the retina by placing electrodes around the ocular area. The processing algorithm developed to obtain the position of the eye and the blink of the user is explained. The output of the processing algorithm offers, apart from the direction, four different values (zero to three) to control the velocity of the robot arm according to how much the user is looking in one direction. This allows controlling two degrees of freedom of a robot arm with eye movements. The blink has been used to mark some targets in tests. In this paper, the experimental results obtained with a real robot arm are shown.
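
    The four-level velocity mapping described in this abstract can be sketched as a dead band plus quantization of the horizontal EOG amplitude: the further the user looks to one side, the higher the commanded velocity level. The microvolt thresholds and function name are hypothetical calibration values, not figures from the paper:

    ```python
    def eog_command(signal_uv, dead_band=50.0, step=100.0):
        """Map a horizontal EOG amplitude (microvolts) to a
        (direction, velocity-level) pair, where the level is 0-3 as
        in the abstract. Inside the dead band no motion is commanded;
        beyond it, each additional `step` microvolts raises the level,
        saturating at 3."""
        direction = "left" if signal_uv < 0 else "right"
        magnitude = abs(signal_uv)
        if magnitude < dead_band:
            return ("center", 0)
        level = min(3, 1 + int((magnitude - dead_band) // step))
        return (direction, level)
    ```

    The dead band suppresses drift and small involuntary eye movements, a common concern in EOG interfaces, while the saturation caps the arm's speed.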

  8. Robot Teleoperation and Perception Assistance with a Virtual Holographic Display

    Science.gov (United States)

    Goddard, Charles O.

    2012-01-01

    Teleoperation of robots in space from Earth has historically been difficult. Speed of light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six degree of freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus tracking position and rotation inside of it. Using this system, a possible interface for this sort of robot control is proposed.

  9. The Effects of Upper-Limb Training Assisted with an Electromyography-Driven Neuromuscular Electrical Stimulation Robotic Hand on Chronic Stroke

    Directory of Open Access Journals (Sweden)

    Chingyi Nam

    2017-12-01

    Full Text Available Background: Impaired hand dexterity is a major disability of the upper limb after stroke. An electromyography (EMG)-driven neuromuscular electrical stimulation (NMES) robotic hand was designed previously, whereas its rehabilitation effects were not investigated. Objectives: This study aims to investigate the rehabilitation effectiveness of the EMG-driven NMES-robotic hand-assisted upper-limb training on persons with chronic stroke. Method: A clinical trial with single-group design was conducted on chronic stroke participants (n = 15) who received 20 sessions of EMG-driven NMES-robotic hand-assisted upper-limb training. The training effects were evaluated by pretraining, posttraining, and 3-month follow-up assessments with the clinical scores of the Fugl-Meyer Assessment (FMA), the Action Research Arm Test (ARAT), the Wolf Motor Function Test, the Motor Functional Independence Measure, and the Modified Ashworth Scale (MAS). Improvements in the muscle coordination across the sessions were investigated by EMG parameters, including EMG activation level and Co-contraction Indexes (CIs) of the target muscles in the upper limb. Results: Significant improvements in the FMA shoulder/elbow and wrist/hand scores (P < 0.05), the ARAT (P < 0.05), and in the MAS (P < 0.05) were observed after the training and sustained 3 months later. The EMG parameters indicated a significant decrease of the muscle activation level in flexor digitorum (FD) and biceps brachii (P < 0.05), as well as a significant reduction of CIs in the muscle pairs of FD and triceps brachii and biceps brachii and triceps brachii (P < 0.05). Conclusion: The upper-limb training integrated with the assistance from the EMG-driven NMES-robotic hand is effective for the improvements of the voluntary motor functions and the muscle coordination in the proximal and distal joints. Furthermore, the motor improvement after the training could be maintained till 3 months later. Trial registration

  10. The Effects of Upper-Limb Training Assisted with an Electromyography-Driven Neuromuscular Electrical Stimulation Robotic Hand on Chronic Stroke.

    Science.gov (United States)

    Nam, Chingyi; Rong, Wei; Li, Waiming; Xie, Yunong; Hu, Xiaoling; Zheng, Yongping

    2017-01-01

    Impaired hand dexterity is a major disability of the upper limb after stroke. An electromyography (EMG)-driven neuromuscular electrical stimulation (NMES) robotic hand was designed previously, whereas its rehabilitation effects were not investigated. This study aims to investigate the rehabilitation effectiveness of the EMG-driven NMES-robotic hand-assisted upper-limb training on persons with chronic stroke. A clinical trial with single-group design was conducted on chronic stroke participants ( n  = 15) who received 20 sessions of EMG-driven NMES-robotic hand-assisted upper-limb training. The training effects were evaluated by pretraining, posttraining, and 3-month follow-up assessments with the clinical scores of the Fugl-Meyer Assessment (FMA), the Action Research Arm Test (ARAT), the Wolf Motor Function Test, the Motor Functional Independence Measure, and the Modified Ashworth Scale (MAS). Improvements in the muscle coordination across the sessions were investigated by EMG parameters, including EMG activation level and Co-contraction Indexes (CIs) of the target muscles in the upper limb. Significant improvements in the FMA shoulder/elbow and wrist/hand scores ( P  < 0.05), the ARAT ( P  < 0.05), and in the MAS ( P  < 0.05) were observed after the training and sustained 3 months later. The EMG parameters indicated a significant decrease of the muscle activation level in flexor digitorum (FD) and biceps brachii ( P  < 0.05), as well as a significant reduction of CIs in the muscle pairs of FD and triceps brachii and biceps brachii and triceps brachii ( P  < 0.05). The upper-limb training integrated with the assistance from the EMG-driven NMES-robotic hand is effective for the improvements of the voluntary motor functions and the muscle coordination in the proximal and distal joints. Furthermore, the motor improvement after the training could be maintained till 3 months later. ClinicalTrials.gov. NCT02117089; date of registration: April
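
    The Co-contraction Index reported in both records above is commonly computed as the normalized overlap of two EMG activation envelopes. The sketch below shows one common formulation (the time-integral of the minimum of the two envelopes over the trial duration); the paper's exact definition may differ:

    ```python
    def cocontraction_index(emg_a, emg_b, dt=0.001):
        """Co-contraction index between two normalized EMG envelopes
        sampled at interval dt. The instantaneous overlap is the
        smaller of the two activations; integrating it and dividing
        by the trial duration gives a value in [0, 1] when the
        envelopes are normalized to [0, 1]."""
        assert len(emg_a) == len(emg_b), "envelopes must align in time"
        overlap = sum(min(a, b) for a, b in zip(emg_a, emg_b)) * dt
        duration = len(emg_a) * dt
        return overlap / duration
    ```

    A falling CI across training sessions, as reported for the FD/triceps and biceps/triceps pairs, indicates the antagonist muscles co-activate less, i.e. coordination is improving.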

  11. Robots Spur Software That Lends a Hand

    Science.gov (United States)

    2014-01-01

    While building a robot to assist astronauts in space, Johnson Space Center worked with partners to develop robot reasoning and interaction technology. The partners created Robonaut 1, which led to Robonaut 2, and the work also led to patents now held by Universal Robotics in Nashville, Tennessee. The NASA-derived technology is available for use in warehousing, mining, and more.

  12. Robot hands and extravehicular activity

    Science.gov (United States)

    Marcus, Beth

    1987-01-01

    Extravehicular activity (EVA) is crucial to the success of both current and future space operations. As space operations have evolved in complexity so has the demand placed on the EVA crewman. In addition, some NASA requirements for human capabilities at remote or hazardous sites were identified. One of the keys to performing useful EVA tasks is the ability to manipulate objects accurately, quickly and without early or excessive fatigue. The current suit employs a glove which enables the crewman to perform grasping tasks, use tools, turn switches, and perform other tasks for short periods of time. However, the glove's bulk and resistance to motion ultimately causes fatigue. Due to this limitation it may not be possible to meet the productivity requirements that will be placed on the EVA crewman of the future with the current or developmental Extravehicular Mobility Unit (EMU) hardware. In addition, this hardware will not meet the requirements for remote or hazardous operations. In an effort to develop ways for improving crew productivity, a contract was awarded to develop a prototype anthropomorphic robotic hand (ARH) for use with an extravehicular space suit. The first step in this program was to perform a design study which investigated the basic technology required for the development of an ARH to enhance crew performance and productivity. The design study phase of the contract and some additional development work is summarized.

  13. Brain-machine interfaces for controlling lower-limb powered robotic systems

    Science.gov (United States)

    He, Yongtian; Eguren, David; Azorín, José M.; Grossman, Robert G.; Phat Luu, Trieu; Contreras-Vidal, Jose L.

    2018-04-01

    Objective. Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. Approach. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Main results. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. Significance. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. 
Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the
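
    A walk-versus-stand detector of the kind these studies review can be sketched, in highly simplified form, as a band-power threshold on a single EEG channel: mu-band (8-12 Hz) power typically decreases with movement or movement intention. The function names, threshold, and single-channel setup are illustrative assumptions; real BMI pipelines use multichannel features and trained decoders:

    ```python
    import numpy as np

    def band_power(eeg, fs, lo=8.0, hi=12.0):
        """Mean spectral power of an EEG segment in the [lo, hi] Hz
        band, estimated from a periodogram of the real FFT."""
        freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()

    def classify_walk_vs_stand(eeg, fs, threshold):
        """Toy intention detector: mu-band power below a calibrated
        threshold is taken as movement intention ('walk')."""
        return "walk" if band_power(eeg, fs) < threshold else "stand"
    ```

    Even this toy version exposes the review's central concerns: the threshold must be calibrated per user, and artifacts that leak power into the band (blinks, muscle activity) will flip the decision, hence the emphasis on rigorous EEG denoising.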

  14. Fiscal 2000 report on result of R and D on robot system cooperating and coexisting with human beings. R and D on robot system cooperating and coexisting with human beings; 2000 nendo ningen kyocho kyozongata robot system kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    A highly safe and reliable robot is being developed capable of cooperating with human beings and executing complicated operations in a human working/living space. This paper describes the fiscal 2000 results. Development of a robot motion library was continued for the extended task of providing services to people in care houses for the aged by controlling motions of the humanoid robot. A basic design for a personal service system using the humanoid robot was conducted with the aim of nursing assistance, and with the objective of developing a portable-terminal-type tele-operation device. A public and a home cockpit were researched with the purpose of developing user interfaces for telexistence control. A dynamic simulator for humanoid robots was built, with motions of standing up and walking examined, in order to develop basic theories for dual-handed tasks aided by leg-arm cooperative motion. To develop a robot that properly and safely cooperates and coexists with human beings, it is essential to obtain a dynamically reasonable and natural control law, so basic studies were conducted in this direction. With the purpose of developing a motion capture and learning system, a virtual robot platform and an information acquiring interface were developed. Studies were also conducted on modeling techniques for achieving realistic material properties from high-precision image synthesis and actual images. (NEDO)

  15. A Low-Cost Open Source 3D-Printable Dexterous Anthropomorphic Robotic Hand with a Parallel Spherical Joint Wrist for Sign Languages Reproduction

    Directory of Open Access Journals (Sweden)

    Andrea Bulgarelli

    2016-06-01

    Full Text Available We present a novel open-source 3D-printable dexterous anthropomorphic robotic hand specifically designed to reproduce Sign Languages’ hand poses for deaf and deaf-blind users. We improved the InMoov hand, enhancing dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index and middle) and a three-degrees-of-freedom parallel spherical joint wrist. A systematic kinematic analysis is provided. The proposed robotic hand is validated in the framework of the PARLOMA project. PARLOMA aims at developing a telecommunication system for deaf-blind people, enabling remote transmission of signs from tactile Sign Languages. Both hardware and software are provided online to promote further improvements from the community.

  16. Robotic Mission to Mars: Hands-on, minds-on, web-based learning

    Science.gov (United States)

    Mathers, Naomi; Goktogen, Ali; Rankin, John; Anderson, Marion

    2012-11-01

    Problem-based learning has been demonstrated as an effective methodology for developing analytical skills and critical thinking. The use of scenario-based learning incorporates problem-based learning whilst encouraging students to collaborate with their colleagues and dynamically adapt to their environment. This increased interaction stimulates a deeper understanding and the generation of new knowledge. The Victorian Space Science Education Centre (VSSEC) uses scenario-based learning in its Mission to Mars, Mission to the Orbiting Space Laboratory and Primary Expedition to the M.A.R.S. Base programs. These programs utilize methodologies such as hands-on applications, immersive-learning, integrated technologies, critical thinking and mentoring to engage students in Science, Technology, Engineering and Mathematics (STEM) and highlight potential career paths in science and engineering. The immersive nature of the programs demands specialist environments such as a simulated Mars environment, Mission Control and Space Laboratory, thus restricting these programs to a physical location and limiting student access to the programs. To move beyond these limitations, VSSEC worked with its university partners to develop a web-based mission that delivered the benefits of scenario-based learning within a school environment. The Robotic Mission to Mars allows students to remotely control a real rover, developed by the Australian Centre for Field Robotics (ACFR), on the VSSEC Mars surface. After completing a pre-mission training program and site selection activity, students take on the roles of scientists and engineers in Mission Control to complete a mission and collect data for further analysis. Mission Control is established using software developed by the ACRI Games Technology Lab at La Trobe University using the principles of serious gaming. The software allows students to control the rover, monitor its systems and collect scientific data for analysis. 
This program encourages students to engage with STEM subjects and to consider careers in science and engineering.

  17. Introduction to autonomous mobile robotics using Lego Mindstorms NXT

    Science.gov (United States)

    Akın, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-12-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the Lego Mindstorms NXT kits are used as the robot platform. The aims, scope and contents of the course are presented, and the design of the laboratory sessions as well as the term projects, which address several core problems of robotics and artificial intelligence simultaneously, are explained in detail.

  18. Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control

    Science.gov (United States)

    Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.

    2012-04-01

    A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, firefighting, etc. There are mainly two types of BMI, based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features and, finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface on a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration, deceleration or stopping of the robot is controlled based on the attention level of the wearer. In addition, left and right eyeball movements control the direction of the robot.
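    The pipeline described in this record — classify EEG into frequency bands, derive an attention level, and map it to speed commands — can be sketched as follows. The band boundaries, attention index, and thresholds below are illustrative assumptions, not the parameters reported in the work:

```python
import math

# Classical EEG band boundaries in Hz (a common convention; the record
# does not specify the exact boundaries used).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(samples, fs):
    """Per-band signal power via a naive DFT; adequate for short windows."""
    n = len(samples)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):                     # skip DC, positive freqs only
        freq = k * fs / n
        re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        p = (re * re + im * im) / n
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += p
    return powers

def speed_command(samples, fs):
    """Map an attention index (beta power relative to slower rhythms)
    to a locomotion command; thresholds are illustrative."""
    p = band_powers(samples, fs)
    attention = p["beta"] / (p["alpha"] + p["theta"] + p["delta"] + 1e-9)
    if attention > 1.0:
        return "accelerate"
    elif attention > 0.3:
        return "hold"
    return "decelerate"

# Synthetic check: a dominant 20 Hz (beta) rhythm reads as high attention,
# a dominant 10 Hz (alpha) rhythm as low attention.
fs = 128
beta_wave = [math.sin(2 * math.pi * 20 * i / fs) for i in range(fs)]
alpha_wave = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
```

    In a real system the window would slide over streaming EEG, and EOG deflections would be decoded separately for the direction commands.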

  19. Kinematic design of a finger abduction mechanism for an anthropomorphic robotic hand

    Directory of Open Access Journals (Sweden)

    L.-A. A. Demers

    2011-02-01

    Full Text Available This paper presents the kinematic design of an abduction mechanism for the fingers of an underactuated anthropomorphic robotic hand. This mechanism will enhance the range of feasible grasps of the underactuated hand without significantly increasing its complexity. The link between the index finger and the third finger is first analysed, with its parameters studied in order to satisfy the amplitude constraint and to minimize the coordination error. Then, the study of the mechanism joining the third finger and the little finger is summarized. Finally, a prototype of the fingers' abduction system is presented.

    This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010), 19 August 2010, Montréal, Canada.

  20. Hand Passive Mobilization Performed with Robotic Assistance: Acute Effects on Upper Limb Perfusion and Spasticity in Stroke Survivors

    Directory of Open Access Journals (Sweden)

    Massimiliano Gobbo

    2017-01-01

    Full Text Available This single arm pre-post study aimed at evaluating the acute effects induced by a single session of robot-assisted passive hand mobilization on local perfusion and upper limb (UL) function in poststroke hemiparetic participants. Twenty-three patients with subacute or chronic stroke received 20 min passive mobilization of the paretic hand with robotic assistance. Near-infrared spectroscopy (NIRS) was used to detect changes in forearm tissue perfusion. Muscle tone of the paretic UL was assessed by the Modified Ashworth Scale (MAS). Symptoms concerning UL heaviness, joint stiffness, and pain were evaluated as secondary outcomes by self-reporting. Significant (p=0.014) improvements were found in forearm perfusion when all fingers were mobilized simultaneously. After the intervention, MAS scores decreased globally, the changes being statistically significant for the wrist (from 1.6±1.0 to 1.1±1.0; p=0.001) and fingers (from 1.2±1.1 to 0.7±0.9; p=0.004). Subjects reported decreased UL heaviness and stiffness after treatment, especially for the hand, as well as diminished pain when present. This study supports novel evidence that hand robotic assistance promotes local UL circulation changes, may help in the management of spasticity, and acutely alleviates reported symptoms of heaviness, stiffness, and pain in subjects with poststroke hemiparesis. This opens new scenarios for the implications in everyday clinical practice. Clinical Trial Registration Number is NCT03243123.

  1. Hand Passive Mobilization Performed with Robotic Assistance: Acute Effects on Upper Limb Perfusion and Spasticity in Stroke Survivors.

    Science.gov (United States)

    Gobbo, Massimiliano; Gaffurini, Paolo; Vacchi, Laura; Lazzarini, Sara; Villafane, Jorge; Orizio, Claudio; Negrini, Stefano; Bissolotti, Luciano

    2017-01-01

    This single arm pre-post study aimed at evaluating the acute effects induced by a single session of robot-assisted passive hand mobilization on local perfusion and upper limb (UL) function in poststroke hemiparetic participants. Twenty-three patients with subacute or chronic stroke received 20 min passive mobilization of the paretic hand with robotic assistance. Near-infrared spectroscopy (NIRS) was used to detect changes in forearm tissue perfusion. Muscle tone of the paretic UL was assessed by the Modified Ashworth Scale (MAS). Symptoms concerning UL heaviness, joint stiffness, and pain were evaluated as secondary outcomes by self-reporting. Significant (p = 0.014) improvements were found in forearm perfusion when all fingers were mobilized simultaneously. After the intervention, MAS scores decreased globally, the changes being statistically significant for the wrist (from 1.6 ± 1.0 to 1.1 ± 1.0; p = 0.001) and fingers (from 1.2 ± 1.1 to 0.7 ± 0.9; p = 0.004). Subjects reported decreased UL heaviness and stiffness after treatment, especially for the hand, as well as diminished pain when present. This study supports novel evidence that hand robotic assistance promotes local UL circulation changes, may help in the management of spasticity, and acutely alleviates reported symptoms of heaviness, stiffness, and pain in subjects with poststroke hemiparesis. This opens new scenarios for the implications in everyday clinical practice. Clinical Trial Registration Number is NCT03243123.

  2. Grasp specific and user friendly interface design for myoelectric hand prostheses.

    Science.gov (United States)

    Mohammadi, Alireza; Lavranos, Jim; Howe, Rob; Choong, Peter; Oetomo, Denny

    2017-07-01

    This paper presents the design and characterisation of a hand prosthesis and its user interface, focusing on performing the most commonly used grasps in activities of daily living (ADLs). Since the operation of a multi-articulated powered hand prosthesis is difficult to learn and master, there is a significant rate of abandonment by amputees in preference for simpler devices. In doing so, amputees choose to live with fewer features in a prosthesis that will more reliably perform the basic operations. In this paper, we look simultaneously at a hand prosthesis design method that aims for a small number of grasps, a low-complexity user interface, and an alternative to the current use of EMG for preshape selection, namely a simple button, to enable amputees to reach and execute the intended hand movements intuitively, quickly and reliably. An experiment is reported at the end of the paper comparing the speed and accuracy with which able-bodied naive subjects are able to select the intended preshapes through a simplified EMG method versus a simple button. It is shown that the button was significantly superior in the speed of successful task completion and marginally superior in accuracy (success on first attempt).

  3. Acropolis: A Fast Prototyping Robotic Application

    Directory of Open Access Journals (Sweden)

    Vincent Zalzal

    2009-03-01

    Full Text Available Acropolis is an open source middleware robotic framework for fast software prototyping and reuse of program codes. It is made up of a core software and a collection of several extension modules called plugins. Each plugin encapsulates a specific functionality needed for robotic applications. To design a robot behavior, a circuit of the involved plugins is built with a graphical user interface. A high degree of decoupling between components and a graph-based representation allow the user to build complex robot behaviors with minimal need for code writing. In addition, the Acropolis core is hardware platform independent. Well-known design patterns and layered software architecture are its key features. Through the description of three applications, we illustrate some of its usability.

  4. A prototype home robot with an ambient facial interface to improve drug compliance.

    Science.gov (United States)

    Takacs, Barnabas; Hanak, David

    2008-01-01

    We have developed a prototype home robot to improve drug compliance. The robot is a small mobile device, capable of autonomous behaviour, as well as remotely controlled operation via a wireless datalink. The robot is capable of face detection and also has a display screen to provide facial feedback to help motivate patients and thus increase their level of compliance. An RFID reader can identify tags attached to different objects, such as bottles, for fluid intake monitoring. A tablet dispenser allows drug compliance monitoring. Despite some limitations, experience with the prototype suggests that simple and low-cost robots may soon become feasible for care of people living alone or in isolation.

  5. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2017-10-01

    Full Text Available Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks, performed while avoiding an obstacle in the workspace, were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects with AR feedback was reduced significantly, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to those trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes.
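    The hybrid selection logic in such a Gaze-BMI — gaze dwell proposes a target, an EEG-derived trigger confirms it — can be illustrated with a toy state machine. The class name, dwell criterion, and command format are hypothetical, not taken from the study:

```python
class GazeBmiController:
    """Toy closed-loop selector: gaze proposes a target, a boolean
    EEG trigger confirms it. The dwell criterion is illustrative."""

    def __init__(self, dwell_frames=3):
        self.dwell_frames = dwell_frames
        self._candidate = None
        self._dwell = 0

    def update(self, gazed_object, eeg_trigger):
        """Feed one frame of (gaze target, EEG trigger flag).
        Returns a command for the arm, or None."""
        if gazed_object != self._candidate:
            # Gaze moved: restart the dwell count on the new candidate.
            self._candidate, self._dwell = gazed_object, 0
            return None
        self._dwell += 1
        # Command only when the gaze has dwelt long enough AND the BMI
        # trigger fires, filtering out unintentional fixations.
        if self._dwell >= self.dwell_frames and eeg_trigger and gazed_object:
            self._dwell = 0
            return f"grasp:{gazed_object}"
        return None
```

    Requiring both channels to agree is what gives the hybrid interface its robustness: neither a stray fixation nor a spurious EEG trigger alone issues a command.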

  6. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    Science.gov (United States)

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks, performed while avoiding an obstacle in the workspace, were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects with AR feedback was reduced significantly, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to those trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123

  7. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of robotic manipulator controlled with help of visual feedback by new method reducing position errors by order of magnitude. Robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of workspace and two mounted on wrist of auxiliary robot arm. Stereoscopic cameras on arm give close-up views of object and end effector. Cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  8. Hand-eye coordination of a robot for the automatic inspection of steam-generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, D.H.; Song, Y.C.; Kim, J.H.; Kim, J.G.

    2004-01-01

    The inspection of steam-generator tubes in nuclear power plants requires collecting test signals in a highly radiated region that is not accessible to humans. In general, a robot equipped with a camera and a test probe is used to handle such a dangerous environment. The robot moves the probe to right below the tube to be inspected and then the probe is inserted into the tube. The inspection signals are acquired while the probe is pulled back. Currently, an operator in a control room controls the whole process remotely. To make a fully automatic inspection system, a control mechanism is needed, first of all, to position the probe at the proper location. This is the so-called hand-eye coordination problem. In this paper, a hand-eye coordination method for such a robot is presented. The proposed method consists of two consecutive control modes: rough positioning and fine-tuning. The rough positioning controller positions the probe near a target place using kinematics information and the known environment, and then the fine-tuning controller adjusts the probe to the target using the image acquired by the camera attached to the robot. The usefulness of the proposed method has been tested and verified through experiments. (orig.)
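    The two consecutive control modes can be sketched as a kinematics-based rough move followed by an iterative, camera-driven correction (a simple visual-servoing loop). The gain, tolerance, and helper callables below are illustrative assumptions, not the paper's actual controller:

```python
def rough_position(target, kinematic_model):
    """Mode 1: move near the target using the known tube-sheet map.
    Open loop, so a residual positioning error remains."""
    return kinematic_model(target)

def fine_tune(position, target, measure_error, gain=0.5, tol=0.01, max_iter=50):
    """Mode 2: iteratively correct the probe position from the
    camera-measured error until it falls within tolerance."""
    for _ in range(max_iter):
        ex, ey = measure_error(position, target)
        if (ex * ex + ey * ey) ** 0.5 < tol:
            break
        position = (position[0] + gain * ex, position[1] + gain * ey)
    return position

# Illustrative run: the kinematic map lands a couple of millimetres off;
# the vision loop closes the remaining gap.
target = (10.0, 5.0)
model = lambda t: (t[0] - 0.2, t[1] + 0.15)        # biased kinematic map
error = lambda p, t: (t[0] - p[0], t[1] - p[1])    # idealized camera measurement
probe = fine_tune(rough_position(target, model), target, error)
```

    With a gain below 1 the measured error shrinks geometrically each cycle, so the loop converges even when the camera-to-joint mapping is only approximately known.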

  9. Hand-held multi-DOF robotic forceps for neurosurgery designed for dexterous manipulation in deep and narrow space.

    Science.gov (United States)

    Okubo, Takuro; Harada, Kanako; Fujii, Masahiro; Tanaka, Shinichi; Ishimaru, Tetsuya; Iwanaka, Tadashi; Nakatomi, Hirohumi; Sora, Sigeo; Morita, Akio; Sugita, Naohiko; Mitsuishi, Mamoru

    2014-01-01

    Neurosurgical procedures require precise and dexterous manipulation of a surgical suture in narrow and deep spaces in the brain. This is necessary for surgical tasks such as the anastomosis of microscopic blood vessels and dura mater suturing. A hand-held multi-degree of freedom (DOF) robotic forceps was developed to aid the performance of such difficult tasks. The diameter of the developed robotic forceps is 3.5 mm, and its tip has three DOFs, namely, bending, rotation, and grip. Experimental results showed that the robotic forceps had an average needle insertion force of 1.7 N. Therefore, an increase in the needle insertion force is necessary for practical application of the developed device.

  10. R4SA for Controlling Robots

    Science.gov (United States)

    Aghazarian, Hrand

    2009-01-01

    The R4SA GUI mentioned in the immediately preceding article is a user-friendly interface for controlling one or more robots. This GUI makes it possible to perform meaningful real-time field experiments and research in robotics at an unmatched level of fidelity, within minutes of setup. It provides such powerful graphing modes as that of a digitizing oscilloscope that displays up to 250 variables at rates between 1 and 200 Hz. This GUI can be configured as multiple intuitive interfaces for acquisition of data, command, and control to enable rapid testing of subsystems or an entire robot system while simultaneously performing analysis of data. The R4SA software establishes an intuitive component-based design environment that can be easily reconfigured for any robotic platform by creating or editing setup configuration files. The R4SA GUI enables event-driven and conditional sequencing similar to those of Mars Exploration Rover (MER) operations. It has been certified as part of the MER ground support equipment and, therefore, is allowed to be utilized in conjunction with MER flight hardware. The R4SA GUI could also be adapted for use in embedded computing systems other than that of the MER, for commanding and real-time analysis of data.

  11. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons on metrics including task performance and safety (e.g., avoidance of human-robot collision), and the differences in the lexicons are analyzed.

  12. Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.

    Science.gov (United States)

    Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C

    2012-01-01

    Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states (94%) to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
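    The key idea — an actor updated by the critic's temporal-difference error from only a scalar feedback signal, with no supervised training set — can be sketched on a toy two-target task. The tabular state encoding and learning rates are illustrative assumptions; the study decoded real neural ensemble activity:

```python
import math
import random

random.seed(0)

# Tabular actor-critic for a toy two-target decision task: the "neural
# state" is reduced to a cue indicating which target appeared, and the
# correct action is to reach toward that target.
n_states, n_actions = 2, 2
prefs = [[0.0] * n_actions for _ in range(n_states)]   # actor: action preferences
values = [0.0] * n_states                              # critic: state values
alpha_actor, alpha_critic = 0.2, 0.2

def sample_action(state):
    """Softmax policy over the actor's action preferences."""
    exps = [math.exp(p) for p in prefs[state]]
    return 0 if random.random() < exps[0] / sum(exps) else 1

for _ in range(500):
    state = random.randrange(n_states)
    action = sample_action(state)
    reward = 1.0 if action == state else 0.0        # only a scalar feedback signal
    td_error = reward - values[state]               # critic evaluates the outcome
    values[state] += alpha_critic * td_error
    prefs[state][action] += alpha_actor * td_error  # actor reinforced by the critic

# Greedy policy after incremental adaptation (no explicit training set).
greedy = [max(range(n_actions), key=lambda a: prefs[s][a]) for s in range(n_states)]
```

    Because each update moves the model only as far as the critic's surprise, the same loop keeps adapting if the reward contingencies change, which is the robustness property the abstract highlights.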

  13. Graphical programming: On-line robot simulation for telerobotic control

    International Nuclear Information System (INIS)

    McDonald, M.J.; Palmquist, R.D.

    1993-01-01

    Sandia has developed an advanced operational control system approach, called Graphical Programming, to design and operate robotic waste cleanup and other hazardous duty robotic systems. The Graphical Programming approach produces robot systems that are faster to develop and use, safer in operation, and cheaper overall than alternative teleoperation or autonomous robot control systems. The Graphical Programming approach uses 3-D visualization and simulation software with intuitive operator interfaces for the programming and control of complex robotic systems. Graphical Programming Supervisor software modules allow an operator to command and simulate complex tasks in a graphic preview mode and, when acceptable, command the actual robots and monitor their motions with the graphic system. Graphical Programming Supervisors maintain registration with the real world and allow the robot to perform tasks that cannot be accurately represented with models alone by using a combination of model- and sensor-based control. This paper describes the Graphical Programming approach, several example control systems that use Graphical Programming, and key features necessary for implementing successful Graphical Programming systems.
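    The preview-then-commit workflow of a Graphical Programming Supervisor can be illustrated with a minimal sketch: a task is first run against the simulation, and only dispatched to hardware if the simulated motion passes a safety check. The class name and callback signatures are hypothetical:

```python
class GraphicalProgrammingSupervisor:
    """Toy preview-then-commit pattern: simulate a task in a graphic
    preview, then command the real robot only if the preview is safe."""

    def __init__(self, simulate, execute):
        self.simulate = simulate   # task -> list of simulated poses
        self.execute = execute     # task -> commands the real robot

    def command(self, task, is_safe):
        preview = self.simulate(task)          # graphic preview mode
        if all(is_safe(pose) for pose in preview):
            self.execute(task)                 # commit to the actual robot
            return "executed"
        return "rejected"                      # operator must revise the task
```

    For example, a supervisor built with a simulator returning the poses `[1, 2, 3]` executes a task when every pose passes the safety predicate and rejects it otherwise, leaving the real robot untouched.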

  14. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural and fluent human-machine interaction.

  15. FY 1999 project on the development of new industry support type international standards. Standardization of a method to evaluate the performance of open robot use communication interface in production system, etc.; 1999 nendo shinki sangyo shiengata kokusai hyojun kaihatsu jigyo seika hokokusho. Seisan system nado ni okeru open robot yo tsushin interface no hyojunka

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    For the purpose of standardizing the communication interface between personal computers and robots, R and D was conducted on ORiN (Open Robot Interface for the Network), and the FY 1999 results were summed up. ORiN is composed of the provider part, the kernel part and the application logic part. The provider absorbs the differences in data representations and protocols among each company's robot controllers and conveys the data to the kernel part. The kernel part is composed of RAO and RDF. RAO adopts the distributed object model DCOM technology and supplies network transparency and a common access method to the robot. RDF supplies files describing robot structure models using XML. In this way, ORiN was made extensible for the future, accommodating the differences among robots. At the International Robot Exhibition held on October 26-29, 1999, a prototype of ORiN was jointly demonstrated by the participating companies. (NEDO)

  16. Design and validation of low-cost assistive glove for hand assessment and therapy during activity of daily living-focused robotic stroke therapy.

    Science.gov (United States)

    Nathan, Dominic E; Johnson, Michelle J; McGuire, John R

    2009-01-01

    Hand and arm impairment is common after stroke. Robotic stroke therapy will be more effective if hand and upper-arm training is integrated to help users practice reaching and grasping tasks. This article presents the design, development, and validation of a low-cost, functional electrical stimulation grasp-assistive glove for use with task-oriented robotic stroke therapy. Our glove measures grasp aperture while a user completes simple-to-complex real-life activities, and when combined with an integrated functional electrical stimulator, it assists in hand opening and closing. A key function is a new grasp-aperture prediction model, which uses the position of the end-effectors of two planar robots to define the distance between the thumb and index finger. We validated the accuracy and repeatability of the glove and its capability to assist in grasping. Results from five nondisabled subjects indicated that the glove is accurate and repeatable for both static hand-open and -closed tasks when compared with goniometric measures and for dynamic reach-to-grasp tasks when compared with motion analysis measures. Results from five subjects with stroke showed that with the glove, they could open their hands but without it could not. We present a glove that is a low-cost solution for in vivo grasp measurement and assistance.
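    The grasp-aperture prediction model described above infers the thumb-index distance from the positions of the two planar robots' end-effectors; in the simplest case this reduces to a Euclidean distance. A minimal sketch, with an illustrative FES trigger threshold (the function names, units, and margin are assumptions, not the article's actual model):

```python
import math

def grasp_aperture(thumb_xy, index_xy):
    """Estimated thumb-index aperture (same units as the inputs) from the
    two planar robots' end-effector positions."""
    dx = thumb_xy[0] - index_xy[0]
    dy = thumb_xy[1] - index_xy[1]
    return math.hypot(dx, dy)

def fes_assist(aperture, object_width, margin=0.01):
    """Illustrative trigger: stimulate hand opening until the measured
    aperture clears the object width plus a safety margin, then allow
    the hand to close around the object."""
    return "stimulate_open" if aperture < object_width + margin else "allow_close"
```

    In the article's setup, calibration against goniometric and motion-analysis measures is what lets a model of this form stand in for direct finger tracking during reach-to-grasp tasks.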

  17. An Ultralightweight and Living Legged Robot.

    Science.gov (United States)

    Vo Doan, Tat Thang; Tan, Melvin Y W; Bui, Xuan Hien; Sato, Hirotaka

    2018-02-01

    In this study, we describe the most ultralightweight living legged robot to date, which makes it a strong candidate for search and rescue missions. The robot is a living beetle with a wireless electronic backpack stimulator mounted on its thorax. Inheriting from the living insect, the robot employs a compliant body made of soft actuators, rigid exoskeletons, and flexure hinges. Such a structure allows the robot to easily adapt to complex terrain thanks to the benefits of a soft interface, self-balance, and the self-adaptation of the insect, without any complex controller. Antenna stimulation enables the robot to perform not only left/right turning but also backward walking and even cessation of walking. We were also able to grade the turning and backward walking speeds by changing the stimulation frequency. The power required to drive the robot is low, as the power consumption of the antenna stimulation is on the order of hundreds of microwatts. In contrast to traditional legged robots, this robot is low cost, easy to construct, simple to control, and has ultralow power consumption.

  18. Functional results of robotic total intersphincteric resection with hand-sewn coloanal anastomosis.

    Science.gov (United States)

    Luca, F; Valvo, M; Guerra-Cogorno, M; Simo, D; Blesa-Sierra, E; Biffi, R; Garberoglio, C

    2016-06-01

    In recent decades there has been an increasing trend toward sphincter-preserving procedures for the treatment of low rectal cancer. Robotic surgery is considered to be particularly beneficial when operating in the deep pelvis, where laparoscopy presents technical limitations. The aim of this study was to prospectively evaluate the functional outcomes in patients affected by rectal cancer after robotic total intersphincteric resection (ISR) with hand-sewn coloanal anastomosis. From March 2008 to October 2012, 23 consecutive patients affected by distal rectal adenocarcinoma underwent robotic ISR. Operative, clinical, pathological and functional data regarding continence or presence of a low anterior resection syndrome (LARS) were prospectively collected in a database. Twenty-three consecutive patients were included in the study: 8 men and 15 women. The mean age was 60.2 years (range 28-73). Eighteen (78.3%) had neoadjuvant radiochemotherapy. Conversion rate was nil. The mean operative time was 296.01 min and the mean postoperative hospital stay was 7.43 ± 1.73 days. According to Kirwan's incontinence score, good fecal continence was shown in 85.7% of patients (Grade 1 and 2) and none required a colostomy (Grade 4). Concerning LARS score, the results were as follows: 57.1% patients had no LARS; 19% minor LARS and 23.8% major LARS. Robotic total ISR for low rectal cancer is an acceptable alternative to traditional procedures. Extensive discussion with the patient about the risk of poor functional outcomes or LARS syndrome is mandatory when considering an ISR for treatment of low rectal cancer. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Development of sensor system built into a robot hand toward environmental monitoring

    International Nuclear Information System (INIS)

    Kaneko, Kenji; Ueshiba, Toshio; Yoshimi, Takashi; Kawai, Yoshihiro; Morisawa, Mitsuharu; Kanehiro, Fumio; Yokoi, Kazuhito

    2015-01-01

    The development of a sensor system built into the hand of a humanoid robot for environmental monitoring is presented in this paper. The developed system consists of a color C-MOS camera, a laser projector with a lens that distributes the laser light, and an LED projector. The sensor system can activate or disable these components according to the purpose. This paper introduces the design process, preliminary experimental results for evaluating the components, and the specifications of the developed sensor system, together with experimental results. (author)

  20. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    Science.gov (United States)

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to poor control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a medium accuracy level above 0.83 on the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
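The serial auditory interface described here cycles through the four navigation commands, and the user selects the one currently offered by performing the motor-imagery task (the idle state skips to the next). A toy sketch of that selection loop, with a hypothetical `classify` callback standing in for the actual sensorimotor-rhythm classifier:

```python
def serial_select(commands, classify, epochs):
    """Serial selection loop: commands are offered one at a time (e.g. as
    auditory cues).  Performing the motor-imagery task (classify(...) is
    True) picks the command currently offered; the idle state skips to the
    next.  `classify` and the epoch format are illustrative assumptions."""
    for i, epoch in enumerate(epochs):
        offered = commands[i % len(commands)]  # cue currently presented
        if classify(epoch):
            return offered
    return None  # no selection within the presented epochs
```

With four commands and a classifier that fires on the third epoch, the third cue in the cycle is selected.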

  1. Development of a robotic evaluation system for the ability of proprioceptive sensation in slow hand motion.

    Science.gov (United States)

    Tanaka, Yoshiyuki; Mizoe, Genki; Kawaguchi, Tomohiro

    2015-01-01

    This paper proposes a simple diagnostic methodology for assessing the ability of proprioceptive/kinesthetic sensation by using a robotic device. The perception of virtual frictional forces is examined during operation of the robotic device by the hand at a uniform slow velocity along a virtual straight/circular path. Experimental results from healthy subjects demonstrate that the percentage of correct answers in the designed perceptual tests changes with the motion direction as well as with the arm configuration and the HFM (human force manipulability) measure. The proposed methodology could thus be applied to the early detection of neuromuscular/neurological disorders.

  2. Presentation robot Advee

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, Stanislav; Hrbáček, J.; Ripel, T.; Ondroušek, V.; Hrbáček, R.; Schreiber, P.

    2012-01-01

    Roč. 18, 5/6 (2012), s. 307-322 ISSN 1802-1484 Institutional research plan: CEZ:AV0Z20760514 Keywords : mobile robot * human-robot interface * localization Subject RIV: JD - Computer Applications, Robotics

  3. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. At first, the face is removed from the image by using a Haar classifier and, subsequently, the regions corresponding to the gesturing hands are isolated by a region labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.
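The front of the pipeline (skin-color segmentation followed by region labeling) can be sketched in a few lines. The CrCb thresholds below are commonly used defaults, not the paper's values, and the labeling is a plain 4-connected flood fill rather than whatever specific technique the authors used:

```python
import numpy as np
from collections import deque

def skin_mask(cr, cb, cr_range=(133, 173), cb_range=(77, 127)):
    """Skin-color mask in the CrCb plane.  The threshold ranges are
    widely used defaults, assumed here; the paper's values are not given."""
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))

def label_regions(mask):
    """4-connected component labeling of a boolean mask via BFS flood
    fill; returns (label image, number of regions found)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```

In the full method, the face region would be removed (e.g. by a Haar detector) before labeling, leaving the hand regions as the remaining skin components.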

  4. Graphical interface between the CIRSSE testbed and CimStation software with MCS/CTOS

    Science.gov (United States)

    Hron, Anna B.

    1992-01-01

    This research is concerned with developing a graphical simulation of the testbed at the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) and the interface which allows for communication between the two. Such an interface is useful in telerobotic operations and as a functional interaction tool for testbed users. Creating a simulated model of a real-world system generates inevitable calibration discrepancies between them. This thesis gives a brief overview of the work done to date in the area of workcell representation and communication, describes the development of the CIRSSE interface, and gives a direction for future work in the area of system calibration. The CimStation software used for development of this interface is a highly versatile robotic workcell simulation package which has been programmed for this application with a scale graphical model of the testbed and supporting interface menu code. A need for this tool has been identified for path previewing, as a window on teleoperation, and for calibration of simulated vs. real-world models. The interface allows information (i.e., joint angles) generated by CimStation to be sent as motion goal positions to the testbed robots. An option of the interface allows joint angle information generated by supporting testbed algorithms (i.e., TG, collision avoidance) to be piped through CimStation as a visual preview of the path.

  5. Air Muscle Actuated Low Cost Humanoid Hand

    Directory of Open Access Journals (Sweden)

    Peter Scarfe

    2008-11-01

    Full Text Available The control of humanoid robot hands has historically been expensive due to the cost of precision actuators. This paper presents the design and implementation of a low-cost air muscle actuated humanoid hand developed at Curtin University of Technology. This hand offers 10 individually controllable degrees of freedom ranging from the elbow to the fingers, with overall control handled through a computer GUI. The hand is actuated through 20 McKibben-style air muscles, each supplied by a pneumatic pressure-balancing valve that allows for proportional control to be achieved with simple and inexpensive components. The hand was successfully able to perform a number of human-equivalent tasks, such as grasping and relocating objects.

  7. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  8. Preliminary Findings of Feasibility of a Wearable Soft-robotic Glove Supporting Impaired Hand Function in Daily Life

    NARCIS (Netherlands)

    Radder, Bob; Radder, B.; Prange, Grada Berendina; Prange-Lasonder, G.B.; Kottink, A.I.R.; Gaasbeek, L.; Holmberg, J.; Meyer, T.; Buurke, Jaap; Rietman, Johan Swanik

    2016-01-01

    Elderly people frequently encounter difficulties in independently performing activities of daily living (ADL) due to reduced hand function. Robotic assistive devices have the potential to provide the assistance that is necessary to perform ADL independently, without the need for personal assistance.

  9. Integration of advanced teleoperation technologies for control of space robots

    Science.gov (United States)

    Stagnaro, Michael J.

    1993-01-01

    Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between human brain and robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control link time delays. Systems developed from this testbed could be applied to ground control of space based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots with science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.

  10. Optimal grasp planning for a dexterous robotic hand using the volume of a generalized force ellipsoid during accepted flattening

    Directory of Open Access Journals (Sweden)

    Peng Jia

    2017-01-01

    Full Text Available A grasp planning method based on the volume and flattening of a generalized force ellipsoid is proposed to improve the grasping ability of a dexterous robotic hand. First, according to the general solution of joint torques for a dexterous robotic hand, a grasping indicator for the dexterous hand—the maximum volume of a generalized external force ellipsoid and the minimum volume of a generalized contact internal force ellipsoid during accepted flattening—is proposed. Second, an optimal grasp planning method based on a task is established using the grasping indicator as an objective function. Finally, a simulation analysis and grasping experiment are performed. Results show that when the grasping experiment is conducted with the grasping configuration and positions of contact points optimized using the proposed grasping indicator, the root-mean-square values of the joint torques and contact internal forces of the dexterous hand are at a minimum. The effectiveness of the proposed grasping planning method is thus demonstrated.
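For a standard (non-generalized) force ellipsoid, the volume follows directly from the manipulator Jacobian: unit-norm joint torques map, via τ = Jᵀf, to end-effector forces satisfying fᵀ(JJᵀ)f ≤ 1, whose volume is proportional to 1/√det(JJᵀ). The paper's generalized ellipsoids additionally account for contact internal forces; the sketch below covers only the standard simplification:

```python
import numpy as np

def force_ellipsoid_volume(J):
    """Relative volume of the force ellipsoid {f : f^T (J J^T) f <= 1}
    for a manipulator Jacobian J (task dim m x joint dim n, m <= n).
    The volume is proportional to 1 / sqrt(det(J J^T)); the unit-ball
    constant is omitted.  This is the standard ellipsoid, not the
    paper's generalized (contact internal force) version."""
    gram = J @ J.T
    d = np.linalg.det(gram)
    if d <= 0:
        return float('inf')  # singular: unbounded force in some direction
    return 1.0 / np.sqrt(d)
```

Scaling one task direction of J by 2 halves the force-ellipsoid volume, matching the intuition that a mechanically advantaged direction trades force capacity for speed.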

  11. Physical interface dynamics alter how robotic exosuits augment human movement: implications for optimizing wearable assistive devices.

    Science.gov (United States)

    Yandell, Matthew B; Quinlivan, Brendan T; Popov, Dmitry; Walsh, Conor; Zelik, Karl E

    2017-05-18

    Wearable assistive devices have demonstrated the potential to improve mobility outcomes for individuals with disabilities, and to augment healthy human performance; however, these benefits depend on how effectively power is transmitted from the device to the human user. Quantifying and understanding this power transmission is challenging due to complex human-device interface dynamics that occur as biological tissues and physical interface materials deform and displace under load, absorbing and returning power. Here we introduce a new methodology for quickly estimating interface power dynamics during movement tasks using common motion capture and force measurements, and then apply this method to quantify how a soft robotic ankle exosuit interacts with and transfers power to the human body during walking. We partition exosuit end-effector power (i.e., power output from the device) into power that augments ankle plantarflexion (termed augmentation power) vs. power that goes into deformation and motion of interface materials and underlying soft tissues (termed interface power). We provide empirical evidence of how human-exosuit interfaces absorb and return energy, reshaping exosuit-to-human power flow and resulting in three key consequences: (i) During exosuit loading (as applied forces increased), about 55% of exosuit end-effector power was absorbed into the interfaces. (ii) However, during subsequent exosuit unloading (as applied forces decreased) most of the absorbed interface power was returned viscoelastically. Consequently, the majority (about 75%) of exosuit end-effector work over each stride contributed to augmenting ankle plantarflexion. (iii) Ankle augmentation power (and work) was delayed relative to exosuit end-effector power, due to these interface energy absorption and return dynamics. Our findings elucidate the complexities of human-exosuit interface dynamics during transmission of power from assistive devices to the human body, and provide insight into
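The power partition described above can be illustrated numerically: under a series-interface assumption, the same applied force acts through the end-effector velocity and the (smaller) velocity actually reaching the anatomical joint, so interface power is the force times the velocity difference. Variable names and the series assumption are illustrative, not the paper's exact formulation:

```python
import numpy as np

def partition_power(force, v_end_effector, v_body):
    """Partition exosuit end-effector power into augmentation power
    (delivered to the biological joint) and interface power (absorbed by,
    and later returned from, interface materials and soft tissue).
    Series-interface assumption: the same force acts through both
    velocities, so P_interface = F * (v_ee - v_body)."""
    force = np.asarray(force, float)
    p_end_effector = force * np.asarray(v_end_effector, float)
    p_augmentation = force * np.asarray(v_body, float)
    p_interface = p_end_effector - p_augmentation
    return p_end_effector, p_augmentation, p_interface
```

With a 10 N force, a 0.2 m/s end-effector velocity and only 0.1 m/s at the joint, half of the 2 W delivered is momentarily absorbed at the interface.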

  12. Virtual and Remote Robotic Laboratory Using EJS, MATLAB and LabVIEW

    Directory of Open Access Journals (Sweden)

    Jose Antonio Lopez-Orozco

    2013-02-01

    Full Text Available This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory or with a real robot (remote laboratory, with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.

  13. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    Science.gov (United States)

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows the user to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.

  14. Translational control of a graphically simulated robot arm by kinematic rate equations that overcome elbow joint singularity

    Science.gov (United States)

    Barker, L. K.; Houck, J. A.; Carzoo, S. W.

    1984-01-01

    An operator commands a robot hand to move in a certain direction relative to its own axis system by specifying a velocity in that direction. This velocity command is then resolved into individual joint rotational velocities in the robot arm to effect the motion. However, the usual resolved-rate equations become singular when the robot arm is straightened. To overcome this elbow joint singularity, equations were developed which allow continued translational control of the robot hand even though the robot arm is (or is nearly) fully extended. A feature of the equations near full arm extension is that an operator simply extends and retracts the robot arm to reverse the direction of the elbow bend (difficult maneuver for the usual resolved-rate equations). Results show successful movement of a graphically simulated robot arm.
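The abstract does not reproduce the paper's singularity-handling equations; a standard alternative that likewise keeps joint rates bounded at the elbow singularity is the damped least-squares resolved-rate law, sketched here for a two-link planar arm (link lengths and damping factor are illustrative):

```python
import numpy as np

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Translational Jacobian of a planar two-link arm (shoulder q1,
    elbow q2); singular when the arm is fully extended (q2 = 0)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def resolved_rate(q1, q2, v, damping=0.05):
    """Damped least-squares resolved-rate step:
    qdot = J^T (J J^T + lambda^2 I)^-1 v.  Near the singularity the
    damping bounds the joint rates, unlike the plain inverse-Jacobian
    law; this is a standard technique, not the paper's own equations."""
    J = jacobian_2link(q1, q2)
    reg = J @ J.T + (damping ** 2) * np.eye(2)
    return J.T @ np.linalg.solve(reg, np.asarray(v, float))
```

Away from the singularity (small damping) the commanded hand velocity is recovered almost exactly; at full extension the joint rates stay finite instead of blowing up.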

  15. Brain Computer Interface for Micro-controller Driven Robot Based on Emotiv Sensors

    OpenAIRE

    Parth Gargava; Krishna Asawa

    2017-01-01

    A Brain Computer Interface (BCI) is developed to navigate a micro-controller based robot using Emotiv sensors. The BCI system has a pipeline of 5 stages: signal acquisition, pre-processing, feature extraction, classification and CUDA interfacing. It is intended to serve as a prototype for aiding physical movement of neurological patients who are unable to control their muscular movements. All stages of the pipeline are designed to process bodily actions like eye blinks to command naviga...
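The pipeline turns bodily actions such as eye blinks into navigation commands. A toy amplitude-threshold blink detector illustrates the idea; the threshold, channel choice, and refractory period are illustrative assumptions, not the system's actual preprocessing, features, or classifier:

```python
def detect_blinks(signal, threshold=100.0, refractory=50):
    """Toy eye-blink detector for a single EEG/EOG channel: a blink is
    flagged when a sample exceeds `threshold`, with a `refractory` gap
    (in samples) so one blink is not counted twice.  All parameters are
    illustrative, not the Emotiv pipeline's actual values."""
    blinks, last = [], -refractory - 1
    for i, x in enumerate(signal):
        if x > threshold and i - last > refractory:
            blinks.append(i)
            last = i
    return blinks
```

In a full system, each detected blink (or blink pattern) would then be mapped to a robot navigation command.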

  16. User requirements for assistance of the supporting hand in bimanual daily activities via a robotic glove for severely affected stroke patients

    NARCIS (Netherlands)

    Prange, Grada Berendina; Smulders, Laura Cornelia; Smulders, L.C.; van Wijngaarden, J.; Lijbers, G.J.; Nijenhuis, Sharon Maria; Veltink, Petrus H.; Buurke, Jaap; Stienen, Arno; Braun, D.; Yu, H.; Campolo, D.

    2015-01-01

    For independent functioning in activities of daily life (ADL), proper hand function is paramount. Many stroke patients have a reduced ability to grasp and handle objects, as they do not fully recover functional use of the arm and hand, even after extensive (robotic) training. These patients may

  17. Nuclear robotics and remote handling at Harwell Laboratory

    International Nuclear Information System (INIS)

    Abel, E.; Brown, M.H.; Fischer, P.J.; Garlick, D.R.; Hanna, T.T.; Siva, K.V.

    1988-01-01

    After reviewing robotics technology and its possible application in nuclear remote handling systems of the future, six main research topics were identified where particular effort should be made. The Harwell Nuclear Robotics Programme is currently establishing sets of demonstration hardware which will allow generic research to be carried out on telerobotics, systems integration, the man-machine interface, communications, servo systems and radiation tolerance. The objectives of the demonstrators are to allow validation of the techniques required for successful active facility applications such as decommissioning, decontamination, refurbishment, maintenance and repair, and to act as training aids to encourage plant designers and operators to adopt developments in new technology. (author)

  18. Central Vehicle Dynamics Control of the Robotic Research Platform ROboMObil

    OpenAIRE

    Bünte, Tilman; Ho, Lok Man; Satzger, Clemens; Brembeck, Jonathan

    2014-01-01

    The ROboMObil is DLR's space-robotics-driven, by-wire electro-mobile research platform for mechatronic actuators, vehicle dynamics control, human-machine interfaces, and autonomous driving (DLR = German Aerospace Center). Due to its four highly integrated, identical Wheel Robots it exhibits extraordinary manoeuvrability, even allowing for driving sideways or rotating on the spot. Topics related to vehicle dynamics control are addressed in this article.

  19. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectrically controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures; the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected and these gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, reliability is poor.
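Per-gesture sensitivity and specificity follow directly from the confusion matrix. The exact PNM formula is not given in the abstract, so the ranking below uses the mean of sensitivity and specificity as a simple stand-in:

```python
import numpy as np

def sens_spec(cm):
    """Per-class sensitivity and specificity from a confusion matrix
    (rows = true gesture, columns = predicted gesture)."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

def rank_gestures(cm):
    """Rank classes (best first) by the mean of sensitivity and
    specificity -- a stand-in for the paper's PNM index, whose exact
    formula is not given in the abstract."""
    sens, spec = sens_spec(cm)
    score = (sens + spec) / 2.0
    return list(np.argsort(score)[::-1])
```

Dropping the lowest-ranked gestures and re-running the classifier on the reduced set is then the selection step the abstract describes.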

  20. Changes in skeletal muscle perfusion and spasticity in patients with poststroke hemiparesis treated by robotic assistance (Gloreha) of the hand.

    Science.gov (United States)

    Bissolotti, Luciano; Villafañe, Jorge Hugo; Gaffurini, Paolo; Orizio, Claudio; Valdes, Kristin; Negrini, Stefano

    2016-03-01

    [Purpose] The purpose of this case series was to determine the effects of robot-assisted hand rehabilitation with a Gloreha device on skeletal muscle perfusion, spasticity, and motor function in subjects with poststroke hemiparesis. [Subjects and Methods] Seven patients, 2 women and 5 men (mean ± SD age: 60.5 ± 6.3 years), with hemiparesis (>6 months poststroke), received passive mobilization of the hand with a Gloreha device (Idrogenet, Italy) for 30 min per day, 3 sessions a week, for 3 weeks. The outcome measures were the total hemoglobin profiles and tissue oxygenation index (TOI) in the muscle tissue, evaluated through near-infrared spectroscopy. The Motricity Index and modified Ashworth Scale for upper limb muscles were used to assess mobility of the upper extremity. [Results] Robotic assistance reduced spasticity after the intervention by 68.6% in the upper limb. The Motricity Index was unchanged in these patients after treatment. Regarding changes in muscle perfusion, significant improvements were found in total hemoglobin. There were significant differences between the pre- and posttreatment modified Ashworth Scale scores. [Conclusion] The present work provides novel evidence that robotic assistance of the hand induced changes in local muscle blood flow and oxygen supply, diminished spasticity, and decreased subject-reported symptoms of heaviness and stiffness in subjects with poststroke hemiparesis.

  1. Comparison of three-dimensional, assist-as-needed robotic arm/hand movement training provided with Pneu-WREX to conventional tabletop therapy after chronic stroke.

    Science.gov (United States)

    Reinkensmeyer, David J; Wolbrecht, Eric T; Chan, Vicky; Chou, Cathy; Cramer, Steven C; Bobrow, James E

    2012-11-01

    Robot-assisted movement training can help individuals with stroke reduce arm and hand impairment, but robot therapy is typically only about as effective as conventional therapy. Refining the way that robots assist during training may make them more effective than conventional therapy. Here, the authors measured the therapeutic effect of a robot that required individuals with a stroke to achieve virtual tasks in three dimensions against gravity. The robot continuously estimated how much assistance patients needed to perform the tasks and provided slightly less assistance than needed to reduce patient slacking. Individuals with a chronic stroke (n = 26; baseline upper limb Fugl-Meyer score, 23 ± 8) were randomized into two groups and underwent 24 one-hour training sessions over 2 mos. One group received the assist-as-needed robot training and the other received conventional tabletop therapy with the supervision of a physical therapist. Training helped both groups significantly reduce their motor impairment, as measured by the primary outcome measure, the Fugl-Meyer score, but the improvement was small (3.0 ± 4.9 points for robot therapy vs. 0.9 ± 1.7 for conventional therapy). There was a trend for greater reduction for the robot-trained group (P = 0.07). The robot group largely sustained this gain at the 3-mo follow-up. The robot-trained group also experienced significant improvements in Box and Blocks score and hand grip strength, whereas the control group did not, but these improvements were not sustained at follow-up. In addition, the robot-trained group showed a trend toward greater improvement in sensory function, as measured by the Nottingham Sensory Test (P = 0.06). These results suggest that in patients with chronic stroke and moderate-severe deficits, assisting in three-dimensional virtual tasks with an assist-as-needed controller may make robotic training more effective than conventional tabletop training.

  2. A Mobile Application That Allows Children in the Early Childhood to Program Robots

    Directory of Open Access Journals (Sweden)

    Kryscia Ramírez-Benavides

    2016-01-01

    Full Text Available Children born in the Information Age are digital natives; this characteristic should be exploited to improve the learning process through the use of technology. This paper addresses the design, construction, and evaluation process of TITIBOTS, a programming assistance tool for mobile devices that allows children in early childhood to create programs and execute them using robots. We present the results of using TITIBOTS in different scenarios with children between 4 and 6 years old. The insight obtained in the development and evaluation of the tool could be useful when creating applications for children in early childhood. The results were promising; children liked the application and were willing to continue using it to program robots to solve specific tasks, developing the skills of the 21st century.

  3. Service robotics: an emergent technology field at the interface between industry and services.

    Science.gov (United States)

    Ott, Ingrid

    2012-12-01

    The paper at hand analyzes the economic implications of service robots as an expected important future technology. The considerations are embedded in global trends, focusing on the interdependencies between services and industry, not only in the context of the provision of services but already starting at the level of the innovation process. It is argued that, due to the various interdependencies combined with heterogeneous application fields, the resulting implications need to be contextualized. Concerning the net labor market effects, it is reasonable to assume that the field of service robotics will generate overall job creation that goes along with increasing skill requirements demanded of the employees involved. It is analyzed which challenges arise in evaluating and further developing the new technology field, and some policy recommendations are given.

  4. A new robotic-assisted flexible endoscope with single-hand control: endoscopic submucosal dissection in the ex vivo porcine stomach.

    Science.gov (United States)

    Iwasa, Tsutomu; Nakadate, Ryu; Onogi, Shinya; Okamoto, Yasuharu; Arata, Jumpei; Oguri, Susumu; Ogino, Haruei; Ihara, Eikichi; Ohuchida, Kenoki; Akahoshi, Tomohiko; Ikeda, Tetsuo; Ogawa, Yoshihiro; Hashizume, Makoto

    2018-04-17

Difficulties in endoscopic operations and therapeutic procedures seem to occur due to the complexity of operating the endoscope dial as well as the difficulty of performing synchronized movements with both hands. We developed a prototype robotic-assisted flexible endoscope that can be controlled with a single hand in order to simplify the operation of the endoscope. The aim of this study was to confirm the operability of the robotic-assisted flexible endoscope (RAFE) by performing endoscopic submucosal dissection (ESD). Study 1: ESD was performed either manually or with the RAFE by an expert endoscopist in ex vivo porcine stomachs; six operations were performed manually and six with the RAFE. The procedure time per unit circumferential length/area was calculated, and the results were statistically analyzed. Study 2: We evaluated how smoothly a non-endoscopist could move the RAFE compared to a manual endoscope by assessing a designated movement of the endoscope. Study 1: En bloc resection was achieved by ESD using the RAFE. The procedure time gradually shortened with increasing experience, and the procedure time of ESD performed with the RAFE was not significantly different from that of ESD performed with a manual endoscope. Study 2: For a non-endoscopist, the time for the designated movement of the endoscope was significantly shorter with the RAFE than with a manual endoscope. The RAFE that we developed enabled an expert endoscopist to perform the ESD procedure without any problems and allowed a non-endoscopist to control the endoscope more easily and quickly than a manual endoscope. The RAFE is expected to undergo further development.

  5. Teleoperation of Robonaut Using Finger Tracking

    Science.gov (United States)

    Champoux, Rachel G.; Luo, Victor

    2012-01-01

With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is required due to the level of complexity of a human-like hand, for which a joystick is not accurate enough. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input. The purpose of this project was to develop a natural user interface for teleoperating a remote robot. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time in the user interface. The information is then relayed to a simulator, the stand-in for Robonaut, at a slight delay.

  6. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

Objective. This work proposes principled strategies for self-adaptations in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to be able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. 
Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the
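The recursive Bayesian update that this record describes can be sketched in a few lines. The uniform prior, the single likelihood table (collapsing user input and gaze into one evidence source), and all numeric values below are illustrative assumptions, not the paper's actual model:

```python
def bayes_update(belief, likelihoods):
    """One recursive update. belief: {goal: P(goal)};
    likelihoods: {goal: P(evidence | goal)} for the newly
    arrived evidence (user input or gaze)."""
    posterior = {g: belief[g] * likelihoods[g] for g in belief}
    z = sum(posterior.values())            # normalizing constant
    return {g: p / z for g, p in posterior.items()}

# Three hypothetical navigation goals with a uniform prior:
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
# New evidence (e.g. a gaze fixation) most consistent with goal B:
belief = bayes_update(belief, {"A": 0.1, "B": 0.7, "C": 0.2})
print(max(belief, key=belief.get))         # prints "B"
```

Applied after every new piece of evidence, the belief vector concentrates on the goal the user is most plausibly pursuing, which is what the shared-control scheme then exploits.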

  7. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. 
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  8. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group achieve better performance in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system benefited them in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helps the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision makes the laparoscopic performance of participants without laparoscopic experience easier, unlike that of those with experience in laparoscopic procedures. © The Author(s) 2015.

  9. Effect of clinical parameters on the control of myoelectric robotic prosthetic hands.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Castellini, Claudio; Caputo, Barbara; Hager, Anne-Gabrielle Mittaz; Elsig, Simone; Giatsidis, Giorgio; Bassetto, Franco; Müller, Henning

    2016-01-01

Improving the functionality of prosthetic hands with noninvasive techniques is still a challenge. Surface electromyography (sEMG) currently gives limited control capabilities; however, the application of machine learning to the analysis of sEMG signals is promising and has recently been applied in practice, but many questions still remain. In this study, we recorded the sEMG activity of the forearm of 11 male subjects with transradial amputation who were mentally performing 40 hand and wrist movements. The classification performance and the number of independent movements (defined as the subset of movements that could be distinguished with >90% accuracy) were studied in relation to clinical parameters related to the amputation. The analysis showed that classification accuracy and the number of independent movements increased significantly with phantom limb sensation intensity, remaining forearm percentage, and temporal distance to the amputation. The classification results suggest the possibility of naturally controlling up to 11 movements of a robotic prosthetic hand with almost no training. Knowledge of the relationship between classification accuracy and clinical parameters adds new information regarding the nature of phantom limb pain as well as other clinical parameters, and it can lay the foundations for future "functional amputation" procedures in surgery.
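The machine-learning pipeline this record points to (feature extraction from sEMG windows followed by a classifier) can be roughly illustrated with the common mean-absolute-value feature and a nearest-centroid rule. The feature choice, the movement labels, and all values are placeholders, not the study's actual method:

```python
def mav(window):
    """Mean absolute value, a common sEMG time-domain feature."""
    return sum(abs(s) for s in window) / len(window)

def nearest_centroid(feature, centroids):
    """Assign the movement whose stored feature centroid is closest."""
    return min(centroids, key=lambda m: abs(centroids[m] - feature))

# Hypothetical per-movement centroids learned from training windows:
centroids = {"rest": 0.05, "wrist_flex": 0.30, "grasp": 0.60}
window = [0.5, -0.7, 0.6, -0.55]   # simulated sEMG burst (rectifiable)
print(nearest_centroid(mav(window), centroids))   # prints "grasp"
```

Real systems use richer feature sets and multichannel classifiers, but the window-to-feature-to-label flow is the same.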

  10. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes. Includes numerous figures, tables and flow charts. Discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots.

  11. Performance Comparison Between FEDERICA Hand and LARM Hand

    OpenAIRE

    Carbone, Giuseppe; Rossi, Cesare; Savino, Sergio

    2015-01-01

This paper describes two robotic hands that have been developed at University Federico II of Naples and at the University of Cassino. FEDERICA Hand and LARM Hand are described in terms of design and operational features. In particular, careful attention is paid to the differences between the above-mentioned hands in terms of transmission systems. FEDERICA Hand uses tendons and pulleys to drive phalanxes, while LARM Hand uses cross four-bar linkages. Results of experime...

  12. Ocular interaction with robots: an aid to the disabled

    International Nuclear Information System (INIS)

    Azorin, J.M.; Ianez, E.; Fernandez Jover, E.; Sabater, J.M.

    2010-01-01

This paper describes a technique to remotely control a robot arm using eye movements. This method will help disabled people control a robot to aid them in performing tasks in their daily lives. Electrooculography (EOG) is used to detect the eye movements. EOG registers the potential difference between the cornea and the retina using electrodes. The eye movements are used to control a remote robot arm with 6 degrees of freedom. First, the paper introduces several eye-movement techniques for interacting with devices, focusing on EOG. Then, the paper describes the system that allows interacting with a robot through eye movements. Finally, the paper shows some experimental results for the robot controlled by the EOG-based interface. (Author).

  13. Fiscal 1998 R and D report on the human coordination/coexistence robot system (development of practical technology for rational energy use); 1998 nendo ningen kyocho kyozongata robot system no kenkyu kaihatsu (energy shiyo gorika kankei gijutsu jitsuyoka kaihatsu) seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

This report describes the R and D on the human coordination/coexistence robot system, which can perform various works in place of people. As for an intelligent robot hand, a 4-finger hand and arm hardware were developed and fabricated, and the operation of their assembled system was tested. As for a robot platform, a remote control platform and the interface specifications of command communication and data communication were studied. As for 3-D walking adaptive to land shapes, an analysis environment for a control algorithm and an easy-to-use environment for a virtual platform simulator were prepared. Using this analysis environment, the central part of the walk control algorithm and a module for forming walk patterns were developed. In the application research on energy saving, various problems of dangerous and harsh conditions in construction and disaster restoration works were analyzed, and the need for a humanoid robot for such works was studied. (NEDO)

  14. Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot.

    Science.gov (United States)

    Greer, Joseph D; Morimoto, Tania K; Okamura, Allison M; Hawkes, Elliot W

    2017-01-01

We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm for predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds.
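The constant-curvature bending this record mentions has a simple closed-form tip position. The planar sketch below uses the standard continuum-robot symbols (arc length L, curvature kappa) and is a generic illustration of the model class, not the paper's full 3D kinematics:

```python
import math

def tip_position(L, kappa):
    """Planar tip position (x, y) of a constant-curvature arc of
    length L and curvature kappa (straight segment when kappa ~ 0)."""
    if abs(kappa) < 1e-9:
        return (0.0, L)               # no bending: tip straight ahead
    r = 1.0 / kappa                   # radius of curvature
    theta = L * kappa                 # total bend angle
    return (r * (1.0 - math.cos(theta)), r * math.sin(theta))

# A 0.42 m backbone (the robot length in the abstract) bent into a
# quarter circle; by symmetry the tip then satisfies x == y == r:
x, y = tip_position(0.42, (math.pi / 2) / 0.42)
```

Inverting this map (from desired tip position to per-muscle pressures) is what the visual-servo controller effectively does in closed loop.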

  15. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Directory of Open Access Journals (Sweden)

    Lobo Pereira Fernando

    2010-02-01

Full Text Available Abstract Background The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot colliding with the environment and with moving agents. Results The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. 
Subjects took an average time of 35 minutes to navigate the environment and to learn how
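The predict/update cycle at the core of the sequential EKF this record describes can be sketched in one dimension. The real filter carries the full robot pose plus line and corner features; the scalar version and all numbers below are purely illustrative:

```python
def ekf_predict(x, P, u, Q):
    """Motion step: apply command u, grow uncertainty by noise Q."""
    return x + u, P + Q

def ekf_update(x, P, z, R):
    """Measurement step: fuse observation z with noise R."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                        # initial pose estimate
x, P = ekf_predict(x, P, u=1.0, Q=0.5) # an MCI "start" command moves 1 m
x, P = ekf_update(x, P, z=1.2, R=0.5)  # a mapped corner is re-observed
# After the update, the estimate moves toward z and P shrinks.
```

In feature-based SLAM the same cycle runs with vectors and Jacobians, interleaving motion commands from the MCI with line/corner observations.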

  16. RoboSmith: Wireless Networked Architecture for Multiagent Robotic System

    Directory of Open Access Journals (Sweden)

    Florin Moldoveanu

    2010-11-01

Full Text Available This paper presents an architecture for a flexible mini robot for a multiagent robotic system. In a multiagent system the value of an individual agent is negligible, since the goal of the system is what matters. Thus, the agents (robots) need to be small, low-cost, and cooperative. RoboSmith robots are designed based on these conditions. The proposed architecture divides a robot into functional modules such as locomotion, control, sensors, communication, and actuation. Any mobile robot can be constructed by combining these functional modules for a specific application. Embedded software with dynamic task uploading and multi-tasking abilities was developed in order to create a better interface between the robots and the command center and among the robots. The dynamic task uploading allows the robots to change their behaviors at runtime. The flexibility of the robots is shown by the fact that they can work in a multiagent system, in master-slave mode, or in hybrid mode, can be equipped with different modules, and can possibly be used in other applications such as mobile sensor networks, remote sensing, and plant monitoring.

  17. Radical nephrectomy performed by open, laparoscopy with or without hand-assistance or robotic methods by the same surgeon produces comparable perioperative results

    Directory of Open Access Journals (Sweden)

    Tanya Nazemi

    2006-02-01

Full Text Available PURPOSE: Radical nephrectomy can be performed using open or laparoscopic (with or without hand assistance) methods, and most recently using the da Vinci Surgical Robotic System. We evaluated the perioperative outcomes in a contemporary cohort of patients undergoing radical nephrectomy by one of the above 4 methods, all performed by the same surgeon. MATERIALS AND METHODS: The relevant clinical information on 57 consecutive patients undergoing radical nephrectomy from September 2000 until July 2004 by a single surgeon was entered in a Microsoft Access™ database and queried. Following appropriate statistical analysis, p values < 0.05 were considered significant. RESULTS: Of the 57 patients, open, robotic, and laparoscopic with or without hand assistance radical nephrectomy were performed in 18, 6, 21, and 12 patients, respectively. The age, sex, body mass index (BMI), incidence of malignancy, specimen and tumor size, tumor stage, Fuhrman grade, hospital stay, change in postoperative creatinine, drop in hemoglobin, and perioperative complications were not significantly different between the methods. While the estimated median blood loss, postoperative narcotic use for pain control, and hospital stay were significantly higher in the open surgery method (p < 0.05), the median operative time was significantly shorter compared to the robotic method (p = 0.02). Operating room costs were significantly higher in the robotic and laparoscopic groups; however, there was no significant difference in total hospital costs between the 4 groups. CONCLUSIONS: The study demonstrates that radical nephrectomy can be safely performed by open, robotic, or laparoscopic (with or without hand assistance) methods without significant differences in perioperative complication rates. A larger cohort and longer follow-up are needed to validate our findings and establish oncological outcomes.

  18. A Case-Study for Life-Long Learning and Adaptation in Cooperative Robot Teams

    International Nuclear Information System (INIS)

    Parker, L.E.

    1999-01-01

While considerable progress has been made in recent years toward the development of multi-robot teams, much work remains to be done before these teams are used widely in real-world applications. Two particular needs toward this end are the development of mechanisms that enable robot teams to generate cooperative behaviors on their own, and the development of techniques that allow these teams to autonomously adapt their behavior over time as the environment or the robot team changes. This paper proposes the use of the Cooperative Multi-Robot Observation of Multiple Moving Targets (CMOMMT) application as a rich domain for studying the issues of multi-robot learning and adaptation. After discussing the need for learning and adaptation in multi-robot teams, this paper describes the CMOMMT application and its relevance to multi-robot learning. We discuss the results of the previously-developed, hand-generated algorithm for CMOMMT and the potential for learning that was discovered from the hand-generated approach. We then describe the early work that has been done (by us and others) to generate multi-robot learning techniques for the CMOMMT application, as well as our ongoing research to develop approaches that give performance as good as, or better than, the hand-generated approach. The ultimate goal of this research is to develop techniques for multi-robot learning and adaptation in the CMOMMT application domain that will generalize to cooperative robot applications in other domains, thus making the practical use of multi-robot teams in a wide variety of real-world applications much closer to reality.

  19. fMRI-compatible rehabilitation hand device

    Directory of Open Access Journals (Sweden)

    Tzika Aria

    2006-10-01

Full Text Available Abstract Background Functional magnetic resonance imaging (fMRI) has been widely used in studying human brain functions and neurorehabilitation. In order to develop complex and well-controlled fMRI paradigms, interfaces that can precisely control and measure output force and kinematics of the movements in human subjects are needed. Optimized state-of-the-art fMRI methods, combined with magnetic resonance (MR) compatible robotic devices for rehabilitation, can assist therapists to quantify, monitor, and improve physical rehabilitation. To achieve this goal, robotic or mechatronic devices with actuators and sensors need to be introduced into an MR environment. Common standard mechanical parts cannot be used in an MR environment, and MR compatibility has been a tough hurdle for device developers. Methods This paper presents the design, fabrication and preliminary testing of a novel, one-degree-of-freedom, MR-compatible, computer-controlled, variable-resistance hand device that may be used in brain MR imaging during hand-grip rehabilitation. We named the device MR_CHIROD (Magnetic Resonance Compatible Smart Hand Interfaced Rehabilitation Device). A novel feature of the device is the use of Electro-Rheological Fluids (ERFs) to achieve tunable and controllable resistive force generation. ERFs are fluids that experience dramatic changes in rheological properties, such as viscosity or yield stress, in the presence of an electric field. The device consists of four major subsystems: (a) an ERF-based resistive element; (b) a gearbox; (c) two handles; and (d) two sensors, one optical encoder and one force sensor, to measure the patient-induced motion and force. The smart hand device is designed to resist up to 50% of the maximum gripping force of a human hand and to be controlled in real time. Results Laboratory tests of the device indicate that it was able to meet its design objective of resisting up to approximately 50% of the maximum handgrip force. 
The detailed

  20. MARS: An Educational Environment for Multiagent Robot Simulations

    Directory of Open Access Journals (Sweden)

    Marco Casini

    2016-01-01

Full Text Available Undergraduate robotics students often find it difficult to design and validate control algorithms for teams of mobile robots. This is mainly due to two reasons. First, educational laboratories are very rarely equipped with large teams of robots, which are usually expensive, bulky, and difficult to manage and maintain. Second, robotics simulators often require students to spend much time learning their use and functionalities. For this purpose, a simulator of multiagent mobile robots named MARS has been developed within the Matlab environment, with the aim of helping students simulate a wide variety of control algorithms in an easy way and without spending time understanding a new language. Through this facility, the user is able to simulate multirobot teams performing different tasks, from cooperative to competitive ones, using both centralized and distributed controllers. Virtual sensors are provided to simulate real devices. A graphical user interface allows students to monitor the robots' behaviour through an online animation.

  1. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.

    Science.gov (United States)

    Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando

    2017-08-01

Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface offers improved spatial awareness and more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection, and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Foot-controlled robotic-enabled endoscope holder for endoscopic sinus surgery: A cadaveric feasibility study.

    Science.gov (United States)

    Chan, Jason Y K; Leung, Iris; Navarro-Alarcon, David; Lin, Weiyang; Li, Peng; Lee, Dennis L Y; Liu, Yun-hui; Tong, Michael C F

    2016-03-01

To evaluate the feasibility of a unique prototype foot-controlled robotic-enabled endoscope holder (FREE) in functional endoscopic sinus surgery. Cadaveric study. Using human cadavers, we investigated the feasibility, advantages, and disadvantages of the robotic endoscope holder in performing endoscopic sinus surgery with two hands in five cadaver heads, mimicking a single-nostril three-handed technique. The FREE robot is relatively easy to use. Setup was quick, taking less than 3 minutes from docking the robot at the head of the bed to visualizing the middle meatus. The unit is also relatively small, takes up little space, and currently has four degrees of freedom. The learning curve for using the foot control was short. The use of both hands was not hindered by the presence of the endoscope in the nasal cavity. The tremor filtration also aided in the smooth movement of the endoscope, with minimal collisions. This ex-vivo cadaver test corroborated the feasibility of the robotic prototype, which allows a two-handed approach to surgery equivalent to a single-nostril three-handed technique without a holder, and may reduce operating time. Further studies will be needed to evaluate its safety profile and use in other areas of endoscopic surgery. NA. Laryngoscope, 126:566-569, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement in persons with hemiparesis.

    Science.gov (United States)

    Merians, Alma S; Fluet, Gerard G; Qiu, Qinyin; Saleh, Soha; Lafond, Ian; Davidow, Amy; Adamovich, Sergei V

    2011-05-16

    Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training.

  4. Interactive multi-objective path planning through a palette-based user interface

    Science.gov (United States)

    Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph

    2016-05-01

In a problem where a human uses supervisory control to manage robot path-planning, the human plans a path and, if satisfied, commits it for the robot to execute. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for the robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path-planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette: a distinct color represents each objective, and tradeoffs among objectives are balanced the way an artist mixes colors to obtain a desired shade, so the human's intent is analogous to the artist's shade of color. We call the GUI an "Adverb Palette", where "adverb" denotes a specific type of objective for the path, such as "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely". The novel interactive interface gives the user an opportunity to evaluate various alternatives (that trade off between different objectives) by allowing her to visualize the instantaneous outcomes that result from her actions on the interface. In addition to assisting analysis of the solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby broadening the space of plans considered. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem.
Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a

  5. Interactive robot control system and method of use

    Science.gov (United States)

    Sanders, Adam M. (Inventor); Reiland, Matthew J. (Inventor); Abdallah, Muhammad E. (Inventor); Linn, Douglas Martin (Inventor); Platt, Robert (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays a status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events, and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying status and operation of the robot using the CDL. The parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within the sequence.

  6. Magnetic resonance-compatible robotic and mechatronics systems for image-guided interventions and rehabilitation: a review study.

    Science.gov (United States)

    Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos

    2007-01-01

    The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D)-MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.

  7. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    Science.gov (United States)

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.

  8. Software design of the hybrid robot machine for ITER vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ming, E-mail: Ming.Li@lut.fi [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Yang, Guangyou [School of Mechanical Engineering, Hubei University of Technology, Wuhan (China)

    2013-10-15

A specific software design is elaborated in this paper for the hybrid robot machine used for the ITER vacuum vessel (VV) assembly and maintenance. In order to provide multiple machining functions together with the complicated, flexible and customizable GUI design demanded by the non-standardized VV assembly process on the one hand, and to guarantee stringent machining precision in the real-time motion control of the robot machine on the other, a client–server-control software architecture is proposed, which separates user interaction, data communication and robot control implementation into different software layers. Correspondingly, three particular application protocols upon TCP/IP are designed to transmit the data, commands and status between the client and the server, so as to handle the abundant data streaming in the software. So that it is not affected by future modifications of the graphical user interface (GUI) in the VV assembly working field, the real-time control system is realized as a stand-alone module in the architecture to guarantee the controlling performance of the robot machine. After completing the software development, a milling operation was tested on the robot machine, and the result demonstrates that both the specific GUI operability and the real-time motion control performance can be guaranteed adequately by the software design.

  9. Software design of the hybrid robot machine for ITER vacuum vessel assembly and maintenance

    International Nuclear Information System (INIS)

    Li, Ming; Wu, Huapeng; Handroos, Heikki; Yang, Guangyou

    2013-01-01

A specific software design is elaborated in this paper for the hybrid robot machine used for the ITER vacuum vessel (VV) assembly and maintenance. In order to provide multiple machining functions together with the complicated, flexible and customizable GUI design demanded by the non-standardized VV assembly process on the one hand, and to guarantee stringent machining precision in the real-time motion control of the robot machine on the other, a client–server-control software architecture is proposed, which separates user interaction, data communication and robot control implementation into different software layers. Correspondingly, three particular application protocols upon TCP/IP are designed to transmit the data, commands and status between the client and the server, so as to handle the abundant data streaming in the software. So that it is not affected by future modifications of the graphical user interface (GUI) in the VV assembly working field, the real-time control system is realized as a stand-alone module in the architecture to guarantee the controlling performance of the robot machine. After completing the software development, a milling operation was tested on the robot machine, and the result demonstrates that both the specific GUI operability and the real-time motion control performance can be guaranteed adequately by the software design.

  10. Experimental evaluation of magnified haptic feedback for robot-assisted needle insertion and palpation.

    Science.gov (United States)

    Meli, Leonardo; Pacchierotti, Claudio; Prattichizzo, Domenico

    2017-12-01

Haptic feedback has been proven to play a key role in enhancing the performance of teleoperated medical procedures. However, due to safety issues, commercially-available medical robots do not currently provide the clinician with haptic feedback. This work presents the experimental evaluation of a teleoperation system for robot-assisted medical procedures able to provide magnified haptic feedback to the clinician. Forces registered at the operating table are magnified and provided to the clinician through a 7-DoF haptic interface. The same interface is also used to control the motion of a 6-DoF slave robotic manipulator. The safety of the system is guaranteed by a time-domain passivity-based control algorithm. Two experiments were carried out on stiffness discrimination (during palpation and needle insertion) and one experiment on needle guidance. Our haptic-enabled teleoperation system improved performance with respect to direct hand interaction by 80%, 306%, and 27% in stiffness discrimination through palpation, stiffness discrimination during needle insertion, and guidance, respectively. Copyright © 2017 John Wiley & Sons, Ltd.

  11. How to get the best from robotic thoracic surgery.

    Science.gov (United States)

    Ricciardi, Sara; Zirafa, Carmelina Cristina; Davini, Federico; Melfi, Franca

    2018-04-01

The application of robotic technology in thoracic surgery has become widespread in the last decades. Thanks to its advanced features, the robotic system allows surgeons to perform a broad range of complex operations safely and comfortably, with valuable advantages related to low invasiveness. Regarding lung tumours, several studies have shown the benefits of robotic surgery, including lower blood loss and improved lymph node removal, when compared with other minimally invasive techniques. Moreover, robotic instruments can reach deep and narrow spaces, permitting safe and precise removal of tumours located in remote areas, such as the retrosternal and posterior mediastinal spaces, with outstanding postoperative and oncological results. One controversial aspect of the robotic system is its high capital and running costs. For this reason, only a limited number of centres worldwide are able to employ this groundbreaking technology, and trainees have limited opportunities to acquire the necessary skills in robotic surgery. Therefore, a training programme based on three steps of learning, combined with a solid surgical background and consistent operating activity, is required to obtain effective results. By putting this technological innovation in the hands of expert surgeons, we can ensure safe and effective procedures, getting the best from robotic thoracic surgery.

  12. Robotic Ankle for Omnidirectional Rock Anchors

    Science.gov (United States)

    Parness, Aaron; Frost, Matthew; Thatte, Nitish

    2013-01-01

Future robotic exploration of near-Earth asteroids and of the vertical and inverted rock walls of lava caves and cliff faces on Mars and other planetary bodies would require a method of gripping their rocky surfaces to allow mobility without gravitational assistance. In order to successfully navigate this terrain and drill for samples, the grippers must be able to produce anchoring forces in excess of 100 N. Additionally, the grippers must be able to support the inertial forces of a moving robot, as well as gravitational forces for demonstrations on Earth. One possible solution would be to use microspine arrays to anchor to rock surfaces and provide the necessary load-bearing abilities for robotic exploration of asteroids. Microspine arrays comprise dozens of small steel hooks supported on individual suspensions. When these arrays are dragged along a rock surface, the steel hooks engage with asperities and holes on the surface. The suspensions allow individual hooks to engage with asperities while the remaining hooks continue to drag along the surface. This ensures that the maximum possible number of hooks engage with the surface, thereby increasing the load-bearing abilities of the gripper. Using the microspine array grippers described above as the end-effectors of a robot would allow it to traverse terrain previously unreachable by traditional wheeled robots. Furthermore, microspine-gripping robots that can perch on cliffs or rocky walls could enable a new class of persistent surveillance devices for military applications. In order to interface these microspine grippers with a legged robot, an ankle is needed that can robotically actuate the gripper, as well as allow it to conform to the large-scale irregularities in the rock. The ankle serves three main purposes: deploying and releasing the anchor, conforming to roughness or misalignment with the surface, and cancelling out any moments about the anchor that could cause unintentional detachment. The ankle design contains a

  13. 'Robot' Hand Illusion under Delayed Visual Feedback: Relationship between the Senses of Ownership and Agency.

    Directory of Open Access Journals (Sweden)

    Mohamad Arif Fahmi Ismail

Full Text Available The rubber hand illusion (RHI) is an illusion of the self-ownership of a rubber hand that is touched synchronously with one's own hand. While the RHI relates to visual and tactile integration, we can also consider a similar illusion with visual and motor integration on a fake hand. We call this a "robot hand illusion" (RoHI), which relates to both the senses of ownership and agency. Here we investigate the effect of delayed visual feedback on the RoHI. Participants viewed a virtual computer graphic hand controlled by their hand movement recorded using a data glove device. We inserted delays of various lengths between the participant's hand and the virtual hand movements (90-590 ms), and the RoHI effects for each delay condition were systematically tested using a questionnaire. The results showed that the participants felt significantly greater RoHI effects with temporal discrepancies of less than 190 ms compared with longer temporal discrepancies, both in the senses of ownership and agency. Additionally, participants felt significant, but weaker, RoHI effects with temporal discrepancies of 290-490 ms in the sense of agency, but not in the sense of ownership. The participants did not feel a RoHI with temporal discrepancies of 590 ms in either the senses of agency or ownership. Our results suggest that a time window of less than 200 ms is critical for multi-sensory integration processes constituting self-body image.

  14. 'Robot' Hand Illusion under Delayed Visual Feedback: Relationship between the Senses of Ownership and Agency.

    Science.gov (United States)

    Ismail, Mohamad Arif Fahmi; Shimada, Sotaro

    2016-01-01

    The rubber hand illusion (RHI) is an illusion of the self-ownership of a rubber hand that is touched synchronously with one's own hand. While the RHI relates to visual and tactile integration, we can also consider a similar illusion with visual and motor integration on a fake hand. We call this a "robot hand illusion" (RoHI), which relates to both the senses of ownership and agency. Here we investigate the effect of delayed visual feedback on the RoHI. Participants viewed a virtual computer graphic hand controlled by their hand movement recorded using a data glove device. We inserted delays of various lengths between the participant's hand and the virtual hand movements (90-590 ms), and the RoHI effects for each delay condition were systematically tested using a questionnaire. The results showed that the participants felt significantly greater RoHI effects with temporal discrepancies of less than 190 ms compared with longer temporal discrepancies, both in the senses of ownership and agency. Additionally, participants felt significant, but weaker, RoHI effects with temporal discrepancies of 290-490 ms in the sense of agency, but not in the sense of ownership. The participants did not feel a RoHI with temporal discrepancies of 590 ms in either the senses of agency or ownership. Our results suggest that a time window of less than 200 ms is critical for multi-sensory integration processes constituting self-body image.

  15. Emotion based human-robot interaction

    Directory of Open Access Journals (Sweden)

    Berns Karsten

    2018-01-01

Full Text Available Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm and body gestures or facial expressions can improve the understanding of the robot's intention. On the other hand, by perceiving such mechanisms of a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception systems of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, are presented in the context of human-robot interaction.

  16. Filigree Robotics: Biennalen for Kunsthåndværk & Design

    DEFF Research Database (Denmark)

    2017-01-01

and Architecture (CITA), School of Architecture, that unfold how advances in 3d motion capture technology, digital scanning technology and 3d printing in clay create new interfaces and processes between human, space and material. Thus Filigree Robotics is situated in a context that allows the combination

  17. Robots take a hand in inspection, maintenance and repair

    International Nuclear Information System (INIS)

    Cruickshank, A.

    1985-01-01

    In the search for better economic performance through higher availability, utilities are beginning to look with interest at the uses of robotics. However, while some routine surveillance jobs may be amenable to existing commercial robot technology, most maintenance and repair tasks are not. A lot of work still needs to be done to develop robotic devices that can be employed effectively in the sometimes congested and inaccessible environments inside containments. (author)

  18. Robots take a hand in inspection, maintenance and repair

    Energy Technology Data Exchange (ETDEWEB)

    Cruickshank, A

    1985-04-01

    In the search for better economic performance through higher availability, utilities are beginning to look with interest at the uses of robotics. However, while some routine surveillance jobs may be amenable to existing commercial robot technology, most maintenance and repair tasks are not. A lot of work still needs to be done to develop robotic devices that can be employed effectively in the sometimes congested and inaccessible environments inside containments.

  19. Gestures in an Intelligent User Interface

    Science.gov (United States)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

In this chapter we investigated which hand gestures are intuitive, from a user's perspective, for controlling a large-display multimedia interface. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully and intuitively control a large-display multimedia interface. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from the literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact, using both hands and the preferred hand gestures, with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  20. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.

    Science.gov (United States)

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2011-06-01

This paper presents the design of a tele-robotic microsurgical platform intended for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.

  1. Hands Off: Mentoring a Student-Led Robotics Team

    Science.gov (United States)

    Dolenc, Nathan R.; Mitchell, Claire E.; Tai, Robert H.

    2016-01-01

    Mentors play important roles in determining the working environment of out-of-school-time clubs. On robotics teams, they provide guidance in hopes that their protégés progress through an engineering process. This study examined how mentors on one robotics team who defined their mentoring style as "let the students do the work" navigated…

  2. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement in persons with hemiparesis

    Directory of Open Access Journals (Sweden)

    Davidow Amy

    2011-05-01

    Full Text Available Abstract Background Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Methods Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. Results The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Conclusions Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training.

  3. Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke : Preliminary results on feasibility and potential clinical impact

    NARCIS (Netherlands)

    Prange, G.B.; Radder, Bob; Kottink, Anke I.R.; Melendez-Calderon, Alejandro; Buurke, Jaap H.; Rietman, Johan S.

    2017-01-01

    Recent technological developments regarding wearable soft-robotic devices extend beyond the current application of rehabilitation robotics and enable unobtrusive support of the arms and hands during daily activities. In this light, the HandinMind (HiM) system was developed, comprising a

  4. Real-time myoelectric control of a multi-fingered hand prosthesis using principal components analysis

    Directory of Open Access Journals (Sweden)

    Matrone Giulia C

    2012-06-01

Full Text Available Abstract Background In spite of the advances made in the design of dexterous anthropomorphic hand prostheses, these sophisticated devices still lack adequate control interfaces which could allow amputees to operate them in an intuitive and close-to-natural way. In this study, an anthropomorphic five-fingered robotic hand, actuated by six motors, was used as a prosthetic hand emulator to assess the feasibility of a control approach based on Principal Components Analysis (PCA), specifically conceived to address this problem. Since it was demonstrated elsewhere that the first two principal components (PCs) can describe the whole hand configuration space sufficiently well, the controller here employed reverted the PCA algorithm and allowed a multi-DoF hand to be driven by combining a two-differential-channel EMG input with these two PCs. Hence, the novelty of this approach stood in the application of PCA to the challenging problem of best mapping the EMG inputs into the degrees of freedom (DoFs) of the prosthesis. Methods A clinically viable two-DoF myoelectric controller, exploiting two differential channels, was developed, and twelve able-bodied participants, divided into two groups, volunteered to control the hand in simple grasp trials using forearm myoelectric signals. Task completion rates and times were measured. The first objective (assessed through one group of subjects) was to understand the effectiveness of the approach, i.e., whether it is possible to drive the hand in real time, with reasonable performance, in different grasps, also taking advantage of the direct visual feedback of the moving hand. The second objective (assessed through a different group) was to investigate the intuitiveness, and therefore to assess statistical differences in the performance throughout three consecutive days. Results Subjects performed several grasp, transport and release trials with differently shaped objects, by operating the hand with the myoelectric

  5. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a short recognition time under different light changes, scales, rotations, and backgrounds.

  6. Deep learning with convolutional neural networks: a resource for the control of robotic prosthetic hands via electromyography

    Directory of Open Access Journals (Sweden)

    Manfredo Atzori

    2016-09-01

Full Text Available Motivation: Natural control methods based on surface electromyography and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are at best capable of offering natural control for only a few movements. Objective: In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its capabilities for the natural control of robotic hands via surface electromyography by providing a baseline on a large number of intact and amputated subjects. Methods: We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 hand-amputated subjects. The simple architecture of the neural network allowed several tests to be made in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. Results: The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. Significance: The results show that convolutional neural networks with a very simple architecture can produce accuracy comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of surface electromyography data. Finally, the results suggest that deeper and more complex networks may increase dexterous control robustness, thus contributing to bridging the gap between the market and scientific research.

  7. Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke: Preliminary results on feasibility and potential clinical impact.

    Science.gov (United States)

    Prange-Lasonder, Gerdienke B; Radder, Bob; Kottink, Anke I R; Melendez-Calderon, Alejandro; Buurke, Jaap H; Rietman, Johan S

    2017-07-01

Recent technological developments regarding wearable soft-robotic devices extend beyond the current application of rehabilitation robotics and enable unobtrusive support of the arms and hands during daily activities. In this light, the HandinMind (HiM) system was developed, comprising a soft-robotic, grip-supporting glove with an added computer gaming environment. The present study aims to gain first insight into the feasibility of clinical application of the HiM system and its potential impact. To do so, both the direct influence of the HiM system on hand function when used as an assistive device and the therapeutic potential of either assistive or therapeutic use were explored. A pilot randomized clinical trial was combined with a cross-sectional measurement (comparing performance with and without the glove) at baseline in 5 chronic stroke patients, to investigate both the direct assistive and potential therapeutic effects of the HiM system. Extended use of the soft-robotic glove as an assistive device at home or with dedicated gaming exercises in a clinical setting was applicable and feasible. A positive assistive effect of the soft-robotic glove was proposed for pinch strength and performance of the functional task 'lifting full cans' in most of the five participants. A potential therapeutic impact was suggested, with predominantly improved hand strength in both participants with assistive use, and faster functional task performance in both participants with therapeutic application.

  8. Predicting workload profiles of brain-robot interface and electromygraphic neurofeedback with cortical resting-state networks: personal trait or task-specific challenge?

    Science.gov (United States)

    Fels, Meike; Bauer, Robert; Gharabaghi, Alireza

    2015-08-01

    Objective. Novel rehabilitation strategies apply robot-assisted exercises and neurofeedback tasks to facilitate intensive motor training. We aimed to disentangle task-specific and subject-related contributions to the perceived workload of these interventions and the related cortical activation patterns. Approach. We assessed the perceived workload with the NASA Task Load Index in twenty-one subjects who were exposed to two different feedback tasks in a cross-over design: (i) brain-robot interface (BRI) with haptic/proprioceptive feedback of sensorimotor oscillations related to motor imagery, and (ii) control of neuromuscular activity with feedback of the electromyography (EMG) of the same hand. We also used electroencephalography to examine the cortical activation patterns beforehand in resting state and during the training session of each task. Main results. The workload profile of BRI feedback differed from EMG feedback and was particularly characterized by the experience of frustration. The frustration level was highly correlated across tasks, suggesting subject-related relevance of this workload component. Those subjects who were specifically challenged by the respective tasks could be detected by an interhemispheric alpha-band network in resting state before the training and by their sensorimotor theta-band activation pattern during the exercise. Significance. Neurophysiological profiles in resting state and during the exercise may provide task-independent workload markers for monitoring and matching participants’ ability and task difficulty of neurofeedback interventions.
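The NASA Task Load Index used in this study combines six subscale ratings (0-100) with weights obtained from 15 pairwise comparisons; the sketch below computes the standard weighted score, with the ratings and weights invented for illustration of a frustration-heavy BRI session.

```python
# NASA Task Load Index: six subscales rated 0-100, weighted by tallies
# from 15 pairwise comparisons; overall workload is the weighted mean.
SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings, weights):
    """ratings/weights: dicts keyed by subscale; weight tallies sum to 15."""
    assert sum(weights.values()) == 15, "weights must tally 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

# Hypothetical profile of a frustrating BRI feedback session (values made up)
ratings = {"mental": 70, "physical": 30, "temporal": 40,
           "performance": 55, "effort": 65, "frustration": 80}
weights = {"mental": 4, "physical": 1, "temporal": 2,
           "performance": 2, "effort": 3, "frustration": 3}
print(round(nasa_tlx(ratings, weights), 1))  # → 62.3
```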

  9. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    Science.gov (United States)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  10. Influence of Attachment Pressure and Kinematic Configuration on pHRI with Wearable Robots

    Directory of Open Access Journals (Sweden)

    André Schiele

    2009-01-01

Full Text Available The goal of this paper is to show the influence of exoskeleton attachment, such as the pressure on the fixation cuffs and alignment of the robot joint to the human joint, on subjective and objective performance metrics (i.e. comfort, mental load, interface forces, tracking error and available workspace) during a typical physical human-robot interaction (pHRI) experiment. A mathematical model of a single degree of freedom interaction between humans and a wearable robot is presented and used to explain the causes and characteristics of interface forces between the two. The pHRI model parameters (real joint offsets, attachment stiffness) are estimated from experimental interface force measurements acquired during tests with 14 subjects. Insights gained by the model allow optimisation of the exoskeleton kinematics. This paper shows that offsets of more than ±10 cm exist between human and robot axes of rotation, even if a well-designed exoskeleton is aligned properly before motion. Such offsets can create interface loads of up to 200 N and 1.5 Nm in the absence of actuation. The optimal attachment pressure is determined to be 20 mmHg and the attachment stiffness is about 300 N/m. Inclusion of passive compensation joints in the exoskeleton is shown to lower the interaction forces significantly, which enables a more ergonomic pHRI.
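A heavily simplified version of the single-DOF interaction model can be sketched as Hooke's law applied to the chord displacement that an axis offset produces during a joint rotation; this is a sketch under that assumption, not the paper's full model, using the reported ~10 cm offset and ~300 N/m cuff stiffness as example inputs.

```python
import math

def interface_force(offset_m, angle_rad, stiffness_N_per_m):
    """Chord displacement of the attachment point caused by a human-robot
    axis offset during a joint rotation, times cuff stiffness (Hooke's law)."""
    displacement = 2.0 * offset_m * math.sin(angle_rad / 2.0)
    return stiffness_N_per_m * displacement

# 10 cm axis offset, 90-degree flexion, 300 N/m cuff stiffness
f = interface_force(0.10, math.pi / 2, 300.0)
print(round(f, 1))  # interface force in newtons
```

Even this crude estimate shows why passive compensation joints matter: the force scales linearly with both the offset and the attachment stiffness.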

  11. Towards Versatile Robots Through Open Heterogeneous Modular Robots

    DEFF Research Database (Denmark)

    Lyder, Andreas

Robots are important tools in our everyday life. Both in industry and at the consumer level they serve the purpose of increasing our scope and extending our capabilities. Modular robots take the next step, allowing us to easily create and build various robots from a set of modules. If a problem arises, a new robot can be assembled rapidly from the existing modules, in contrast to conventional robots, which require a time consuming and expensive development process. In this thesis we define a modular robot to be a robot consisting of dynamically reconfigurable modules. The goal of this thesis is to increase the versatility and practical usability of modular robots by introducing new conceptual designs. Until now modular robots have been based on a pre-specified set of modules, and thus, their functionality is limited. We propose an open heterogeneous design concept, which allows a modular robot...

  12. Optical coherence tomography based 1D to 6D eye-in-hand calibration

    DEFF Research Database (Denmark)

    Antoni, Sven Thomas; Otte, Christoph; Savarimuthu, Thiusius Rajeeth

    2017-01-01

...i.e., it can be easily integrated with instruments. However, to use OCT for intra-operative guidance its spatial alignment needs to be established. Hence, we consider eye-in-hand calibration between the 1D OCT imaging and a 6D robotic position system. We present a method to perform pivot calibration for OCT and based on this introduce pivot+d, a new 1D to 6D eye-in-hand calibration. We provide detailed results on the convergence and accuracy of our method and use translational and rotational ground truth to show that our methods allow for submillimeter positioning accuracy of an OCT beam with a robot. For pivot calibration we observe a mean translational error of 0.5161 ± 0.4549 mm while pivot+d shows 0.3772 ± 0.2383 mm. Additionally, pivot+d improves rotation detection by about 8° when compared to pivot calibration.
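Standard pivot calibration, the baseline that pivot+d extends, reduces to a linear least-squares problem over the recorded tool poses: each pose (R_i, p_i) must satisfy R_i t_tip + p_i = p_pivot. The sketch below solves this on synthetic data; it is not the authors' pivot+d implementation.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve R_i @ t_tip + p_i = p_pivot for all poses in least squares.
    Unknowns: tip offset t_tip (tool frame) and pivot point p_pivot (world)."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, translations)):
        A[3*i:3*i+3, 0:3] = R          # R_i t_tip ...
        A[3*i:3*i+3, 3:6] = -np.eye(3) # ... - p_pivot
        b[3*i:3*i+3] = -p              # = -p_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                # (t_tip, p_pivot)

# Synthetic check: known tip offset and pivot, random rotations about the pivot
rng = np.random.default_rng(1)
t_true, pivot_true = np.array([0.0, 0.0, 0.12]), np.array([0.3, -0.1, 0.5])
Rs, ps = [], []
for _ in range(20):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
    Q *= np.sign(np.linalg.det(Q))                # ensure a proper rotation
    Rs.append(Q)
    ps.append(pivot_true - Q @ t_true)            # pose consistent with the pivot
t_est, pivot_est = pivot_calibration(Rs, ps)
print(np.allclose(t_est, t_true, atol=1e-6), np.allclose(pivot_est, pivot_true, atol=1e-6))
```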

  13. Stable Myoelectric Control of a Hand Prosthesis using Non-Linear Incremental Learning

    Directory of Open Access Journals (Sweden)

    Arjan eGijsberts

    2014-02-01

Full Text Available Stable myoelectric control of hand prostheses remains an open problem. The only successful human-machine interface is surface electromyography, typically allowing control of a few degrees of freedom. Machine learning techniques may have the potential to remove these limitations, but their performance is thus far inadequate: myoelectric signals change over time under the influence of various factors, deteriorating control performance. It is therefore necessary, in the standard approach, to regularly retrain a new model from scratch. We hereby propose a non-linear incremental learning method in which occasional updates with a modest amount of novel training data allow continual adaptation to the changes in the signals. In particular, Incremental Ridge Regression and an approximation of the Gaussian Kernel known as Random Fourier Features are combined to predict finger forces from myoelectric signals, both finger-by-finger and grouped in grasping patterns. We show that the approach is effective and practically applicable to this problem by first analyzing its performance while predicting single-finger forces. Surface electromyography and finger forces were collected from 10 intact subjects during four sessions spread over two different days; the results of the analysis show that small incremental updates are indeed effective to maintain a stable level of performance. Subsequently, we employed the same method on-line to teleoperate a humanoid robotic arm equipped with a state-of-the-art commercial prosthetic hand. The subject could reliably grasp, carry and release everyday-life objects, enforcing stable grasping irrespective of the signal changes, hand/arm movements and wrist pronation and supination.
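The combination of Random Fourier Features and incremental ridge regression described above can be sketched as follows; the hyperparameters (feature count, kernel width, regularization) and the toy 1-D target are illustrative assumptions, standing in for real sEMG-to-force data.

```python
import numpy as np

class RFFIncrementalRidge:
    """Gaussian kernel approximated by Random Fourier Features, with
    ridge regression updated one sample at a time instead of retraining."""
    def __init__(self, n_inputs, n_features=300, gamma=1.0, lam=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # w ~ N(0, 2*gamma*I) yields k(x, y) ~ exp(-gamma * ||x - y||^2)
        self.W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_features, n_inputs))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.scale = np.sqrt(2.0 / n_features)
        self.A = lam * np.eye(n_features)      # running Z^T Z + lam * I
        self.c = np.zeros(n_features)          # running Z^T y

    def update(self, x, y):
        """One incremental step: fold a single (x, y) pair into the model."""
        z = self.scale * np.cos(self.W @ x + self.b)
        self.A += np.outer(z, z)
        self.c += y * z

    def predict(self, X):
        w = np.linalg.solve(self.A, self.c)
        Z = self.scale * np.cos(X @ self.W.T + self.b)
        return Z @ w

# Toy stand-in for a drifting myoelectric signal: learn y = sin(x) sample by sample
model = RFFIncrementalRidge(n_inputs=1)
xs = np.linspace(-3, 3, 200)
for x in xs:
    model.update(np.array([x]), np.sin(x))
err = float(np.mean((model.predict(xs[:, None]) - np.sin(xs)) ** 2))
print(err < 1e-3)
```

Because only the small matrices A and c are updated per sample, the model adapts continually without ever retraining from scratch, which is the point the abstract makes.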

  14. Neural manual vs. robotic assisted mobilization to improve motion and reduce pain hypersensitivity in hand osteoarthritis: study protocol for a randomized controlled trial.

    Science.gov (United States)

    Villafañe, Jorge Hugo; Valdes, Kristin; Imperio, Grace; Borboni, Alberto; Cantero-Téllez, Raquel; Galeri, Silvia; Negrini, Stefano

    2017-05-01

[Purpose] The aim of the present study is to detail the protocol for a randomised controlled trial (RCT) comparing the effects of neural manual therapy vs. robotic-assisted mobilization on pain sensitivity, and to analyse the quantitative and qualitative movement of the hand in subjects with hand osteoarthritis. [Subjects and Methods] Seventy-two patients, aged 50 to 90 years old, of both genders, with a diagnosis of hand osteoarthritis (OA), will be recruited. Two groups of 36 participants will receive an experimental intervention (neurodynamic mobilization plus exercise) or a control intervention (robotic-assisted passive mobilization plus exercise) for 12 sessions over 4 weeks. Assessment points will be at baseline, end of therapy, and 1 and 3 months after end of therapy. The outcomes will be pain and measures of the central pain-processing mechanisms. [Result] Not applicable. [Conclusion] A reduction in pain hypersensitivity in hand OA patients would suggest that supraspinal pain-inhibitory areas, including the periaqueductal gray matter, can be stimulated by joint mobilization.

  15. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows a function-based GUI to simplify the implementation of a myriad of operating modes.
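A minimal single-DOF illustration of an impedance-based control law is a virtual spring-damper driving the end effector toward a desired pose; the gains and simulation below are illustrative assumptions, not the patented framework.

```python
def impedance_force(x_des, x, v_des, v, k, d):
    """Virtual spring-damper: command a force pulling the end effector
    toward the desired pose with stiffness k and damping d."""
    return k * (x_des - x) + d * (v_des - v)

# Simulate a 1-DOF end effector of mass m converging onto a 0.2 m target
m, k, d, dt = 1.0, 50.0, 14.0, 0.001       # d near critical damping 2*sqrt(m*k)
x, v = 0.0, 0.0
for _ in range(5000):                      # 5 s of simulated time
    f = impedance_force(0.2, x, 0.0, v, k, d)
    v += (f / m) * dt                      # semi-implicit Euler integration
    x += v * dt
print(abs(x - 0.2) < 1e-3)
```

Choosing k and d shapes how compliantly the robot reacts to contact with the object, which is why impedance control suits force-sensitive humanoid manipulation.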

  16. Building technology platform aimed to develop service robot with embedded personality and enhanced communication with social environment

    Directory of Open Access Journals (Sweden)

    Aleksandar Rodić

    2015-04-01

Full Text Available The paper addresses the prototyping of a technology platform aimed at developing an ambient-aware, human-centric indoor service robot with attributes of emotional intelligence to enhance interaction with the social environment. The robot consists of a wheel-based mobile platform with a spinal (segmented) torso, a bi-manual manipulation system with multi-finger robot hands, and a robot head. The robot prototype was designed to see, hear, speak and use its multimodal interface for enhanced communication with humans. The robot is capable of demonstrating affective and social behavior by using its audio and video interface as well as body gestures. It is equipped with an advanced perception system based on a heterogeneous sensorial system, including a laser range finder, ultrasonic distance sensors and proximity detectors, a 3-axis inertial sensor (accelerometer and gyroscope), a stereo vision system, 2 wide-range microphones, and 2 loudspeakers. The device is foreseen to operate autonomously, but it may also be operated remotely from a host computer through a wireless communication link, as well as by use of a smartphone, based on an advanced client-server architecture. The robot prototype has embedded attributes of artificial intelligence and utilizes advanced cognitive capabilities such as spatial reasoning, obstacle and collision avoidance, simultaneous localization and mapping, etc. The robot is designed to enable uploading of new, or changing of existing, algorithms of emotional intelligence that should give the robot human-like affective and social behavior. The key objective of the project presented in the paper is to build an advanced technology platform for research and development of personal robots intended for different purposes, e.g. robot entertainer, butler, robot for medical care, security robot, etc. In a word, the designed technology platform is expected to help in the development of human-centered service robots to be used at home, in the office, and in public institutions.

  17. Laparoscopic hand-assisted versus robotic-assisted laparoscopic sleeve gastrectomy: experience of 103 consecutive cases.

    Science.gov (United States)

    Kannan, Umashankkar; Ecker, Brett L; Choudhury, Rashikh; Dempsey, Daniel T; Williams, Noel N; Dumon, Kristoffel R

    2016-01-01

Laparoscopic sleeve gastrectomy has become a stand-alone procedure in the treatment of morbid obesity. There are very few reports on the use of the robotic approach in sleeve gastrectomy. The purpose of this retrospective study is to report our early experience of robotic-assisted laparoscopic sleeve gastrectomy (RALSG) using a proctored training model, with comparison to an institutional cohort of patients who underwent laparoscopic hand-assisted sleeve gastrectomy (LASG). University hospital. The study included 108 patients who underwent sleeve gastrectomy either via the laparoscopic-assisted or the robot-assisted approach during the study period. Of these 108 patients, 62 underwent LASG and 46 underwent RALSG. The console surgeon in the RALSG is a clinical year 4 (CY4) surgery resident. All CY4 surgery residents received targeted simulation training before their rotation. The console surgeon is proctored by the primary surgeon, with assistance as needed by the second surgeon. The patients in the robotic and laparoscopic cohorts did not have a statistical difference in their demographic characteristics, preoperative co-morbidities, or complications. The mean operating time did not differ significantly between the 2 cohorts (121 min versus 110 min, P = .07). Patient follow-up in the LASG and RALSG groups was 91% and 90% at 3 months, 62% and 64% at 6 months, and 60% and 55% at 1 year, respectively. The mean percentage estimated weight loss (EWL%) at 3 months, 6 months, and 1 year was greater in the robotic group but not statistically significant (27 versus 22 at 3 mo [P = .05], 39 versus 34 at 6 mo [P = .025], and 57 versus 48 at 1 yr [P = .09]). There was no mortality in either group. Early results of our experience with RALSG indicate low perioperative complication rates and comparable weight loss with LASG. The concept of a stepwise education model needs further validation with larger studies. Copyright © 2016 American Society for Bariatric Surgery. Published by Elsevier Inc.
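The EWL% outcome reported above is conventionally computed against the patient's excess weight; the sketch below uses hypothetical weights, and the ideal-weight convention varies between studies.

```python
def percent_ewl(preop_kg, current_kg, ideal_kg):
    """Percentage estimated (excess) weight loss used in bariatric follow-up."""
    excess = preop_kg - ideal_kg
    return 100.0 * (preop_kg - current_kg) / excess

# Hypothetical patient: 120 kg preop, 95 kg at follow-up, 70 kg ideal weight
print(round(percent_ewl(120.0, 95.0, 70.0), 1))  # → 50.0 (25 kg lost of 50 kg excess)
```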

  18. Methods in the analysis of mobile robots behavior in unstructured environment

    Science.gov (United States)

    Mondoc, Alina; Dolga, Valer; Gorie, Nina

    2012-11-01

A mobile robot can be described as a mechatronic system that must execute an application in a working environment. Starting from the mechatronic concept, the authors highlight the mechatronic system structure based on its secondary function. A mobile robot will move either in a known, structured environment, which may be described by an appropriate mathematical model, or in an unfamiliar, unstructured environment, in which random aspects prevail. Starting from a START point, the robot must reach a STOP point under the functional constraints imposed on the one hand by the application and on the other hand by the working environment. The authors focus their presentation on the unstructured environment. In this case the evolution of the mobile robot is based on obtaining information about the work environment; the processing and integration of this information results in an action strategy. The number of sensory elements used is a parameter subject to optimization. Starting from a known mobile robot structure, the authors analyze the possibility of developing variants of the mathematical model of wheel-ground contact. They analyze various soil types and the possibility of obtaining a "signature" for each based on sensory information. Theoretical aspects of the problem are compared with experimental results obtained during robot evolution. The mathematical model of the robot system allowed simulation of the robot and its environment, and the simulated evolution was compared with the estimated experimental results.

  19. Design and Preliminary Feasibility Study of a Soft Robotic Glove for Hand Function Assistance in Stroke Survivors.

    Science.gov (United States)

    Yap, Hong Kai; Lim, Jeong Hoon; Nasrallah, Fatima; Yeow, Chen-Hua

    2017-01-01

Various robotic exoskeletons have been proposed for hand function assistance during activities of daily living (ADL) of stroke survivors. However, traditional exoskeletons involve the use of complex rigid systems that impede the natural movement of joints, and thus reduce wearability and cause discomfort to the user. The objective of this paper is to design and evaluate a soft robotic glove that is able to provide hand function assistance using fabric-reinforced soft pneumatic actuators. These actuators are made of silicone rubber, which has an elastic modulus similar to human tissues. Thus, they are intrinsically soft and compliant. Upon air pressurization, they are able to support finger range of motion (ROM) and generate the desired actuation of the finger joints. In this work, the soft actuators were characterized in terms of their blocked tip force and normal and frictional grip force outputs. Combining the soft actuators and flexible textile materials, a soft robotic glove was developed for grasping assistance during ADL for stroke survivors. The glove was evaluated on five healthy participants for its assisted ROM and grip strength. A pilot test was performed with two stroke survivors to evaluate the efficacy of the glove in assisting functional grasping activities. Our results demonstrated that the actuators designed in this study could generate the desired force output at a low air pressure. The glove had a high kinematic transparency and did not affect the active ROM of the finger joints when it was being worn by the participants. With the assistance of the glove, the participants were able to perform grasping actions with sufficient assisted ROM and grip strength, without any voluntary effort. Additionally, the pilot test on stroke survivors demonstrated that the patients' grasping performance improved with the presence and assistance of the glove. Patient feedback questionnaires also showed a high level of patient satisfaction and comfort. In conclusion, this paper

  20. Underactuated hands : Fundamentals, performance analysis and design

    NARCIS (Netherlands)

    Kragten, G.A.

    2011-01-01

    There is an emerging need to apply adaptive robotic hands to substitute humans in dangerous, laborious, or monotonous work. The state-of-the-art robotic hands cannot fulfill this need, because they are expensive, hard to control and they consist of many vulnerable motors and sensors. It is aimed to

  1. Hand-in-hand advances in biomedical engineering and sensorimotor restoration.

    Science.gov (United States)

    Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio

    2015-05-15

    Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. The ultimate technological developments succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of the so-called brain-machine interfaces. Combining rehabilitation robotics and experimental neuroscience the rise of brain-machine interfaces into clinical protocols provided the technological solution for bypassing the neural disconnection and restore sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptoms relief but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously increasing expectances in sensorimotor rehabilitation, as well as the current challenges especially with regards to the translation of the signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with its relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed

  3. FY 1999 achievement report on the R and D of a human cooperation/coexistence robot system. New development for the commercialization for the electric power generation technology; 1999 nendo ningen kyocho kyozongata robot system kenkyu kaihatsu seika hokokusho. Shinhatsuden gijutsu jitsuyoka kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

The paper describes the FY 1999 results of the development of a human cooperation/coexistence robot system and its development for commercialization in power plants. The support robot platform for maintenance, etc. was fabricated, connected with the remote operation system, and verified for integrated function. The operator controls the robot from a remote operation cockpit using an HMD (head-mounted display) that can present images following the operator's head movement, extended virtual-reality technology, and a stereo sound system. Hand/arm movements and movement instructions are given to the robot, and the robot in turn presents force and bodily sensations to the operator. Remote hand-operation software was developed, along with a method by which visual, tactile and somatic-sense information is presented in real time to the operator so that he can obtain a feeling of presence. A model for sensor-simulator verification was also developed so that software developers can carry out verification experiments in the actual environment. An interface was developed so that a library of basic movements can be used in the network environment. An investigation into the promotion of robots was also carried out. (NEDO)

  4. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

Full Text Available Following the design and testing of a successful 3-Dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotic education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Due to the Internet-based design, multiple clients have the opportunity to perform the online automation development. In the future, students in different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.
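The stereoscopic calculations mentioned above ultimately rest on the pinhole stereo relation between disparity and depth; the sketch below shows that relation with hypothetical camera parameters, not the authors' calibration procedure.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d, the basis of most
    stereoscopic depth and comfort calculations."""
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px, baseline_m, depth_m):
    """Inverse relation: how much an object at depth Z shifts between views."""
    return focal_px * baseline_m / depth_m

# Hypothetical rig: 800 px focal length, 6 cm camera baseline
z = depth_from_disparity(800.0, 0.06, 10.0)
print(z)  # an object with 10 px disparity sits 4.8 m away
```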

  5. Slip detection with accelerometer and tactile sensors in a robotic hand model

    Science.gov (United States)

    Al-Shanoon, Abdulrahman Abdulkareem S.; Anom Ahmad, Siti; Hassan, Mohd. Khair b.

    2015-11-01

Grasp planning is an interesting issue in studies of tactile sensors. This study investigated the physical force interaction between a tactile pressure sensor and a particular object. It also characterized object slipping during gripping operations and presented secure regripping of an object. Acceleration forces were analysed using an accelerometer in order to work toward a completely autonomous robotic hand model. An automatic feedback control system was applied to regrip the object when it begins to slip. Empirical findings on the detection and subsequent control of the slippage situation were presented. These findings revealed the correlation between the distance an object slips and the force required to regrip it safely, an approach similar to Hooke's law.
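A minimal sketch of the detect-then-regrip loop described above: slip is flagged from high-frequency accelerometer content, and the grip force is increased by a Hooke's-law-style term proportional to the slip distance. The threshold, gain, and signals are illustrative assumptions, not the study's tuned values.

```python
import numpy as np

def detect_slip(accel_window, threshold=0.5):
    """Flag slip when the high-frequency content (sample-to-sample
    differences) of the accelerometer signal exceeds a tuned threshold."""
    jerk = np.diff(accel_window)
    return float(np.std(jerk)) > threshold

def regrip_force(current_force, slip_distance, k=20.0):
    """Hooke's-law-style correction: extra force proportional to slip distance."""
    return current_force + k * slip_distance

rng = np.random.default_rng(2)
steady = rng.normal(0.0, 0.01, 100)     # stable grasp: low vibration
slipping = rng.normal(0.0, 1.0, 100)    # slip: bursty acceleration
print(detect_slip(steady), detect_slip(slipping))

force = 5.0                             # current grip force in newtons
if detect_slip(slipping):
    force = regrip_force(force, slip_distance=0.02)
print(force)
```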

  6. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  7. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Science.gov (United States)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  8. Spatial Programming for Industrial Robots through Task Demonstration

    Directory of Open Access Journals (Sweden)

    Jens Lambrecht

    2013-05-01

    Full Text Available We present an intuitive system for the programming of industrial robots using markerless gesture recognition and mobile augmented reality in terms of programming by demonstration. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device establish the basis for the presented spatial programming system. In this publication, we present a prototype for the programming of an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects with the help of the handheld's 2D camera. The programmer is thereby able to define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.

  9. The development of robotic systems for hazardous environments

    International Nuclear Information System (INIS)

    Collis-Smith, J.A.; Schilling, R.

    1996-01-01

    The need for teleoperated and robotic systems is growing. This growth is driven by several factors, such as statutory requirements, risk reduction, and economic pressures. Robotic systems are needed to provide reliable, economic means to perform surveillance, quantitative inspection, repairs, upgrading and eventual dismantling for decommissioning tasks. The range of potential applications has widened and there is now significant technical cross-fertilisation between developments in diverse environments. The typical robotic system consists of the emplacement equipment, the dextrous arm, the tool and the controls. The control system provides the operator with an integrated interface between the principal components, so that the operator can concentrate fully at the high level on the specific task at hand, while the control system and its software performs all the detail functions within the subparts of the integrated system. This paper develops this underlying logic, and is illustrated by experience drawn from a variety of examples in different environments to show the present state of the art in GEC Alsthom and suggest the way ahead in the near-term future. (Author)

  10. Mobility Systems For Robotic Vehicles

    Science.gov (United States)

    Chun, Wendell

    1987-02-01

    The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment and second, for the task. The environment can be correlated with known missions. This can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention. For example, Jason Jr. made history by surveying the sunken luxury liner, the Titanic. The next big surge of robotic vehicles will be in space. This will evolve as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as standalone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart with very few differences. Their commonality includes missions and degrees-of-freedom. The issues of stability and communication are inherent in both systems and environments.

  11. Illusory movement perception improves motor control for prosthetic hands

    Science.gov (United States)

    Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.

    2018-01-01

    To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. PMID:29540617

  12. An Exoskeleton Robot for Human Forearm and Wrist Motion Assist

    Science.gov (United States)

    Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo

    The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human amplifier, a haptic interface, a rehabilitation device, and an assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as the elderly or slightly disabled, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for forearm pronation/supination motion, wrist flexion/extension motion and ulnar/radial deviation. The paper describes the wrist anatomy underlying the development of the exoskeleton robot, the hardware design of the exoskeleton robot and the EMG-based control method. The skin surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin surface EMG signals as main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assist. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.

  13. Lending a helping hand: toward novel assistive robotic arms

    NARCIS (Netherlands)

    Groothuis, Stefan; Stramigioli, Stefano; Carloni, Raffaella

    Assistive robotics is an increasingly popular research field, which has led to a large number of commercial and noncommercial systems aimed at assisting physically impaired or elderly users in the activities of daily living. In this article, we propose five criteria based on robotic arm usage

  14. Prosthetic hand sensor placement: Analysis of touch perception during the grasp

    Directory of Open Access Journals (Sweden)

    Mirković Bojana

    2014-01-01

    Full Text Available Humans rely on their hands to perform everyday tasks. The hand is used as a tool, but also as the interface to “sense” the world. Current prosthetic hands are based on sophisticated multi-fingered structures, and include many sensors which mirror natural proprioceptors and exteroceptors. The sensory information is used for control, but not sent to the user of the hand (amputee). Grasping without sensing is not good enough. This research is part of the development of the sensing interface for amputees, specifically addressing the analysis of human perception while grasping. The goal is to determine a small number of preferred positions of sensors on the prosthetic hand. This task has previously been approached by trying to replicate the natural sensory system characteristic of healthy humans, resulting in a multitude of redundant sensors and a basic inability to make the patient aware of the sensor readings on a subconscious level. We based our artificial perception system on the reported sensations of humans when grasping various objects without seeing them (obstructed visual feedback). Subjects, with no known sensory deficits, were asked to report on the touch sensation while grasping. The analysis included objects of various sizes, weights, textures and temperatures. Based on this data we formed a map of the preferred positions for the sensors that is appropriate for a five-fingered, human-like robotic hand. The final map was intentionally minimized in size (number of sensors).

  15. Ani-Bot: A Mixed-Reality Ready Modular Robotics System

    OpenAIRE

    Xu, Zhuangying; Cao, Yuanzhi

    2017-01-01

    DIY modular robotics has always had a strong appeal to makers and designers; being able to quickly design, build, and animate their own robots opens the possibility of bringing imaginations to life. However, current interfaces to control and program the DIY robot either lack connection and consistency between the user and the target (Graphical User Interface) or suffer from limited control capabilities due to the lack of versatility and functionality (Tangible User Interface). We present Ani-B...

  16. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential.

  17. Admittance-Based Upper Limb Robotic Active and Active-Assistive Movements

    Directory of Open Access Journals (Sweden)

    Cristóbal Ochoa Luna

    2015-09-01

    Full Text Available This paper presents two rehabilitation schemes for patients with upper limb impairments. The first is an active-assistive scheme based on the trajectory tracking of predefined paths in Cartesian space. In it, the system allows for an adjustable degree of variation with respect to ideal tracking. The amount of variation is determined through an admittance function that depends on the opposition forces exerted on the system by the user, due to possible impairments. The coefficients of the function allow the adjustment of the degree of assistance the robot will provide in order to complete the target trajectory. The second scheme corresponds to active movements in a constrained space. Here, the same admittance function is applied; however, in this case, it is unattached to a predefined trajectory and instead connected to one generated in real time, according to the user's intended movements. This allows the user to move freely with the robot in order to track a given path. The free movement is bounded through the use of virtual walls that do not allow users to exceed certain limits. A human-machine interface was developed to guide the robot's user.
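The abstract does not give the admittance function explicitly; a minimal sketch of one common formulation, a second-order admittance M·ẍ + B·ẋ + K·x = F that maps the user's opposition force F to an allowed deviation x from the ideal trajectory, could look like the following (the function name and all parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

def admittance_deviation(forces, dt=0.01, M=1.0, B=8.0, K=40.0):
    """Integrate M*x'' + B*x' + K*x = F to get the deviation x(t)
    from an ideal reference trajectory, given user forces F(t).
    Larger K and B mean stiffer assistance (less allowed deviation)."""
    x, v = 0.0, 0.0
    dev = []
    for F in forces:
        a = (F - B * v - K * x) / M   # admittance dynamics
        v += a * dt                    # semi-implicit Euler step
        x += v * dt
        dev.append(x)
    return np.array(dev)

# A constant 5 N opposition force: the deviation settles near F/K = 0.125 m
dev = admittance_deviation([5.0] * 2000)
print(round(float(dev[-1]), 3))
```

Tuning K toward zero would approximate the second scheme's free movement (trajectory generated from the user's intent), while a large K forces close tracking of the predefined path.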

  18. Enhanced operator interface for hand-held landmine detector

    Science.gov (United States)

    Herman, Herman; McMahill, Jeffrey D.; Kantor, George

    2001-10-01

    As landmines get harder to detect, the complexity of landmine detectors has also been increasing. To increase the probability of detection and decrease the false alarm rate of low-metallic landmines, many detectors employ multiple sensing modalities, including radar and metal detection. Unfortunately, the operator interface for these new detectors stays pretty much the same as for the older detectors. Although the amount of information that the new detectors acquire has increased significantly, the interface has been limited to a simple audio interface. We are currently developing a hybrid audiovisual interface for enhancing the overall performance of the detector. The hybrid audiovisual interface combines the simplicity of the audio output with the rich spatial content of the video display. It is designed to optimally present the output of the detector and also to give proper feedback to the operator. Instead of presenting all the data to the operator simultaneously, the interface allows the operator to access the information as needed. This capability is critical to avoid information overload, which can significantly reduce the performance of the operator. The audio is used as the primary notification signal, while the video is used for further feedback, discrimination, localization and sensor fusion. The idea is to let the operator get the feedback he needs and enable him to look at the data in the most efficient way. We are also looking at a hybrid man-machine detection system which utilizes precise sweeping by the machine and powerful human cognitive ability. In such a hybrid system, the operator is free to concentrate on the discrimination task, such as manually fusing the output of the different sensing modalities, instead of worrying about proper sweep technique. In developing this concept, we have been using the virtual mine lane to validate some of these concepts. We obtained some very encouraging results from our preliminary test.

  19. Use of Design Patterns According to Hand Dominance in a Mobile User Interface

    Science.gov (United States)

    Al-Samarraie, Hosam; Ahmad, Yusof

    2016-01-01

    User interface (UI) design patterns for mobile applications provide a solution to design problems and can improve the usage experience for users. However, there is a lack of research categorizing the uses of design patterns according to users' hand dominance in a learning-based mobile UI. We classified the main design patterns for mobile…

  20. Hand-Eye LRF-Based Iterative Plane Detection Method for Autonomous Robotic Welding

    Directory of Open Access Journals (Sweden)

    Sungmin Lee

    2015-12-01

    Full Text Available This paper proposes a hand-eye LRF-based (laser range finder) welding plane-detection method for autonomous robotic welding in the field of shipbuilding. The hand-eye LRF system consists of a 6 DOF manipulator and an LRF attached to the wrist of the manipulator. The welding plane is detected by the LRF with only the wrist's rotation, to minimize the mechanical error caused by the manipulator's motion. A position on the plane is determined as the average position of the detected points on the plane, and a normal vector to the plane is determined by applying PCA (principal component analysis) to the detected points. The accuracy of the detected plane is analysed by simulations with respect to the wrist's angle interval and the plane angle. As a result of the analysis, an iterative plane-detection method with the manipulator's alignment motion is proposed to improve the performance of plane detection. To verify the feasibility and effectiveness of the proposed plane-detection method, experiments are carried out with a prototype of the hand-eye LRF-based system, which consists of a 1 DOF wrist joint, an LRF system and a rotatable plane. In addition, the experimental results of the PCA-based plane-detection method are compared with those of two representative plane-detection methods, based on RANSAC (RANdom SAmple Consensus) and the 3D Hough transform, in terms of both accuracy and computation time.
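The PCA step described above (centroid as the plane position, least-variance direction as the normal) can be sketched as follows; the function name and the synthetic test data are illustrative, not from the paper:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to an Nx3 point set: the plane position is the
    centroid, and the normal is the eigenvector of the covariance
    matrix with the smallest eigenvalue (least-variance direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)         # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    normal = eigvecs[:, 0]                   # smallest-eigenvalue vector
    return centroid, normal / np.linalg.norm(normal)

# Noisy samples of the plane z = 0: the recovered normal should be ~(0,0,±1)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 500),
                       rng.uniform(-1, 1, 500),
                       rng.normal(0, 0.01, 500)])
c, n = fit_plane_pca(pts)
print(abs(n[2]))   # close to 1.0
```

Unlike RANSAC, this least-squares fit uses every detected point, which is why the paper's comparison against RANSAC and the 3D Hough transform considers both accuracy and computation time.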

  1. Design Minimalism in Robotics Programming

    Directory of Open Access Journals (Sweden)

    Anthony Cowley

    2008-11-01

    Full Text Available With the increasing use of general robotic platforms in different application scenarios, modularity and reusability have become key issues in effective robotics programming. In this paper, we present a minimalist approach for designing robot software, in which very simple modules, with well designed interfaces and very little redundancy can be connected through a strongly typed framework to specify and execute different robotics tasks.

  2. Design Minimalism in Robotics Programming

    Directory of Open Access Journals (Sweden)

    Anthony Cowley

    2006-03-01

    Full Text Available With the increasing use of general robotic platforms in different application scenarios, modularity and reusability have become key issues in effective robotics programming. In this paper, we present a minimalist approach for designing robot software, in which very simple modules, with well designed interfaces and very little redundancy can be connected through a strongly typed framework to specify and execute different robotics tasks.

  3. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    Science.gov (United States)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

    An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput and enough interfaces for peripherals has been built at the University of Oulu. It is self-sufficient in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  4. Design-validation of a hand exoskeleton using musculoskeletal modeling.

    Science.gov (United States)

    Hansen, Clint; Gosselin, Florian; Ben Mansour, Khalil; Devos, Pierre; Marin, Frederic

    2018-04-01

    Exoskeletons are progressively reaching homes and workplaces, allowing interaction with virtual environments, remote control of robots, or assisting human operators in carrying heavy loads. Their design is however still a challenge as these robots, being mechanically linked to the operators who wear them, have to meet ergonomic constraints besides usual robotic requirements in terms of workspace, speed, or efforts. They have in particular to fit the anthropometry and mobility of their users. This traditionally results in numerous prototypes which are progressively fitted to each individual person. In this paper, we propose instead to validate the design of a hand exoskeleton in a fully digital environment, without the need for a physical prototype. The purpose of this study is thus to examine whether finger kinematics are altered when using a given hand exoskeleton. Therefore, user specific musculoskeletal models were created and driven by a motion capture system to evaluate the fingers' joint kinematics when performing two industrial related tasks. The kinematic chain of the exoskeleton was added to the musculoskeletal models and its compliance with the hand movements was evaluated. Our results show that the proposed exoskeleton design does not influence fingers' joints angles, the coefficient of determination between the model with and without exoskeleton being consistently high (mean R² = 0.93) and the nRMSE consistently low (mean nRMSE = 5.42°). These results are promising and this approach combining musculoskeletal and robotic modeling driven by motion capture data could be a key factor in the ergonomics validation of the design of orthotic devices and exoskeletons prior to manufacturing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Innovation in Robotic Surgery: The Indian Scenario

    Directory of Open Access Journals (Sweden)

    Suresh V Deshpande

    2015-01-01

    Full Text Available Robotics is the science. In scientific words a "Robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - in laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), which is the first and the only Indian contribution in the field of robotics in laparoscopy as a total voice controlled camera holding robotic arm developed without any support by industry or research institutes.

  6. Innovation in robotic surgery: the Indian scenario.

    Science.gov (United States)

    Deshpande, Suresh V

    2015-01-01

    Robotics is the science. In scientific words a "Robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in Industries, space exploration, and science. One such idea was to make an automated arm - A robot - In laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM) which is the first and the only Indian contribution in the field of robotics in laparoscopy as a total voice controlled camera holding robotic arm developed without any support by industry or research institutes.

  7. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic devices in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The user can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye tracking based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
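The abstract does not detail how the thousands of Peano-curve samples are turned into a calibration. One plausible sketch, assuming a simple affine gaze-to-workspace map fitted by least squares over a dense sweep (the simulated data, map, and function name are hypothetical stand-ins, not the GT3D method), is:

```python
import numpy as np

def fit_affine_calibration(gaze, robot):
    """Least-squares affine map from raw gaze readings to robot
    positions: robot ~ A @ gaze + b, solved from many (gaze, robot)
    pairs collected while the user tracks the robot along a dense path."""
    G = np.hstack([gaze, np.ones((len(gaze), 1))])  # homogeneous coords
    X, *_ = np.linalg.lstsq(G, robot, rcond=None)   # (4, 3) solution
    return X[:3].T, X[3]                            # A (3x3), b (3,)

# Simulated dense calibration sweep (stand-in for a Peano-curve path)
rng = np.random.default_rng(1)
robot_pts = rng.uniform(0, 1, (3000, 3))
A_true = np.array([[1.2, 0.1, 0.0], [0.0, 0.9, 0.2], [0.1, 0.0, 1.1]])
b_true = np.array([0.05, -0.02, 0.1])
gaze_raw = (np.linalg.inv(A_true) @ (robot_pts - b_true).T).T
gaze_raw += rng.normal(0, 0.01, gaze_raw.shape)     # tracker noise

A, b = fit_affine_calibration(gaze_raw, robot_pts)
resid = np.linalg.norm(gaze_raw @ A.T + b - robot_pts, axis=1)
print(resid.mean())  # residual on the order of the injected noise
```

With thousands of samples the fit averages out tracker noise, which is consistent with the paper's point that a dense automated sweep outperforms a dozen manually collected calibration points.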

  8. From robot to human grasping simulation

    CERN Document Server

    León, Beatriz; Sancho-Bru, Joaquin

    2013-01-01

    The human hand and its dexterity in grasping and manipulating objects are some of the hallmarks of the human species. For years, anatomic and biomechanical studies have deepened the understanding of the human hand’s functioning and, in parallel, the robotics community has been working on the design of robotic hands capable of manipulating objects with a performance similar to that of the human hand. However, although many researchers have partially studied various aspects, to date there has been no comprehensive characterization of the human hand’s function for grasping and manipulation of

  9. AssistMe robot, an assistance robotic platform

    Directory of Open Access Journals (Sweden)

    A. I. Alexan

    2012-06-01

    Full Text Available This paper presents the design and implementation of a full-size assistance robot. Its main purpose is to assist a person and eventually avoid a life-threatening situation. Its implementation revolves around a chipKIT Arduino board that interconnects a robotic base controller with a 7-inch tablet PC and various sensors. Due to the Android and Arduino combination, the robot can interact with the person and provide an easy development platform for future improvement and feature adding. The tablet PC is webcam-, WiFi- and Bluetooth-enabled, offering a versatile platform that is able to process data and at the same time provide the user with a friendly interface.

  10. Whole glove permeation of cyclohexanol through disposable nitrile gloves on a dextrous robot hand: Fist clenching vs. non-clenching.

    Science.gov (United States)

    Mathews, Airek R; Que Hee, Shane S

    2017-04-01

    The differences in permeation parameters when a gloved dextrous robot hand clenched and did not clench were investigated with the dynamic permeation system described in the companion paper. Increased cyclohexanol permeation through the gloves when the gloved hand clenched depended on glove thickness and porosity. The Sterling glove, the thinnest and most porous, was the least protective. Hand clenching promoted more permeation for the Sterling glove in terms of breakthrough times, steady-state permeation rate, and diffusion coefficient. The Safeskin glove showed increased permeation only for the steady-state permeation rate, but not breakthrough times or diffusion coefficient. The Blue and Purple gloves showed no differences when the hand was clenching or not. The correlational analysis supported differences between the clenching and non-clenching situations, and the risk assessment considered the worst and best scenarios relative to one and two hydrated hands that were and were not protected by specific gloves.

  11. SOPHIA: Soft Orthotic Physiotherapy Hand Interactive Aid

    Directory of Open Access Journals (Sweden)

    Alistair C. McConnell

    2017-06-01

    Full Text Available This work describes the design, fabrication, and initial testing of a Soft Orthotic Physiotherapy Hand Interactive Aid (SOPHIA) for stroke rehabilitation. SOPHIA consists of (1) a soft robotic exoskeleton, (2) a microcontroller-based control system driven by a brain–machine interface (BMI), and (3) a sensorized glove for passive rehabilitation. In contrast to other rehabilitation devices, SOPHIA is the first modular prototype of a rehabilitation system that is capable of three tasks: aiding extension-based assistive rehabilitation, monitoring patient exercises, and guiding passive rehabilitation. Our results show that this prototype of the device is capable of helping healthy subjects to open their hand. Finger extension is triggered by a command from the BMI, while a variety of sensors ensure a safe motion. All data gathered from the device will be used to guide further improvements to the prototype, aiming at developing specifications for the next-generation device, which could be used in future clinical trials.

  12. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

    This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.

  13. Interface colloidal robotic manipulator

    Science.gov (United States)

    Aronson, Igor; Snezhko, Oleksiy

    2015-08-04

    A magnetic colloidal system confined at the interface between two immiscible liquids and energized by an alternating magnetic field dynamically self-assembles into localized asters and arrays of asters. The colloidal system exhibits locomotion and shape change. By controlling a small external magnetic field applied parallel to the interface, structures can capture, transport, and position target particles.

  14. TeMoto: Intuitive Multi-Range Telerobotic System with Natural Gestural and Verbal Instruction Interface

    Directory of Open Access Journals (Sweden)

    Robert Valner

    2018-02-01

    Full Text Available Teleoperated mobile robots, equipped with object manipulation capabilities, provide safe means for executing dangerous tasks in hazardous environments without putting humans at risk. However, mainly due to a communication delay, complex operator interfaces and insufficient Situational Awareness (SA), the task productivity of telerobots remains inferior to human workers. This paper addresses the shortcomings of telerobots by proposing a combined approach of (i) a scalable and intuitive operator interface with gestural and verbal input, (ii) improved SA through sensor fusion according to documented best practices, (iii) integrated virtual fixtures for task simplification and minimizing the operator’s cognitive burden and (iv) integrated semiautonomous behaviors that further reduce cognitive burden and negate the impact of communication delays, execution latency and/or failures. The proposed teleoperation system, TeMoto, is implemented using ROS (Robot Operating System) to ensure hardware agnosticism, extensibility and community access. The operator’s command interface consists of a Leap Motion Controller for hand tracking, a Griffin PowerMate USB as a turn knob for scaling and a microphone for speech input. TeMoto is evaluated on multiple robots including two mobile manipulator platforms. In addition to standard, task-specific evaluation techniques (completion time, user studies, number of steps, etc.)—which are platform and task dependent and thus difficult to scale—this paper presents additional metrics for evaluating the user interface, including task-independent criteria for measuring generalized (i) task completion efficiency and (ii) operator context switching.

  15. The Arrival of Robotics in Spine Surgery: A Review of the Literature.

    Science.gov (United States)

    Ghasem, Alexander; Sharma, Akhil; Greif, Dylan N; Alam, Milad; Maaieh, Motasem Al

    2018-04-18

    Systematic Review. The authors aim to review comparative outcome measures between robotic and free-hand spine surgical procedures, including accuracy of spinal instrumentation, radiation exposure, operative time, hospital stay, and complication rates. Misplacement of pedicle screws in conventional open as well as minimally invasive surgical procedures has prompted the need for innovation and allowed the emergence of robotics in spine surgery. Prior to incorporation of robotic surgery in routine practice, demonstration of improved instrumentation accuracy, operative efficiency, and patient safety is required. A systematic search of the PubMed, OVID-MEDLINE, and Cochrane databases was performed for papers relevant to robotic assistance of pedicle screw placement. Inclusion criteria comprised English-language randomized controlled trials and prospective and retrospective cohort studies involving robotic instrumentation in the spine. Following abstract, title, and full-text review, 32 articles were selected for study inclusion. Intrapedicular accuracy in screw placement and subsequent complications were at least comparable, if not superior, in the robotic surgery cohort. There is evidence that total operative time is prolonged in robot-assisted surgery compared to the conventional free-hand technique. Radiation exposure appeared to be variable between studies; radiation time did decrease in the robot arm as the total number of robotic cases increased, suggesting a learning curve effect. Multi-level procedures appeared to tend toward earlier discharge in patients undergoing robotic spine surgery. The implementation of robotic technology for pedicle screw placement yields an acceptable level of accuracy on a highly consistent basis. Surgeons should remain vigilant about confirmation of robotic assisted screw trajectory, as drilling pathways have been shown to be altered by soft tissue pressures, forceful surgical application, and bony surface skiving.

  16. A natural-language interface to a mobile robot

    Science.gov (United States)

    Michalowski, S.; Crangle, C.; Liang, L.

    1987-01-01

    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.

  17. Design and Preliminary Feasibility Study of a Soft Robotic Glove for Hand Function Assistance in Stroke Survivors

    Directory of Open Access Journals (Sweden)

    Hong Kai Yap

    2017-10-01

    Full Text Available Various robotic exoskeletons have been proposed for hand function assistance during activities of daily living (ADL) of stroke survivors. However, traditional exoskeletons involve the use of complex rigid systems that impede the natural movement of joints, and thus reduce wearability and cause discomfort to the user. The objective of this paper is to design and evaluate a soft robotic glove that is able to provide hand function assistance using fabric-reinforced soft pneumatic actuators. These actuators are made of silicone rubber, which has an elastic modulus similar to that of human tissue. Thus, they are intrinsically soft and compliant. Upon air pressurization, they are able to support finger range of motion (ROM) and generate the desired actuation of the finger joints. In this work, the soft actuators were characterized in terms of their blocked tip force and normal and frictional grip force outputs. Combining the soft actuators and flexible textile materials, a soft robotic glove was developed for grasping assistance during ADL for stroke survivors. The glove was evaluated on five healthy participants for its assisted ROM and grip strength. A pilot test was performed with two stroke survivors to evaluate the efficacy of the glove in assisting functional grasping activities. Our results demonstrated that the actuators designed in this study could generate the desired force output at a low air pressure. The glove had high kinematic transparency and did not affect the active ROM of the finger joints when worn by the participants. With the assistance of the glove, the participants were able to perform grasping actions with sufficient assisted ROM and grip strength, without any voluntary effort. Additionally, the pilot test on stroke survivors demonstrated that the patients' grasping performance improved with the presence and assistance of the glove. Patient feedback questionnaires also showed a high level of patient satisfaction and comfort.

  18. Quality-of-life change associated with robotic-assisted therapy to improve hand motor function in patients with subacute stroke: a randomized clinical trial.

    Science.gov (United States)

    Kutner, Nancy G; Zhang, Rebecca; Butler, Andrew J; Wolf, Steven L; Alberts, Jay L

    2010-04-01

    At 6 months poststroke, most patients cannot incorporate their affected hand into daily activities, which in turn is likely to reduce their perceived quality of life. This preliminary study explored change in patient-reported, health-related quality of life associated with robotic-assisted therapy combined with reduced therapist-supervised training. A single-blind, multi-site, randomized clinical trial was conducted. Seventeen individuals who were 3 to 9 months poststroke participated. Sixty hours of therapist-supervised repetitive task practice (RTP) was compared with 30 hours of RTP combined with 30 hours of robotic-assisted therapy. Participants completed the Stroke Impact Scale (SIS) at baseline, immediately postintervention, and 2 months postintervention. Change in SIS score domains was assessed in a mixed model analysis. The combined therapy group had a greater increase in rating of mood from preintervention to postintervention, and the RTP-only group had a greater increase in rating of social participation from preintervention to follow-up. Both groups had statistically significant improvement in activities of daily living and instrumental activities of daily living scores from preintervention to postintervention. Both groups reported significant improvement in hand function postintervention and at follow-up, and the magnitude of these changes suggested clinical significance. The combined therapy group had significant improvements in stroke recovery rating postintervention and at follow-up, which appeared clinically significant; this also was true for stroke recovery rating from preintervention to follow-up in the RTP-only group. Limitations: Outcomes of 30 hours of RTP in the absence of robotic-assisted therapy remain unknown. Robotic-assisted therapy may be an effective alternative or adjunct to the delivery of intensive task practice interventions to enhance hand function recovery in patients with stroke.

  19. Designing a hands-on brain computer interface laboratory course.

    Science.gov (United States)

    Khalighinejad, Bahar; Long, Laura Kathleen; Mesgarani, Nima

    2016-08-01

    Devices and systems that interact with the brain have become a growing field of research and development in recent years. Engineering students are well positioned to contribute to both hardware development and signal analysis techniques in this field. However, this area has been left out of most engineering curricula. We developed an electroencephalography (EEG) based brain computer interface (BCI) laboratory course to educate students through hands-on experiments. The course is offered jointly by the Biomedical Engineering, Electrical Engineering, and Computer Science Departments of Columbia University in the City of New York and is open to senior undergraduate and graduate students. The course provides an effective introduction to the experimental design, neuroscience concepts, data analysis techniques, and technical skills required in the field of BCI.

  20. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    Science.gov (United States)

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye-arm-hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking similar control systems found in humans. We validate our model for visuomotor control of a humanoid robot.

  1. Long-term knowledge acquisition using contextual information in a memory-inspired robot architecture

    Science.gov (United States)

    Pratama, Ferdian; Mastrogiovanni, Fulvio; Lee, Soon Geul; Chong, Nak Young

    2017-03-01

    In this paper, we present a novel cognitive framework allowing a robot to form memories of relevant traits of its perceptions and to recall them when necessary. The framework is based on two main principles: on the one hand, we propose an architecture inspired by current knowledge in human memory organisation; on the other hand, we integrate such an architecture with the notion of context, which is used to modulate the knowledge acquisition process when consolidating memories and forming new ones, as well as with the notion of familiarity, which is employed to retrieve proper memories given relevant cues. Although much research has been carried out, which exploits Machine Learning approaches to provide robots with internal models of their environment (including objects and occurring events therein), we argue that such approaches may not be the right direction to follow if a long-term, continuous knowledge acquisition is to be achieved. As a case study scenario, we focus on both robot-environment and human-robot interaction processes. In case of robot-environment interaction, a robot performs pick and place movements using the objects in the workspace, at the same time observing their displacement on a table in front of it, and progressively forms memories defined as relevant cues (e.g. colour, shape or relative position) in a context-aware fashion. As far as human-robot interaction is concerned, the robot can recall specific snapshots representing past events using both sensory information and contextual cues upon request by humans.

  2. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    Science.gov (United States)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare it with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool use, and of infants during early cognitive development.
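
As a rough illustration of the mapping networks involved (a plain Gaussian RBF forward pass, not the paper's node-decoupled extended Kalman filter training), the network output is a weighted sum of basis activations:

```python
import math

# Illustrative sketch: a radial basis function network mapping an
# input vector (e.g. image coordinates of a seen target) to a scalar
# output (e.g. one arm joint angle) via Gaussian basis functions.
def rbf_predict(x, centres, widths, weights):
    """Weighted sum of Gaussian basis activations evaluated at x."""
    acts = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
            for c, s in zip(centres, widths)]
    return sum(w * a for w, a in zip(weights, acts))

# A basis centred exactly on the query has activation 1.0.
y = rbf_predict((0.5, 0.5), centres=[(0.5, 0.5)], widths=[0.1], weights=[2.0])
```

Incremental learning schemes of the kind the paper studies adjust the centres, widths and weights online as new eye/hand samples arrive.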

  3. Calibration of Robot Reference Frames for Enhanced Robot Positioning Accuracy

    OpenAIRE

    Cheng, Frank Shaopeng

    2008-01-01

    This chapter discusses the importance and methods of conducting robot workcell calibration for enhancing the accuracy of robot TCP positions in industrial robot applications. It shows that robot frame transformations define the robot geometric parameters, such as joint position variables, link dimensions, and joint offsets, in an industrial robot system. The D-H representation allows the robot designer to model the robot motion geometry with the four standard D-H parameters. The robot k...
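
The four standard D-H parameters compose into a homogeneous link transform; a minimal sketch of that modelling step (standard D-H convention, not the chapter's calibration procedure):

```python
import math

# Standard Denavit-Hartenberg link transform built from the four
# parameters: joint angle theta, link offset d, link length a and
# link twist alpha. Chaining these 4x4 matrices models the robot's
# motion geometry from base frame to TCP.
def dh_transform(theta, d, a, alpha):
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

# All-zero parameters yield the identity transform.
T = dh_transform(0.0, 0.0, 0.0, 0.0)
```

Calibration, in this view, amounts to refining the nominal parameter values so the chained transforms match the measured TCP positions.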

  4. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    Science.gov (United States)

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), promising more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool not with a person, and deflect users from personally and socially destructive behavior the soft bodies and surfaces could normally entice.

  5. Sensing and Force-Feedback Exoskeleton (SAFE) Robotic Glove.

    Science.gov (United States)

    Ben-Tzvi, Pinhas; Ma, Zhou

    2015-11-01

    This paper presents the design, implementation and experimental validation of a novel robotic haptic exoskeleton device to measure the user's hand motion and assist hand motion while remaining portable and lightweight. The device consists of a five-finger mechanism actuated with miniature DC motors through antagonistically routed cables at each finger, which act as both active and passive force actuators. The SAFE Glove is a wireless and self-contained mechatronic system that mounts over the dorsum of a bare hand and provides haptic force feedback to each finger. The glove is adaptable to a wide variety of finger sizes without constraining the range of motion. This makes it possible to accurately and comfortably track the complex motion of the finger and thumb joints associated with common movements of hand functions, including grip and release patterns. The glove can be wirelessly linked to a computer for displaying and recording the hand status through 3D Graphical User Interface (GUI) in real-time. The experimental results demonstrate that the SAFE Glove is capable of reliably modeling hand kinematics, measuring finger motion and assisting hand grasping motion. Simulation and experimental results show the potential of the proposed system in rehabilitation therapy and virtual reality applications.

  6. Robotized transcranial magnetic stimulation

    CERN Document Server

    Richter, Lars

    2014-01-01

    Presents new, cutting-edge algorithms for robot/camera calibration, sensor fusion and sensor calibration Explores the main challenges for accurate coil positioning, such as head motion, and outlines how active robotic motion compensation can outperform hand-held solutions Analyzes how a robotized system in medicine can alleviate concerns with a patient's safety, and presents a novel fault-tolerant algorithm (FTA) sensor for system safety

  7. Open core control software for surgical robots.

    Science.gov (United States)

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Today, patients and doctors in the operating room are surrounded by many medical devices, a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they are not open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would be possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. On the other hand, most academic software development for surgical robots is "home-made" within the research institution and not open to the public. Therefore, open source control software for surgical robots can be beneficial to this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies based on actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which plays the role of communicating with external medical devices. At the same time, it is essential to maintain stable operation within the asynchronous data transactions over the network.

  8. Multi-Locomotion Robotic Systems New Concepts of Bio-inspired Robotics

    CERN Document Server

    Fukuda, Toshio; Sekiyama, Kosuke; Aoyama, Tadayoshi

    2012-01-01

    Nowadays, much attention has been paid to robots working in the human living environment, in fields such as medicine, welfare, entertainment and so on. Various types of research are being actively conducted in a variety of fields such as artificial intelligence, cognitive engineering, sensor technology, interfaces and motion control. In the future, it is expected that super high-functional, human-like robots will be realized by integrating technologies from these various fields. The book represents new developments and advances in the field of bio-inspired robotics research, introducing the state of the art and the idea of a multi-locomotion robotic system to implement the diversity of animal motion. It covers theoretical and computational aspects of Passive Dynamic Autonomous Control (PDAC), robot motion control, multi-legged walking and climbing, as well as brachiation, focusing on concrete robot systems, components and applications. In addition, gorilla-type robot systems are described as...

  9. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  10. Implementing real-time robotic systems using CHIMERA II

    Science.gov (United States)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1990-01-01

    A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
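
The two scheduling policies named above can be illustrated with a toy ready-queue selector; the task representation is a hypothetical sketch, not CHIMERA II's actual kernel API:

```python
# Toy illustration of the two policies CHIMERA II's real-time kernel
# supports: earliest-deadline-first and highest-priority-first. The
# scheduler picks the next task from the ready set by the chosen rule.
def pick_next(tasks, policy):
    ready = [t for t in tasks if t["ready"]]
    if policy == "deadline":
        return min(ready, key=lambda t: t["deadline"])   # earliest deadline first
    return max(ready, key=lambda t: t["priority"])       # highest priority first

tasks = [
    {"name": "servo",  "ready": True, "deadline": 2, "priority": 5},
    {"name": "vision", "ready": True, "deadline": 1, "priority": 3},
]
```

Note that the two rules can disagree: here the deadline policy selects the vision task while the priority policy selects the servo task, which is why a kernel supporting both gives the robot programmer useful flexibility.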

  11. Prototype of Remote Controlled Robot Vehicle to Scan Radioactive Contaminated Areas

    International Nuclear Information System (INIS)

    Ratongasoandrazana, J.B.; Raoelina Andriambololona; Rambolamanana, G.; Andrianiaina, H.; Rajaobelison, J.

    2016-01-01

    Ionizing radiation is not directly perceptible by the human senses. Maintenance and handling of sources of such ionizing radiation presents risks of very serious and often irreversible harm to the human organism. Experimentation and maintenance work in such zones likewise presents risks requiring a minimum of precaution. Thus, the main objective of this work is to design and develop (hardware and software) a prototype of an educational, semi-autonomous, radio frequency-controlled robot vehicle, based on an 8-bit AVR RISC Flash microcontroller (ATmega128L), able to detect, identify and map a radioactively contaminated area. An integrated video camera coupled with a UHF video transmitter module, placed at the front of the robot, is used for visual feedback control to direct it toward a precise target location. Navigation information and the collected data are transmitted between the robot and the computer via two radio frequency transceivers for peer-to-peer serial data transfer in half-duplex mode. A joystick module connected to the computer's parallel port allows full motion control of the platform. A robot vehicle user interface program for the PC has been designed to allow full control of all functions of the robot vehicle.

  12. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    Science.gov (United States)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and sends suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals for a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
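
The decoded-command-to-servo chain described above might be sketched as follows; the command names, angles and confidence threshold are illustrative assumptions, not the paper's actual mapping:

```python
# Hypothetical mapping from a decoded mental command to a servo angle,
# mirroring the headset -> PC -> Arduino -> servo chain: the PC-side
# application would send the returned angle to the board, which drives
# the claw servo accordingly.
COMMAND_TO_ANGLE = {"open": 10, "close": 170, "neutral": 90}

def claw_command(decoded, confidence, threshold=0.6):
    """Return a servo angle in degrees, or None if the decoder is unsure."""
    if confidence < threshold or decoded not in COMMAND_TO_ANGLE:
        return None  # ignore low-confidence or unknown classifications
    return COMMAND_TO_ANGLE[decoded]
```

Gating on decoder confidence is a common safeguard in BCI control loops, since spurious classifications would otherwise twitch the actuator.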

  13. Grasping in Robotics

    CERN Document Server

    2013-01-01

    Grasping in Robotics contains original contributions in the field of grasping in robotics with a broad multidisciplinary approach. This gives the possibility of addressing all the major issues related to robotized grasping, including milestones in grasping through the centuries, mechanical design issues, control issues, modelling achievements and issues, formulations and software for simulation purposes, sensor and vision integration, applications in the industrial field and non-conventional applications (including service robotics and agriculture). The contributors to this book are experts in their own diverse and wide-ranging fields. This multidisciplinary approach can help make Grasping in Robotics of interest to a very wide audience. In particular, it can be a useful reference book for researchers, students and users in the wide field of grasping in robotics from many different disciplines, including mechanical design, hardware design, control design, user interfaces, modelling, simulation, sensors and hum...

  14. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    Directory of Open Access Journals (Sweden)

    Héctor Herrero

    2017-05-01

    Full Text Available This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques.
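
A state machine-based task executor of the kind described can be sketched minimally; the state and handler names are illustrative, not the paper's module interface:

```python
# Minimal sketch of a state machine-based task executor: each state
# maps to a handler that performs its work and returns the next state.
# Reusable handlers are what make new applications quick to compose.
def run_state_machine(transitions, start, stop):
    state, trace = start, []
    while state != stop:
        trace.append(state)
        state = transitions[state]()  # execute handler, get next state
    trace.append(stop)
    return trace

# A two-step pick-and-place process expressed as transitions.
steps = {"pick": lambda: "place", "place": lambda: "done"}
trace = run_state_machine(steps, "pick", "done")
```

In a graphical front end like the one the paper describes, operators would rearrange such transitions rather than reprogram the robot, which is where the reusability gain comes from.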

  15. Interface robotics in nuclear emergencies

    International Nuclear Information System (INIS)

    Ruiz Mungia, E.

    1998-01-01

    The area between the reactor building and the external wall of a nuclear power station could be affected in the case of a severe accident with repercussions outside the plant. The article describes a series of robotic machines that could be used in this area for building reconnaissance, improving transmissions, civil works and producing a radiological map. (Author)

  16. Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback

    Directory of Open Access Journals (Sweden)

    Cetin Mericli

    2011-06-01

    Full Text Available A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder and more tedious and error prone as the complexity of the task increases. In this paper, we present a new learning from demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over control to correct the behavior when the robot selects a wrong action to be executed. Corrections are captured as new state-action pairs, and the default controller output is replaced by the demonstrated corrections during autonomous execution when the current state of the robot is determined to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball dribbling task performed against stationary defender robots in a robot soccer scenario, where physical Aldebaran Nao humanoid robots are used. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
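
The correction-database idea, replaying a demonstrated action when the current state is close to a previously corrected one, can be sketched as follows (one-dimensional states and the similarity threshold are illustrative assumptions, not the paper's state representation):

```python
# Sketch of corrective-demonstration action selection: if the current
# state is similar enough to a previously corrected state, replay the
# human-demonstrated action; otherwise fall back to the hand-coded
# default controller.
def select_action(state, corrections, default_controller, threshold=0.5):
    if corrections:
        s, action = min(corrections, key=lambda c: abs(c[0] - state))
        if abs(s - state) <= threshold:
            return action  # replay the stored human correction
    return default_controller(state)

corrections = [(1.0, "turn_left")]  # one stored state-action correction
a1 = select_action(1.2, corrections, lambda s: "dribble")  # near a correction
a2 = select_action(3.0, corrections, lambda s: "dribble")  # far from any correction
```

The database grows as the teacher intervenes, so the hand-coded controller's coverage of hard cases improves without rewriting it.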

  17. Robot modelling; Control and applications with software

    Energy Technology Data Exchange (ETDEWEB)

    Ranky, P G; Ho, C Y

    1985-01-01

    This book provides a "picture" of robotics, covering both the theoretical aspects of modeling and the practical and design aspects of: robot programming; robot tooling and automated hand changing; implementation planning; testing; and software design for robot systems. The authors present an introduction to robotics with a systems approach. They describe not only the tasks relating to a single robot (or arm) but also systems of robots working together on a product or several products.

  18. Robotic Exoskeleton Hand with Pneumatic Actuators

    OpenAIRE

    Pinto, Hugo Miguel Mantas Costa

    2017-01-01

    With modern developments in smart portable devices and the miniaturization of technologies, society has been provided with computerized assistance for almost every daily activity, but the physical aspects have frequently been neglected. It is currently possible to make robots that process information through neural networks, that identify and mimic facial expressions and that replace manual labour in assembly plants, getting ever closer to skills associated with human beings. In spite of these technol...

  19. Multi-Robot Assembly Strategies and Metrics

    Science.gov (United States)

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234
