WorldWideScience

Sample records for human collaborative robot

  1. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, a large service robot, in a medical measurement scenario, we compare the timing of the robot's behaviors in three between-subject conditions. The results show that the relative timing of robot behaviors has significant effects on the number of problems participants encounter, and that the robot's verbal output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots.

  2. An Integrated Framework for Human-Robot Collaborative Manipulation.

    Science.gov (United States)

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I, learning to grasp the table, and 2) phase II, learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
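
    The confidence-gated leader/follower switching described in this record can be illustrated with a minimal sketch. The blending rule, threshold value, and all names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def blend_commands(u_reactive, u_proactive, confidence, threshold=0.7):
    """Weight proactive (leader) vs. reactive (follower) commands by prediction confidence.

    Above the threshold the robot leads using the predicted human motion;
    below it, the command falls back toward purely reactive table-leveling control.
    """
    w = 1.0 if confidence >= threshold else confidence / threshold
    return w * np.asarray(u_proactive) + (1.0 - w) * np.asarray(u_reactive)

# Low prediction confidence -> command is dominated by the reactive controller
u = blend_commands(u_reactive=[0.00, 0.02], u_proactive=[0.05, 0.00], confidence=0.3)
```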

  3. Essential technologies for developing human and robot collaborative system

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Nobuyuki; Suzuki, Katsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-10-01

    In this study, we aim to develop the concept of a new robot system, i.e., a 'human and robot collaborative system', for patrolling nuclear power plants. This paper deals with the two essential technologies developed for the system. One is an autonomous navigation program with a human intervention function, which is indispensable for human and robot collaboration. The other is a position estimation method using a gyroscope and TV images to make the estimation accuracy much higher for safe navigation. Feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)

  4. Humanoid Robot RH-1 for Collaborative Tasks: A Control Architecture for Human-Robot Cooperation

    Directory of Open Access Journals (Sweden)

    Concepción A. Monje

    2008-01-01

    Full Text Available The full-scale humanoid robot RH-1 has been developed entirely at the University Carlos III of Madrid. In this paper we present an advanced control system for this robot so that it can perform tasks in cooperation with humans. The collaborative tasks are carried out in a semi-autonomous way and are intended to be put into operation in real working environments where humans and robots should share the same space. Before presenting the control strategy, the kinematic model and a simplified dynamic model of the robot are presented. All the models and algorithms are verified through several simulations and experimental results.

  5. Trends in control and decision-making for human-robot collaboration systems

    CERN Document Server

    Zhang, Fumin

    2017-01-01

    This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. Since a measure of human collaboration is necessary for the optimal operation of any robotic system, the contributors draw on a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how such human strengths as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments. The book focuses on quantitative methods and control design for guaranteed robot performance and balanced human experience. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering: one-human–one-robot collaboration; one-human–multiple-robot collaboration; and human–swarm collaboration. Individual topic ar...

  6. Collaborative Robotics Design Considerations

    Science.gov (United States)

    2004-05-06

    in Multirobot Systems," IEEE Transactions on Robotics and Automation, Vol. 18, No. 5, October 2002 [3] Batavia, P., "A Survey of Collaborative... Transactions on Robotics and Automation, Vol. 18, No.5, Oct. 2002, pp 781-795. [21]Scholtz, J.C., "Human-Robot Interactions: Creating Synergistic Cyber...Parker, L.E, eds., Kluwer Academic Publishers, 2002, pp 185-193. [20] Roumeliotis, S., Bekey, G.A., "Distributed Multirobot Localization," IEEE

  7. Spatial Representation and Reasoning for Human-Robot Collaboration

    Science.gov (United States)

    2007-07-01

    gesture recognition system. Robot Hardware: The robot is a commercial iRobot B21r. It is an upright cylinder with a zero-turn-radius drive system and ... laser rangefinder. In addition, a high-fidelity stereo camera system was added to allow for gesture recognition. The robot's mobility capabilities ... interaction with the human team member may be based solely on gestures. Gesture Recognition: To maintain the covert nature of StealthBot

  8. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  9. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-11-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  10. An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Peidong Liang

    2016-01-01

    Full Text Available Human-robot collaboration (HRC) is a key feature that distinguishes the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve human-robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach to make a robot understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach that maps the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by employing a geometric vector approach based on Kinect. A biomimetic controller in discrete time is employed to make the Baxter robot arm imitate human arm behaviors based on the Baxter robot dynamics. An object-moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, and may have various applications in human-robot collaboration scenarios.
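
    As a rough, hypothetical illustration of the kind of discrete-time stiffness transfer mentioned above (none of the gains, names, or structure below come from the paper), a Cartesian impedance step might look like this:

```python
import numpy as np

def impedance_step(x, x_prev, x_ref, dt, K, D):
    """One discrete-time impedance control step.

    K, D : stiffness and damping matrices (e.g. derived from an estimated human arm stiffness).
    Returns the commanded end-effector force for this control cycle.
    """
    x, x_prev, x_ref = (np.asarray(v, float) for v in (x, x_prev, x_ref))
    x_dot = (x - x_prev) / dt                     # finite-difference velocity
    return K @ (x_ref - x) - D @ x_dot            # spring toward reference, damp motion

# Example with diagonal gains in a 2D task space
f = impedance_step([0.10, 0.0], [0.09, 0.0], [0.20, 0.0], dt=0.01,
                   K=np.diag([300.0, 300.0]), D=np.diag([20.0, 20.0]))
```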

  11. A Proactive Strategy for Safe Human-Robot Collaboration based on a Simplified Risk Analysis

    Directory of Open Access Journals (Sweden)

    Audun Sanderud

    2015-01-01

    Full Text Available With the increasing demand for human-robot collaboration systems, the need for safe robots is crucial. This paper presents a proactive strategy to give the robot an awareness of the current risk. The awareness is based upon a map of the space historically occupied by the operator. The map is built from a risk evaluation of each pose presented by the operator. The risk evaluation results in a risk field that can be used to evaluate the risk of a collaborative task. Based on this risk field, a control algorithm was developed that constantly reduces the current risk within the task constraints. Kinematic redundancy was exploited to perform the task within its constraints while simultaneously minimizing risk. Sphere-based geometric models were used for both the human and the robot. The strategy was tested in simulation, and implemented and experimentally tested on a NACHI MR20 7-axis industrial robot.
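
    One standard way to realize the redundancy exploitation described above is to track the task with the Jacobian pseudoinverse while descending a risk gradient in the null space. The sketch below assumes a generic risk field and gain; it is not the paper's algorithm:

```python
import numpy as np

def redundant_velocity_step(q, J, x_dot_task, risk_grad, k_risk=0.5):
    """Velocity-level redundancy resolution: follow the task, reduce risk in the null space.

    J          : task Jacobian at q (m x n, m < n for a redundant arm)
    x_dot_task : desired task-space velocity (m,)
    risk_grad  : gradient of the operator-occupancy risk field w.r.t. q (n,)
    """
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(len(q)) - J_pinv @ J        # projector onto the Jacobian null space
    return J_pinv @ x_dot_task - k_risk * null_proj @ risk_grad
```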

  12. Learning Controllers for Reactive and Proactive Behaviors in Human-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Sylvain eCalinon

    2016-06-01

    Full Text Available Designed to safely share the same workspace as humans and assist them in a variety of tasks, the new collaborative robots are targeting manufacturing and service applications that once were considered unattainable. The large diversity of tasks to carry out, the unstructured environments and the close interaction with humans call for collaborative robots to seamlessly adapt their behaviors so as to cooperate with the users successfully under different and possibly new situations (characterized, for example, by the positions of objects/landmarks in the environment, or by the user's pose). This paper investigates how controllers capable of reactive and proactive behaviors in collaborative tasks can be learned from demonstrations. The proposed approach exploits the temporal coherence and dynamic characteristics of the task observed during the training phase to build a probabilistic model that enables the robot both to react to the user's actions and to lead the task when needed. The method is an extension of the Hidden Semi-Markov Model where the duration probability distribution is adapted according to the interaction with the user. This Adaptive Duration Hidden Semi-Markov Model (ADHSMM) is used to retrieve a sequence of states governing a trajectory optimization that provides the reference and gain matrices to the robot controller. A proof-of-concept evaluation is first carried out in a pouring task. The proposed framework is then tested in a collaborative task using a 7-DOF backdrivable manipulator.
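
    For context, the explicit-duration forward recursion that a hidden semi-Markov model builds on can be written as below (a standard form with Gaussian emissions); the ADHSMM contribution of adapting the duration distribution p_j(d) online from the interaction is not reproduced here:

```latex
\alpha_t(j) \;=\; \sum_{d=1}^{D_{\max}} \sum_{i \neq j}
\alpha_{t-d}(i)\, a_{ij}\, p_j(d)
\prod_{s=t-d+1}^{t} \mathcal{N}\!\big(\boldsymbol{o}_s \,\big|\, \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j\big)
```

    Here a_{ij} are the transition probabilities, p_j(d) the state-duration probabilities, and the product scores the d most recent observations under state j's emission model.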

  13. Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci.

    Science.gov (United States)

    Nyholm, Sven

    2017-07-18

    Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

  14. Affective collaborative robots for safety & crisis management in the field

    NARCIS (Netherlands)

    Looije, R.; Neerincx, M.A.; Kruijff, G.J.M.

    2007-01-01

    The lack of human-robot collaboration currently presents a bottleneck to widespread use of robots in urban search & rescue (USAR) missions. The paper argues that an important aspect of realizing human-robot collaboration is collaborative control, and the recognition and expression of affect.

  15. Affective collaborative robots for safety & crisis management in the field

    NARCIS (Netherlands)

    Looije, R.; Neerincx, M.A.; Kruijff, G.J.M.

    2007-01-01

    The lack of human-robot collaboration currently presents a bottleneck to widespread use of robots in urban search & rescue (USAR) missions. The paper argues that an important aspect of realizing human-robot collaboration is collaborative control, and the recognition and expression of affect.

  16. RELEASING THE SYNERGY OF HUMAN-ROBOT COLLABORATION - REDUNDANT ROBOTICS IN PRACTICE

    National Research Council Canada - National Science Library

    Audun Rønning Sanderud; Trygve Thomessen

    2014-01-01

      Processes that involve workpieces with complex geometry requiring accurate processing with heavy tools are at best cumbersome to automate, and the long-term effects of having a human operator do...

  17. Distributed, collaborative human-robotic networks for outdoor experiments in search, identify and track

    Science.gov (United States)

    Lee, Daniel; McClelland, Mark; Schneider, Joseph; Yang, Tsung-Lin; Gallagher, Dan; Wang, John; Shah, Danelle; Ahmed, Nisar; Moran, Pete; Jones, Brandon; Leung, Tung-Sing; Nathan, Aaron; Kress-Gazit, Hadas; Campbell, Mark

    2010-10-01

    This paper presents an overview of a human-robotic system under development at Cornell which is capable of mapping an unknown environment, as well as discovering, tracking, and neutralizing several static and dynamic objects of interest. In addition, the robots can coordinate their individual tasks with one another without overly burdening a human operator. The testbed utilizes the Segway RMP platform, with lidar, vision, IMU and GPS sensors. The software draws from autonomous systems research, specifically in the areas of pose estimation, target detection and tracking, motion and behavioral planning, and human robot interaction. This paper also details experimental scenarios of mapping, tracking, and neutralization presented by way of pictures, data, and movies.

  18. Collaborative Composition For Musical Robots

    Directory of Open Access Journals (Sweden)

    Ajay Kapur

    2009-05-01

    Full Text Available The goal of this research is to collaborate with a number of different artists to explore the capabilities of robotic musical instruments to cultivate new music. This paper describes the challenges faced in using musical robotics in rehearsals and on the performance stage. It also describes the design of custom software frameworks and tools for the variety of composers and performers interacting with the new instruments. Details of how laboratory experiments and rehearsals moved to the concert hall in a variety of diverse performance scenarios are described. Finally, a paradigm for how to teach musical robotics as a multimedia composition course is discussed.

  19. Skill Based Instruction of Collaborative Robots in Industrial Settings

    DEFF Research Database (Denmark)

    Schou, Casper; Andersen, Rasmus Skovgaard; Chrysostomou, Dimitrios

    2016-01-01

    During the past decades an increasing need for more flexible and agile manufacturing equipment has spawned a growing interest in collaborative robots. Contrary to traditional industrial robots, collaborative robots are intended for operating alongside the production personnel in dynamic or semi-structured human environments. To cope with the environment and workflow of humans, new programming and control methods are needed compared to those of traditional industrial robots. This paper presents a task-level programming software tool allowing robotic novices to program industrial tasks on a collaborative robot. The tool, called Skill Based System (SBS), is founded on the concept of robot skills, which are parameterizable and task-related actions of the robot. Task programming is conducted by first sequencing skills followed by an online parameterization performed using kinesthetic teaching. Through...
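
    As a loose illustration of the skill-sequencing idea (the class, fields, and pose values below are hypothetical and not the SBS API), a task could be modeled as an ordered list of parameterizable skills whose parameters are filled in afterwards by kinesthetic teaching:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A task-related, parameterizable robot action (e.g. pick, place)."""
    name: str
    params: dict = field(default_factory=dict)

    def is_parameterized(self) -> bool:
        return all(v is not None for v in self.params.values())

# Step 1: sequence the skills; Step 2: teach the missing parameters by guiding the arm
task = [Skill("pick", {"grasp_pose": None}), Skill("place", {"release_pose": None})]
task[0].params["grasp_pose"] = (0.42, -0.10, 0.15)   # pose recorded during kinesthetic teaching
print(all(s.is_parameterized() for s in task))       # False until every skill is taught
```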

  20. Easy Reconfiguration of Modular Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper

    2016-01-01

    Collaborative robots have gained a high interest in both research and industry over the past decade as a response to the need for more flexible and agile manufacturing equipment. In contrast to traditional industrial robots, collaborative robots are not isolated by fences, but work alongside the production staff, collaborating to perform common tasks. This change of environment imposes a much more dynamic lifecycle for the robot which consequently requires new ways of interacting. This thesis investigates how the changeover to a new task on a collaborative robot can be performed by the shop floor operators already working alongside the robot. To effectively perform this changeover, the operator must both reconfigure the hardware of the robot and reprogram the robot to match the new task. To enable shop floor operators to quickly and intuitively program the robot, this thesis proposes the use...

  1. Natural multimodal communication for human–robot collaboration

    National Research Council Canada - National Science Library

    Maurtua, Iñaki; Fernández, Izaskun; Tellaeche, Alberto; Kildal, Johan; Susperregi, Loreto; Ibarguren, Aitor; Sierra, Basilio

    2017-01-01

    This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real industrial settings...

  2. Adaptive collaborative control of highly redundant robots

    Science.gov (United States)

    Handelman, David A.

    2008-04-01

    The agility and adaptability of biological systems are worthwhile goals for next-generation unmanned ground vehicles. Management of the requisite number of degrees of freedom, however, remains a challenge, as does the ability of an operator to transfer behavioral intent from human to robot. This paper reviews American Android research funded by NASA, DARPA, and the U.S. Army that attempts to address these issues. Limb coordination technology, an iterative form of inverse kinematics, provides a fundamental ability to control balance and posture independently in highly redundant systems. Goal positions and orientations of distal points of the robot skeleton, such as the hands and feet of a humanoid robot, become variable constraints, as does center-of-gravity position. Behaviors utilize these goals to synthesize full-body motion. Biped walking, crawling and grasping are illustrated, and behavior parameterization, layering and portability are discussed. Robotic skill acquisition enables a show-and-tell approach to behavior modification. Declarative rules built verbally by an operator in the field define nominal task plans, and neural networks trained with verbal, manual and visual signals provide additional behavior shaping. Anticipated benefits of the resultant adaptive collaborative controller for unmanned ground vehicles include increased robot autonomy, reduced operator workload and reduced operator training and skill requirements.
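
    The abstract does not give the limb-coordination equations; a generic iterative inverse-kinematics step of the damped least-squares kind alluded to looks roughly like this (function names and gains are assumptions):

```python
import numpy as np

def dls_ik_step(q, jacobian, forward_kin, x_goal, damping=0.05, step=1.0):
    """One damped least-squares IK iteration toward a distal-point goal.

    jacobian(q)    -> (m x n) task Jacobian
    forward_kin(q) -> current (m,) position of the distal point (e.g. a hand or foot)
    Iterating this update drives the distal point toward x_goal.
    """
    J = jacobian(q)
    e = np.asarray(x_goal, float) - forward_kin(q)          # task-space error
    JJt = J @ J.T
    dq = J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), e)
    return np.asarray(q, float) + step * dq
```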

  3. Collaborative robotic team design and integration

    Science.gov (United States)

    Spofford, John R.; Anhalt, David J.; Herron, Jennifer B.; Lapin, Brett D.

    2000-07-01

    Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. Interactions among such team members enable a variety of mission roles beyond those achievable with single robots or homogeneous teams. Key technologies include docking for power and data transfer, marsupial transport and deployment, collaborative team user interface, cooperative obstacle negotiation, distributed sensing, and peer inspection. This paper describes recent results in the integration and evaluation of component technologies within a collaborative system design. Integration considerations include requirement definition, flexible design management, interface control, and incremental technology integration. Collaborative system requirements are derived from mission objectives and robotic roles, and impact system and individual robot design at several levels. Design management is a challenge in a dynamic environment, with rapid evolution of mission objectives and available technologies. The object-oriented system model approach employed includes both software and hardware object representations to enable on-the-fly system and robot reconfiguration. Controlled interfaces among robots include mechanical, behavioral, communications, and electrical parameters. Technologies are under development by several organizations within the TMR program community. The incremental integration and validation of these within the collaborative system architecture reduces development risk through frequent experimental evaluations. The TMR system configuration includes Packbot-Perceivers, Packbot-Effectors, and Throwbots. Surrogates for these robots are used to validate and refine designs for multi-robot interaction components. Collaborative capability results from recent experimental evaluations are presented.

  4. Robotics for Human Exploration

    Science.gov (United States)

    Fong, Terrence; Deans, Mathew; Bualat, Maria

    2013-01-01

    Robots can do a variety of work to increase the productivity of human explorers. Robots can perform tasks that are tedious, highly repetitive or long-duration. Robots can perform precursor tasks, such as reconnaissance, which help prepare for future human activity. Robots can work in support of astronauts, assisting or performing tasks in parallel. Robots can also perform "follow-up" work, completing tasks designated or started by humans. In this paper, we summarize the development and testing of robots designed to improve future human exploration of space.

  5. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  6. Distributed, Collaborative Human-Robotic Networks for Outdoor Experiments in Search, Identify and Track

    Science.gov (United States)

    2011-01-11

    design 3.3 Computers: Each robot is designed to mount two Mini-ITX form factor custom computers. Each computer is equipped with a Core 2 Duo Mobile ... curve built from the output of the A* algorithm. The planned paths are then fed into a modified vector polar histogram (VPH) controller which ... provides motor actuation commands to the Segway platform. The VPH controller continuously aims for a look-ahead point on the path a set distance away

  7. Research on Human-Robot Joint System for Lunar Exploration

    Science.gov (United States)

    Zhang, Wei

    The lunar exploration program in China is in progress. In order to reduce human workload and costs, and to conduct research more effectively and efficiently, human-robot joint systems are necessary for lunar exploration. The concept of a human-robot joint system for lunar exploration is studied in this paper. The possible ways of collaboration between humans and robots, and the collaborative activities that can be conducted for lunar exploration, are discussed. Moreover, the preliminary configuration of a human-robot joint system is presented.

  8. Human-robot collaboration of mobile robots for aged and disabled assistance: cognition modelling and application

    Institute of Scientific and Technical Information of China (English)

    屠大维; 江济良; 许烁; 郭帅; 何永义; 谈士力; 方明伦

    2012-01-01

    The work aims to search for a new method of human-robot collaboration (HRC) for the application of aged and disabled assistance. Different from previous human-robot studies focusing on integrating the human decision-making intelligence by qualitative judgment with the robots' reasoning intelligence by quantitative calculation, this study gives a new philosophy for HRC, namely, adopting a semantic web of cognitive reasoning to promote human-robot interaction (HRI), constructing a cognitive HRC model by taking reference from the adaptive control of thought-rational (ACT-R) human cognitive architecture, and realizing human-robot intelligence integration (HRII) by the mutual encouragement, connection and integration of the functional modules of human, robot, perception, HRI and human-robot coupling, etc. Its technical feasibility was validated by experiment. Although this study targets mobile service robots, it can be extensively used in other types of service robots such as smart rehabilitation beds, wheelchairs and cleaning equipment.

  9. A Plug and Produce Framework for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    Collaborative robots are today ever more interesting in response to the increasing need for agile manufacturing equipment. Contrary to traditional industrial robots, collaborative robots are intended for working in dynamic environments alongside the production staff. To cope with the dynamic environment and workflow, new configuration and control methods are needed compared to those of traditional industrial robots. The new methods should enable shop floor operators to reconfigure the robot. This paper presents a plug and produce framework for industrial collaborative robots. The paper focuses ... of the framework through a series of reconfigurations performed on a modular collaborative robot.

  10. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  11. Human-Robot Interaction: Status and Challenges.

    Science.gov (United States)

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in the areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  12. Collaborative path planning for a robotic wheelchair.

    Science.gov (United States)

    Zeng, Qiang; Teo, Chee Leong; Rebsamen, Brice; Burdet, Etienne

    2008-11-01

    Generating a path to guide a wheelchair's motion faces two challenges. First, the path is located in the human environment, which is usually unstructured and dynamic; thus, it is difficult to generate a reliable map and plan paths on it by artificial intelligence. Second, the wheelchair, whose task is to carry a human user, should move on a smooth and comfortable path adapted to the user's intentions. To meet these challenges, we propose that the human operator and the robot interact to create and gradually improve a guide path. This paper introduces design tools to enable an intuitive interaction, and reports experiments performed with healthy subjects in order to investigate this collaborative path learning strategy. We analyzed features of the optimal paths and user evaluations in representative conditions. This was complemented by a questionnaire filled out by the subjects after the experiments. The results demonstrate the effectiveness of this approach, and show the utility and complementarity of the tools for designing ergonomic guide paths.

  13. Integration of humanoid robots in collaborative working environment: a case study on motion generation

    OpenAIRE

    Stasse, Olivier; Ruland, Rudolf; Lamiraux, Florent; Kheddar, Abderrahmane; Yokoi, Kazuhito; Prinz, Wolfgang

    2009-01-01

    This paper illustrates through a practical example an integration of a humanoid robotic architecture with an open-platform collaborative working environment called BSCW (Be Smart-Cooperate Worldwide). BSCW is primarily designed to advocate a futuristic shared workspace system for humans. We exemplify how a complex robotic system (such as a humanoid robot) can be integrated as a proactive collaborative agent which provides services and interacts with other agents shari...

  14. Analysis of human emotion in human-robot interaction

    Science.gov (United States)

    Blar, Noraidah; Jafar, Fairul Azni; Abdullah, Nurhidayu; Muhammad, Mohd Nazrin; Kassim, Anuar Muhamed

    2015-05-01

    Robots are widely applied in human work, such as in industry and hospitals. Therefore, it is believed that humans and robots can collaborate well to achieve an optimum result of work. The objectives of this project are to analyze human-robot collaboration and to understand human feelings (kansei factors) when dealing with a robot, so that the robot can adapt to the humans' feelings. Researchers are currently exploring the area of human-robot interaction with the intention of reducing problems that persist in today's society. Studies have found that to achieve good interaction between human and robot, it is first necessary to understand the abilities of each. Kansei Engineering in robotics was used for the project. The project experiments were conducted by distributing a questionnaire to students and technicians. The questionnaire results were then analyzed using SPSS. The analysis shows that five feelings are significant to humans in human-robot interaction: anxious, fatigued, relaxed, peaceful, and impressed.

  15. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells.

    Science.gov (United States)

    Marvel, Jeremy A; Norcross, Rick

    2017-04-01

    We provide an overview of and guidance for the Speed and Separation Monitoring (SSM) methodology as presented in the International Organization for Standardization's technical specification 15066 on collaborative robot safety. Such functionality is provided by external, intelligent observer systems integrated into a robotic workcell. The SSM minimum protective distance function equation is discussed in detail, with consideration for the input values, implementation specifications, and performance expectations. We provide analytical analyses and test results of the current equation, discuss considerations for implementing SSM in human-occupied environments, and provide directions for technological advancements toward standardization.
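
    For reference, a commonly cited form of the SSM protective separation distance in ISO/TS 15066 combines operator and robot contributions with uncertainty terms; the decomposition below is a summary, and the specification itself should be consulted for the exact expansion of each term:

```latex
S_p(t_0) \;=\; S_h + S_r + S_s + C + Z_d + Z_r
```

    Here S_h is the operator's contribution (motion during the robot's reaction and stopping time), S_r the robot's motion during its reaction time, S_s the robot's stopping distance, C the intrusion distance, and Z_d, Z_r the measurement uncertainties of the operator and robot positions.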

  16. Human-Robot Planetary Exploration Teams

    Science.gov (United States)

    Tyree, Kimberly

    2004-01-01

    areas of our research are safety and crew time efficiency. For safety, our work involves enabling humans to reliably communicate with a robot while moving in the same workspace, and enabling robots to monitor and advise humans of potential problems. Voice, gesture, remote computer control, and enhanced robot intelligence are methods we are studying. For crew time efficiency, we are investigating the effects of assigning different roles to humans and robots in collaborative exploration scenarios.

  17. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  18. Learning Controllers for Reactive and Proactive Behaviors in Human–Robot Collaboration

    National Research Council Canada - National Science Library

    Rozo, Leonel; Silvério, João; Calinon, Sylvain; Caldwell, Darwin G

    2016-01-01

    Designed to safely share the same workspace as humans and assist them in a variety of tasks, the new collaborative robots are targeting manufacturing and service applications that once were considered unattainable...

  19. Humans and Robots. Educational Brief.

    Science.gov (United States)

    National Aeronautics and Space Administration, Washington, DC.

    This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)

  20. Centralised versus Decentralised Control Reconfiguration for Collaborating Underwater Robots

    DEFF Research Database (Denmark)

    Furno, Lidia; Nielsen, Mikkel Cornelius; Blanke, Mogens

    2015-01-01

    The present paper introduces an approach to fault-tolerant reconfiguration for collaborating underwater robots. Fault-tolerant reconfiguration is obtained using the virtual actuator approach, Steen (2005). The paper investigates properties of a centralised versus a decentralised implementation and assesses the capabilities under communication constraints between the individual robots. In the centralised case, each robot sends information related to its own status to a unique virtual actuator that computes the necessary reconfiguration. In the decentralised case, each robot is equipped with its own ... an underwater drill needs to be transported and positioned by three collaborating robots as part of an underwater autonomous operation.

  1. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    In many applications multiple robots in Mobile Ad-hoc Networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services ... A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group...

  2. A Plug and Produce Framework for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2017-01-01

    Collaborative robots are today ever more interesting in response to the increasing need for agile manufacturing equipment. Contrary to traditional industrial robots, collaborative robots are intended for working in dynamic environments alongside the production staff. To cope with the dynamic environment and workflow, new configuration and control methods are needed compared to those of traditional industrial robots. The new methods should enable shop floor operators to reconfigure the robot. This article presents a plug and produce framework for industrial collaborative robots. The article ... into agents and thus supports the software sharing of the robot operating system community. A clear separation of the hardware agents and the higher level task control is achieved through standardization of the functional interface, a standardization maintaining the possibility of specialized function...

  3. Human assisted robotic exploration

    Science.gov (United States)

    Files, B. T.; Canady, J.; Warnell, G.; Stump, E.; Nothwang, W. D.; Marathe, A. R.

    2016-05-01

    In support of achieving better performance on autonomous mapping and exploration tasks by incorporating human input, we seek here to first characterize humans' ability to recognize locations from limited visual information. Such a characterization is critical to the design of a human-in-the-loop system faced with deciding whether and when human input is useful. In this work, we develop a novel and practical place-recognition task that presents humans with video clips captured by a navigating ground robot. Using this task, we find experimentally that human performance does not seem to depend on factors such as clip length or familiarity with the scene and also that there is significant variability across subjects. Moreover, we find that humans significantly outperform a state-of-the-art computational solution to this problem, suggesting the utility of incorporating human input in autonomous mapping and exploration techniques.

  4. Cognitive Robotics, Embodied Cognition and Human-Robot Interaction

    Science.gov (United States)

    2010-11-03

    Cognitive Robotics , Embodied Cognition and Human-Robot Interaction Greg Trafton, Ph.D Naval Research Laboratory Wednesday, November 3, 2010 Report...2. REPORT TYPE N/A 3. DATES COVERED - 4. TITLE AND SUBTITLE Cognitive Robotics , Embodied Cognition and Human-Robot Interaction 5a. CONTRACT...that cognition is for action (embodied cognition) • We are building embodied models for cognitive robotics and human-robot interaction • Online

  5. The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments

    NARCIS (Netherlands)

    Marconi, L.; Melchiorri, C.; Beetz, M.; Pangercic, D.; Siegwart, R.; Leutenegger, S.; Carloni, R.; Stramigioli, S.; Bruyninckx, H.; Doherty, P.; Kleiner, A.; Lippiello, V.; Finzi, A.; Siciliano, B.; Sala, A.; Tomatis, N.

    2012-01-01

    The goal of the paper is to present the foreseen research activity of the European project “SHERPA”, whose activities will start officially on February 1st, 2013. The goal of SHERPA is to develop a mixed ground and aerial robotic platform to support search and rescue activities in a real-world hostile

  6. The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments

    NARCIS (Netherlands)

    Marconi, L.; Melchiorri, C.; Beetz, M.; Pangercic, D.; Siegwart, R.; Leutenegger, S.; Carloni, Raffaella; Stramigioli, Stefano; Bruyninckx, H.; Doherty, P.; Kleiner, A.; Lippiello, V.; Finzi, A.; Siciliano, B.; Sala, A.; Tomatis, N.

    2012-01-01

    The goal of the paper is to present the foreseen research activity of the European project “SHERPA”, whose activities will start officially on February 1st, 2013. The goal of SHERPA is to develop a mixed ground and aerial robotic platform to support search and rescue activities in a real-world hostile

  7. Self-Organization and Self-Coordination in Welding Automation with Collaborating Teams of Industrial Robots

    Directory of Open Access Journals (Sweden)

    Günther Starke

    2016-11-01

    Full Text Available In welding automation, growing interest can be recognized in applying teams of industrial robots to perform manufacturing processes through collaboration. Although robot teamwork can increase profitability and cost-effectiveness in production, the programming of the robots is still a problem. It is extremely time consuming and requires special expertise in synchronizing the activities of the robots to avoid any collision. Therefore, a research project has been initiated to solve those problems. This paper will present strategies, concepts, and research results in applying the robot operating system (ROS) and ROS-based solutions to overcome existing technical deficits through the integration of self-organization capabilities, autonomous path planning, and self-coordination of the robots’ work. The new approach should contribute to improving the application of robot teamwork and collaboration in the manufacturing sector at a higher level of flexibility and reduced need for human intervention.

  8. A mechanism of human-robot coupling and collaborative operation for indoor mobile service robots based on a cognitive architecture model

    Institute of Scientific and Technical Information of China (English)

    江济良; 屠大维; 张国栋; 赵其杰

    2012-01-01

    For developing a human-machine intelligent system consisting of mobile service robots and special users, such as the elderly and disabled, a mechanism of human-robot coupling and collaborative operation based on an adaptive control of thought-rational (ACT-R) cognitive architecture model was put forward in this paper. A system of human-robot integration and collaborative operation for indoor mobile service robots was designed overall based on the ACT-R cognitive architecture. A human-robot coupling intelligent interface was designed based on the ACT-R cognitive architecture in this system through simple, natural human-robot interaction modalities. An operation mechanism of human-robot integration and collaborative decision-making for indoor mobile service robots was proposed and established through human-robot-environment space perception coupling. Finally, an experiment of human-robot coupling and collaborative operation was conducted safely and efficiently in an indoor environment, thus verifying the feasibility of the mechanism.

  9. Trajectory Generation and Adjustment Method for Robot Manipulators in Human-Robot Collaboration

    Institute of Scientific and Technical Information of China (English)

    刘维惠; 陈殿生; 张立志

    2016-01-01

    For trajectory planning of service robot manipulators in daily living environments with obstacles, a trajectory generation and adjustment method in human-robot collaboration is proposed. First, an approach is designed to produce a trajectory with a shape similar to the demonstrated one based on the dynamic movement primitive (DMP) model. Here, the problem of shape distortion caused by multi-degree-of-freedom coupling is solved by projecting the 3D target point onto the plane containing the demonstration trajectory, then generating the 3D trajectory by using Rodrigues' rotation formula. Thus the shape character of the produced trajectory can be ensured in all directions. Secondly, the trajectory can be adjusted by inserting interactive points to meet the operation requirements in complex environments with obstacles of different shapes, and it is then smoothed by a dual parabolic interpolation algorithm. Lastly, an interactive interface is built in ROS (robot operating system) under the idea of human-robot collaboration. Operators can intuitively help a manipulator generate and adjust the 3D trajectory of the end-effector in environments with or without obstacles. Obstacle-avoidance experiments validate the intuitiveness and flexibility of the proposed approach, which can adapt to complex daily-living environments with multiple kinds of obstacles.
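
    The Rodrigues' rotation formula used to lift the planar trajectory back into 3D is standard; a minimal sketch follows (the DMP machinery and the paper's projection step are omitted, and the example values are made up):

```python
import numpy as np

def rodrigues_rotate(v, axis, theta):
    """Rotate vector v about a unit axis by angle theta (Rodrigues' rotation formula)."""
    v = np.asarray(v, float)
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

# Example: rotate a point of the planar trajectory about the y-axis by 30 degrees
p = rodrigues_rotate([0.3, 0.1, 0.0], axis=[0.0, 1.0, 0.0], theta=np.pi / 6)
```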

  10. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for the 3D-posture computation of an unknown object using the collaborative hybrid stereo vision system, and in this way the robot team is steered to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.
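
    The paper's triangulation method is not detailed in the abstract; a generic two-ray midpoint triangulation of the kind such systems rely on is sketched below (the camera centers and ray directions would come from the calibrated hybrid stereo pair):

```python
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """Midpoint triangulation: 3D point halfway between the closest points of two viewing rays.

    c1, c2 : camera centers; d1, d2 : ray directions toward the observed feature.
    """
    c1, d1, c2, d2 = (np.asarray(v, float) for v in (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # near zero only for (almost) parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```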

  11. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is that this transitioning should be done by the shop floor operators, not highly specialized engineers. The hardware reconfiguration framework adopts a modular architecture for the collaborative robot which dictates a clear segmentation of the robot into well-defined exchangeable modules. Four main objectives for the hardware reconfiguration framework: 1) Modular architecture, 2) Module...

  12. Interactions between Humans and Robots

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2013-01-01

    Combining multiple scientific disciplines, robotic technology has made significant progress the last decade, and so did the interactions between humans and robots. This article updates the agenda for robotic research by highlighting the factors that affect Human-Robot Interaction (HRI), and explains the relationships and dependencies that exist between them. The four main factors that define the properties of a robot, and therefore the interaction, are distributed in two dimensions: (1) Intelligence (Control - Autonomy), and (2) Perspective (Tool - Medium). Based on these factors, we introduce a generic model for comparing and contrasting robots (CCM), aiming to provide a common platform for robot designers, developers and users. The framework for HRI we propose stems mainly from the vagueness and the lack of clarity that has been observed in the definitions of both Direct and Indirect...

  13. Interactions between Humans and Robots

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2013-01-01

    Combining multiple scientific disciplines, robotic technology has made significant progress the last decade, and so did the interactions between humans and robots. This article updates the agenda for robotic research by highlighting the factors that affect Human-Robot Interaction (HRI), and explains the relationships and dependencies that exist between them. The four main factors that define the properties of a robot, and therefore the interaction, are distributed in two dimensions: (1) Intelligence (Control - Autonomy), and (2) Perspective (Tool - Medium). Based on these factors, we introduce a generic model for comparing and contrasting robots (CCM), aiming to provide a common platform for robot designers, developers and users. The framework for HRI we propose stems mainly from the vagueness and the lack of clarity that has been observed in the definitions of both Direct and Indirect...

  14. Coordination Algorithm for Multi Robot Collaboration in Soccer Game

    Directory of Open Access Journals (Sweden)

    Awang Hendrianto Pratomo

    2011-01-01

    Full Text Available Robot Soccer is a rich domain for the study of artificial intelligence. Teams of players must work together in order to put the ball in the opposing goal. Learning is essential in this task since the dynamics of the system can change as the opponents’ behaviours change. The players must be able to adapt to new situations. In this paper, we create a passing, obstacle-avoidance and shooting strategy for robot soccer coordination. Based on a scenario in robot soccer, we simulate a mini case study which involves two robots and a ball. This research proposes a coordination algorithm for robot collaboration in a soccer game. The method is based on the role, act, and behaviour of the robots. The actions of each robot depend on the created situation. The simulation results show its potential to be applied in a real robot soccer game.

  15. Reliability Architecture for Collaborative Robot Control Systems in Complex Environments

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2016-02-01

    Full Text Available Many different kinds of robot systems have been successfully deployed in complex environments, and research into collaborative control systems between different robots, which can be seen as hybrid internetware safety-critical systems, has become essential. This paper discusses ways to construct a robust and secure reliability architecture for collaborative robot control systems in complex environments. First, an indicator system for evaluating the real-time reliability of hybrid internetware systems is established. Next, a dynamic collaborative reliability model for components of hybrid internetware systems is proposed. Then, a reliable, adaptive and evolutionary computation method for hybrid internetware systems is proposed, and a timing consistency verification solution for collaborative robot control internetware applications is studied. Finally, a multi-level security model supporting dynamic resource allocation is established.

  16. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    Traditionally, robots have been assistant machines in factories and a ubiquitous part of science fiction movies. But within the last decade robots have started to emerge in everyday human environments. Today they are present in our everyday environment in the shape of, for example, vacuum cleaners, lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our everyday environments, they need to become more intelligent, and to behave safely and naturally around the humans in the environment. This thesis deals with making intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specifically, the emphasis is on safe and natural motion and navigation issues. The first part of the work focuses on developing a robotic system, which estimates human interest in interacting...

  17. Additive Manufacturing Cloud via Peer-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Yuan Yao

    2016-05-01

    Full Text Available When building a 3D printing cloud manufacturing platform, self-sensing and collaboration on manufacturing resources present challenging problems. This paper proposes a peer-robot collaboration framework to deal with these issues. Each robot combines heterogeneous additive manufacturing hardware and software, acting as an intelligent agent. Through collaboration with other robots, it forms a dynamic and scalable integration manufacturing system. The entire distributed system is managed by rules that employ an internal rule engine, which supports rule conversion and conflict resolution. Two additive manufacturing service scenarios are designed to analyse the efficiency and scalability of the framework. Experiments show that the presented method performs well in tasks requiring large-scale access to resources and collaboration.
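
    The record mentions an internal rule engine with rule conversion and conflict resolution. The following minimal sketch shows a priority-ordered rule dispatch as one simple way such an engine could behave; the Rule structure, priorities and example rules are assumptions, not the engine from the cited paper.

```python
# Illustrative sketch only: a tiny rule engine for dispatching print jobs among
# peer robots. The rule structure and the priority-based conflict resolution
# shown here are assumptions for illustration, not the cited engine.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Rule:
    name: str
    priority: int                         # higher wins when several rules fire
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], str]


def evaluate(rules, job):
    """Fire the highest-priority rule whose condition matches the job."""
    matching = [r for r in rules if r.condition(job)]
    if not matching:
        return "reject: no capable peer"
    winner = max(matching, key=lambda r: r.priority)
    return winner.action(job)


rules = [
    Rule("large_part_to_big_printer", 2,
         lambda j: j["volume_cm3"] > 1000,
         lambda j: "route to peer with large build volume"),
    Rule("default_local_print", 1,
         lambda j: True,
         lambda j: "print locally"),
]

if __name__ == "__main__":
    print(evaluate(rules, {"volume_cm3": 2500}))   # large-part rule wins
    print(evaluate(rules, {"volume_cm3": 120}))    # falls through to default
```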

  19. Human-robot collaboration navigation of service robots for the elderly and disabled in an intelligent space

    Institute of Scientific and Technical Information of China (English)

    江济良; 屠大维

    2014-01-01

    For users with basic cognitive ability, such as the elderly with reduced mobility, physically disabled persons, and motion- and speech-impaired patients, a human-robot integrated navigation system is established in this paper. The two motion modes of random walking and autonomous navigation can be switched freely by the user through human-robot interaction. The robot triggers production rules for generating the corresponding walking behavior in real time according to the surroundings and operating conditions. In the process of human-robot roaming, an intelligent space is rendered synchronously on the human-robot interface based on augmented reality. Therefore, human-robot integrated perception, decision and execution are achieved during roaming. Human-robot roaming experiments for a mobile service robot were carried out in an indoor environment, which verifies the feasibility of the proposed system.

  20. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  1. 1st AAU Workshop on Human-Centered Robotics

    DEFF Research Database (Denmark)

    The 2012 AAU Workshop on Human-Centered Robotics took place on 15 Nov. 2012, at Aalborg University, Aalborg. The workshop provides a platform for robotics researchers, including professors, PhD and Master students, to exchange their ideas and latest results. The objective is to foster closer interaction among researchers from multiple relevant disciplines in human-centered robotics and, consequently, to promote collaborations across departments of all faculties towards making our center a center of excellence in robotics. The workshop was a great success, with 13 presentations, attracting more than 45 participants from AAU, SDU, DTI and industrial companies as well. The proceedings contain 7 full papers selected from the full papers submitted afterwards on the basis of workshop abstracts. The papers represent major research developments in robotics at AAU, including medical robots...

  2. Robots and Humans: Synergy in Planetary Exploration

    Science.gov (United States)

    Landis, Geoffrey A.

    2003-01-01

    How will humans and robots cooperate in future planetary exploration? Are humans and robots fundamentally separate modes of exploration, or can humans and robots work together to synergistically explore the solar system? It is proposed that humans and robots can work together in exploring the planets by use of telerobotic operation to expand the function and usefulness of human explorers, and to extend the range of human exploration to hostile environments.

  3. Robotic Recon for Human Exploration

    Science.gov (United States)

    Deans, Matthew; Fong, Terry; Ford, Ken; Heldmann, Jennifer; Helper, Mark; Hodges, Kip; Landis, Rob; Lee, Pascal; Schaber, Gerald; Schmitt, Harrison H.

    2009-01-01

    Robotic reconnaissance has the potential to significantly improve scientific and technical return from lunar surface exploration. In particular, robotic recon may increase crew productivity and reduce operational risk for exploration. However, additional research, development and field-testing are needed to mature robot and ground control systems, refine operational protocols, and specify detailed requirements. When the new lunar surface campaign begins around 2020, and before permanent outposts are established, humans will initially be on the Moon less than 10% of the time. During the 90% of time between crew visits, robots will be available to perform surface operations under ground control. Understanding how robotic systems can best address surface science needs therefore becomes a central issue. Prior to surface missions, lunar orbiters (LRO, Kaguya, Chandrayaan-1, etc.) will map the Moon. These orbital missions will provide numerous types of maps: visible photography, topographic, mineralogical and geochemical distributions, etc. However, remote sensing data will not be of sufficient resolution, lighting, or view angle to fully optimize pre-human exploration planning, e.g., crew traverses for field geology and geophysics. Thus, it is important to acquire supplemental and complementary surface data. Robotic recon can obtain such data, using robot-mounted instruments to scout the surface and subsurface at resolutions and at viewpoints not achievable from orbit. This data can then be used to select locations for detailed field activity and prioritize targets to improve crew productivity. Surface data can also help identify and assess terrain hazards, and evaluate alternate routes to reduce operational risk. Robotic recon could be done months in advance, or be part of a continuing planning process during human missions.

  4. Robot learning from human teachers

    CERN Document Server

    Chernova, Sonia

    2014-01-01

    Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn f

  5. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  6. Delegating responsibilities in human-robot teams

    Science.gov (United States)

    DeKoven, Elyon A. M.; Bechtel, Bob; Zaientz, Jack; Lisse, Sean; Murphy, Anne K. G.

    2006-05-01

    Trends in combat technology research point to an increasing role for uninhabited vehicles and other robotic elements in modern warfare tactics. However, real-time control of multiple uninhabited battlefield robots and other semi-autonomous systems, in diverse fields of operation, is a difficult problem for modern warfighters that, while identified, has not been adequately addressed. Soar Technology is applying software agent technology to simplify demands on the human operator. Our goal is to build intelligent systems capable of finding the best balance of control between the human and autonomous system capabilities. We are developing an Intelligent Control Framework (ICF) from which to create agent-based systems that are able to dynamically delegate responsibilities across multiple robotic assets and the human operator. This paper describes proposed changes to our ICF architecture based on principles of human-machine teamwork derived from collaborative discourse theory. We outline the principles and the new architecture, and give examples of the benefits that can be realized from our approach.

  7. Cooperative Tasks between Humans and Robots in Industrial Environments

    Directory of Open Access Journals (Sweden)

    J. A. Corrales

    2012-09-01

    Full Text Available Collaborative tasks between human operators and robotic manipulators can improve the performance and flexibility of industrial environments. Nevertheless, the safety of humans should always be guaranteed and the behaviour of the robots should be modified when a risk of collision may happen. This paper presents the research that the authors have performed in recent years in order to develop a human‐robot interaction system which guarantees human safety by precisely tracking the complete body of the human and by activating safety strategies when the distance between them is too small. This paper not only summarizes the techniques which have been implemented in order to develop this system, but it also shows its application in three real human‐robot interaction tasks.

  8. Indoor Inter-Robot Distance Measurement in Collaborative Systems

    Directory of Open Access Journals (Sweden)

    FILOTE, C.

    2010-08-01

    Full Text Available This paper focuses on the problem of autonomous distance calculation between multiple mobile robots in collaborative systems. We propose and discuss two distinct methods, specifically developed under important design and functional constraints, such as the speed of operation, accuracy, energy and cost efficiency. Moreover, the methods are designed to be applied to indoor robotic systems and are independent of fixed landmarks. The measurement results, performed on the CORE-TX case study, show that the proposed solutions meet the design requirements previously specified.

  9. Human-robot coordination using scripts

    Science.gov (United States)

    Barnes, Laura E.; Murphy, Robin R.; Craighead, Jeffrey D.

    2006-05-01

    This paper describes an extension of scripts, which have been used to control sequences of robot behavior, to facilitate human-robot coordination. The script mechanism permits the human to both conduct expected, complementary activities with the robot and to intervene opportunistically taking direct control. Scripts address the six major issues associated with human-robot coordination. They allow the human to visualize the robot's mental model of the situation and build a better overall understanding of the situation and what level of autonomy or intervention is needed. It also maintains synchronization of the world and robot models so that control can be seamlessly transferred between human and robot while eliminating "coordination surprise". The extended script mechanism and its implementation in Java on an Inuktun micro-VGTV robot for the technical search task in urban search and rescue is described.
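
    The record describes scripts that sequence robot behaviours while allowing the human to intervene and take direct control. The cited implementation is in Java; purely for illustration, the Python sketch below mirrors the general idea with a hypothetical Script class and an intervention hook.

```python
# Illustrative sketch only: a script as an ordered list of robot behaviours
# with an explicit human-intervention hook. The cited work was implemented in
# Java; this Python sketch only mirrors the general idea, and all names
# (Script, run) are hypothetical.
class Script:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps           # list of (description, callable) pairs

    def run(self, human_override=None):
        """Execute steps in order; the human may take over at any step."""
        for description, behaviour in self.steps:
            if human_override and human_override(description):
                print(f"[{self.name}] human took direct control at: {description}")
                return "interrupted"
            print(f"[{self.name}] robot executing: {description}")
            behaviour()
        return "completed"


if __name__ == "__main__":
    search_script = Script("void_search", [
        ("advance 0.5 m into void", lambda: None),
        ("pan camera for victims", lambda: None),
        ("report findings", lambda: None),
    ])
    # Simulated operator who intervenes when the camera pan begins.
    search_script.run(human_override=lambda step: "camera" in step)
```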

  10. Human-robot Team Coordination That Considers Human Fatigue

    Directory of Open Access Journals (Sweden)

    Kai Zhang

    2014-06-01

    Full Text Available Many applications for robots require them to work alongside people as capable members of human-robot teams and to collaborate in order to perform tasks and achieve common goals. These tasks can induce strain on the human due to time constraints. Additionally, humans can become highly stressed due to fatigue, resulting in decreased efficiency. The contribution of this paper is in the introduction of a human fatigue model and the application of this model to a mixed team coordination framework in order to predict team performance given the constraints of human fatigue. The human fatigue model, namely a FAtigue Prediction (FAP) model, is used to conduct numerical simulations that predict mixed team performances. Specifically, extensive simulations are performed to determine how human fatigue influences the choice of the number of agents for a given number of tasks. The novel mixed team coordination framework is a Stochastic Clustering Auction (SCA), which is based on a modification of the Swendsen-Wang method, called SW2 SCA. It enables complex and efficient movement between clusters by connecting tasks that appear to be synergistic and then stochastically reassigning these connected tasks. In SW2 SCA, the auctioneer makes stochastic movements with homogeneous or heterogeneous agents. The final discussion outlines a systematic procedure to predict the performance of human-robot systems with the FAP model in SCA.
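
    The abstract names a FAtigue Prediction (FAP) model but does not give its equations. As a stand-in illustration, the sketch below uses a simple exponential accumulation/recovery model and maps fatigue to a task slowdown; the rate constants and slowdown factor are assumptions, not the published model.

```python
# Illustrative sketch only: an exponential fatigue accumulation/recovery model
# and its effect on task completion time. The rate constants and the mapping
# from fatigue to slowdown are assumptions, not the FAP model from the paper.
import math


def fatigue_after(f0, minutes, working, rate_up=0.05, rate_down=0.1):
    """Fatigue in [0, 1]; rises towards 1 while working, decays while resting."""
    if working:
        return 1.0 - (1.0 - f0) * math.exp(-rate_up * minutes)
    return f0 * math.exp(-rate_down * minutes)


def effective_task_time(base_minutes, fatigue, max_slowdown=0.5):
    """A fatigued human is assumed to work up to 50% slower."""
    return base_minutes * (1.0 + max_slowdown * fatigue)


if __name__ == "__main__":
    f = 0.0
    for block in range(4):                       # four 30-minute work blocks
        print(f"block {block}: fatigue={f:.2f}, "
              f"10-min task takes {effective_task_time(10, f):.1f} min")
        f = fatigue_after(f, 30, working=True)
```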

  11. Development of a New Backdrivable Actuator for Haptic Interfaces and Collaborative Robots

    Directory of Open Access Journals (Sweden)

    Florian Gosselin

    2016-06-01

    Full Text Available Industrial robots are most often position controlled and insensitive to external forces. In many robotic applications, however, such as teleoperation, haptics for virtual reality, and collaborative robotics, a close cooperation between humans and robots is required. For such applications, force sensing and control capabilities are required for stable interactions with the operator and environment. The robots must also be backdrivable, i.e., the robot must be able to follow user’s induced movements with the least possible resistance. High force efficiency is also desirable. These requirements are different from the design drivers of traditional industrial robots and call for specific actuators and reducers. Many such devices were proposed in the literature. However, they suffer from several drawbacks, offering either a limited reduction ratio or being complex and bulky. This paper introduces a novel solution to this problem. A new differential cable drive reducer is presented. It is backdrivable, has a high efficiency, and a potentially infinite reduction ratio. A prototype actuator using such a reducer has been developed and integrated on a test bench. The experimental characterization of its performance confirms its theoretical advantages.

  12. Safe Human-Robot Cooperation in an Industrial Environment

    Directory of Open Access Journals (Sweden)

    Nicola Pedrocchi

    2013-01-01

    Full Text Available The standard EN ISO 10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human-robot hybrid systems could have a big economic benefit in small and medium sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among the various requirements for collaborative workspaces, safety assurance involves two different application layers: the algorithms enabling safe space-sharing between humans and robots, and the enabling technologies allowing data acquisition from sensor fusion and environmental data analysis. This paper addresses both problems: a collision avoidance strategy allowing on-line re-planning of robot motion, and a safe network of unsafe devices as a suggested infrastructure for functional safety achievement.
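
    The record refers to a collision avoidance strategy with on-line re-planning. A common, simple ingredient of such strategies is scaling robot speed with the measured human-robot separation; the sketch below illustrates that idea with made-up thresholds that are not taken from the paper or from EN ISO 10218.

```python
# Illustrative sketch only: speed scaling and a re-planning trigger driven by
# the measured human-robot separation. The thresholds are made-up values for
# illustration and are not taken from the cited paper or from EN ISO 10218.
def speed_scale(separation_m, stop_dist=0.3, full_speed_dist=1.5):
    """Return a factor in [0, 1] applied to the nominal robot speed."""
    if separation_m <= stop_dist:
        return 0.0                           # protective stop
    if separation_m >= full_speed_dist:
        return 1.0
    return (separation_m - stop_dist) / (full_speed_dist - stop_dist)


def control_step(separation_m, nominal_speed, replan_dist=0.8):
    scale = speed_scale(separation_m)
    replan = separation_m < replan_dist      # ask the planner for a detour
    return nominal_speed * scale, replan


if __name__ == "__main__":
    for d in (2.0, 1.0, 0.5, 0.2):
        speed, replan = control_step(d, nominal_speed=0.25)
        print(f"separation {d:.1f} m -> speed {speed:.3f} m/s, replan={replan}")
```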

  13. Human-Like Movement of an Anthropomorphic Robot: Problem Revisited

    Science.gov (United States)

    e Silva, E. Costa; Costa, M. F.; Bicho, E.; Erlhagen, W.

    2011-09-01

    Human-like movement is fundamental for natural human-robot interaction and collaboration. We have developed a model for generating arm and hand movements for an anthropomorphic robot. This model was inspired by the Posture-Based Motion-Planning Model of human reaching and grasping movements. In this paper we present some changes to the model we proposed in [4], and we test and compare different nonlinear constrained optimization techniques for solving the large-scale nonlinear constrained optimization problem that arises from the discretization of our time-continuous model. Furthermore, we test different time discretization steps.

  14. Robotic Follow-Up for Human Exploration

    Science.gov (United States)

    Fong, Terrence; Bualat, Maria; Deans, Matthew C.; Adams, Byron; Allan, Mark; Altobelli, Martha; Bouyssounouse, Xavier; Cohen, Tamar; Flueckiger, Lorenzo; Garber, Joshua; Palmer, Elizabeth; Heggy, Essam; Jurgens, Frank; Kennedy, Tim; Kobayashi, Linda; Lee, Pascal; Lee, Susan Y.; Lees, David; Lundy, Mike; Park, Eric; Pedersen, Liam; Smith, Trey; To, Vinh; Utz, Hans; Wheeler, Dawn

    2010-01-01

    We are studying how "robotic follow-up" can improve future planetary exploration. Robotic follow-up, which we define as augmenting human field work with subsequent robot activity, is a field exploration technique designed to increase human productivity and science return. To better understand the benefits, requirements, limitations and risks associated with this technique, we are conducting analog field tests with human and robot teams at the Haughton Crater impact structure on Devon Island, Canada. In this paper, we discuss the motivation for robotic follow-up, describe the scientific context and system design for our work, and present results and lessons learned from field testing.

  16. The ethics of human-robot relationships

    NARCIS (Netherlands)

    Graaf, de Maartje M.A.

    2015-01-01

    Currently, human-robot interactions are constructed according to the rules of human-human interactions inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this

  17. Characterizing the state of the art of human-robot coproduction

    NARCIS (Netherlands)

    Cencen, A.; Verlinden, J.C.; Geraedts, J.M.P.

    2015-01-01

    The industry is working towards manufacturing systems consisting of a blend of humans and robots. We look at the development of these systems in the context of Small and Medium Enterprises (SME). Also, it is believed that industrial robots with collaboration capabilities with humans will play a cruc

  19. Towards the Verification of Human-Robot Teams

    Science.gov (United States)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices falls into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  20. A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Directory of Open Access Journals (Sweden)

    Andrea Cirillo

    2017-01-01

    Full Text Available Safety of human-robot physical interaction is enabled not only by suitable robot control strategies but also by suitable sensing technologies. For example, if distributed tactile sensors were available on the robot, they could be used not only to detect unintentional collisions, but also as human-machine interface by enabling a new mode of social interaction with the machine. Starting from their previous works, the authors developed a conformable distributed tactile sensor that can be easily conformed to the different parts of the robot body. Its ability to estimate contact force components and to provide a tactile map with an accurate spatial resolution enables the robot to handle both unintentional collisions in safe human-robot collaboration tasks and intentional touches where the sensor is used as human-machine interface. In this paper, the authors present the characterization of the proposed tactile sensor and they show how it can be also exploited to recognize haptic tactile gestures, by tailoring recognition algorithms, well known in the image processing field, to the case of tactile images. In particular, a set of haptic gestures has been defined to test three recognition algorithms on a group of 20 users. The paper demonstrates how the same sensor originally designed to manage unintentional collisions can be successfully used also as human-machine interface.
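
    The record states that image-processing recognition algorithms were tailored to tactile images. The sketch below shows a minimal template-matching classifier over small pressure maps using cosine similarity; the gesture templates, names and threshold are invented for illustration and are not the three algorithms evaluated in the paper.

```python
# Illustrative sketch only: classifying a tactile "image" (a small pressure map)
# by comparing it against gesture templates with cosine similarity. Templates,
# gesture names and the similarity threshold are invented for illustration.
import numpy as np

TEMPLATES = {
    "poke":  np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float),
    "swipe": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=float),
    "press": np.ones((3, 3), dtype=float),
}


def classify(tactile_map, threshold=0.8):
    """Return the best-matching gesture name, or None if nothing is similar."""
    x = tactile_map.ravel().astype(float)
    best_name, best_score = None, 0.0
    for name, template in TEMPLATES.items():
        t = template.ravel()
        score = float(np.dot(x, t) / (np.linalg.norm(x) * np.linalg.norm(t) + 1e-9))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


if __name__ == "__main__":
    reading = np.array([[0.0, 0.1, 0.0], [0.9, 1.0, 0.8], [0.0, 0.1, 0.0]])
    print(classify(reading))   # expected: "swipe"
```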

  1. Human-Robot Teams for Unknown and Uncertain Environments

    Science.gov (United States)

    Fong, Terry

    2015-01-01

    Human-robot interaction is the study of interactions between humans and robots; it is often referred to as HRI by researchers. It is a multidisciplinary field with contributions from human-computer interaction, artificial intelligence.

  2. Project based, Collaborative, Algorithmic Robotics for High School Students: Programming Self Driving Race Cars at MIT

    Science.gov (United States)

    2017-02-19

    The program is an excellent example of the utilization of state-of-the-art robotics equipment. It has also emphasized collaboration and teamwork since... The program is implemented in a collaborative fashion: the students learn the basics of collaboration and technical communication in lectures, and they work in teams to...

  3. Improving Emergency Response and Human-Robotic Performance

    Energy Technology Data Exchange (ETDEWEB)

    David I. Gertman; David J. Bruemmer; R. Scott Hartley

    2007-08-01

    Preparedness for chemical, biological, and radiological/nuclear incidents at nuclear power plants (NPPs) includes the deployment of well trained emergency response teams. While teams are expected to do well, data from other domains suggest that the timeliness and accuracy associated with incident response can be improved through collaborative human-robotic interaction. Many incident response scenarios call for multiple, complex procedure-based activities performed by personnel wearing cumbersome personal protective equipment (PPE) and operating under high levels of stress and workload. While robotic assistance is postulated to reduce workload and exposure, limitations associated with communications and the robot's ability to act independently have served to limit reliability and reduce our potential to exploit human-robotic interaction and the efficacy of response. Recent work at the Idaho National Laboratory (INL) on expanding robot capability has the potential to improve human-system response during disaster management and recovery. Specifically, increasing the range of higher-level robot behaviors such as autonomous navigation and mapping, evolving new abstractions for sensor and control data, and developing metaphors for operator control have the potential to improve the state of the art in incident response. This paper discusses these issues and reports on experiments underway that use intelligence residing on the robot to enhance emergency response.

  4. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  5. Biological Inspiration in Human Centred Robotics

    Institute of Scientific and Technical Information of China (English)

    HU Huo-sheng; LIU Jin-dong; Calderon Carlos A

    2004-01-01

    Human centred robotics (HCR) is concerned with the development of various kinds of intelligent systems and robots that will be used in environments coexisting with humans. These systems and robots will be interactive and useful assistants/companions for people of different ages, in different situations, activities and environments, in order to improve the quality of life. This paper presents the authors' current research work towards the development of advanced theory and technologies for HCR applications, based on inspiration from biological systems. More specifically, both bio-mimetic system modelling and robot learning by imitation are discussed, and some preliminary results are demonstrated.

  6. Social robots from a human perspective

    CERN Document Server

    Taipale, Sakari; Sapio, Bartolomeo; Lugano, Giuseppe; Fortunati, Leopoldina

    2015-01-01

    Addressing several issues that explore the human side of social robots, this book asks from a social and human scientific perspective what a social robot is and how we might come to think about social robots in the different areas of everyday life. Organized around three sections that deal with Perceptions and Attitudes to Social Robots, Human Interaction with Social Robots, and Social Robots in Everyday Life, the book explores the idea that even if technical problems related to robot technologies can be continuously solved from a machine perspective, what kind of machine do we want to have and use in our daily lives? Experiences from previously widely adopted technologies, such as smartphones, hint that robot technologies could potentially be absorbed into the everyday lives of humans in such a way that it is the human that determines the human-machine interaction. In a similar way to how today’s information and communication technologies were first designed for professional/industrial use, but which soon wer...

  7. Robotics Collaborative Technology Alliance (RCTA): Technical Exchange Meeting (TEM) 2015

    Science.gov (United States)

    2017-05-01

    ...using a manipulator or other instrument. Perception: what can a robot perceive that is beyond human capabilities (e.g., thermal imaging)? ... The TEM covered Mobility, Intelligence, and Perception, and achieved the following: a joint understanding of the state of the art of UMRs, identification of... The overall objectives for the meeting are listed as follows: 1) Acquire an improved understanding of the state of the art and planned accomplishments within...

  8. Joint Human-Robot Action: Virtual Intentionality and Hybrid Human-Robot Cultures

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2009-01-01

    How must we understand joint action between humans and robots? Responding to Knoblich & Sebanz (2008), I ask whether robots would meet the conditions for joint action prescribed by standard theories. On such accounts, it seems, (present) robots do not have intentions, so it seems only 'asymmet

  9. Modeling human operator involvement in robotic systems

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1991-01-01

    A modeling approach is presented to describe complex manned robotic systems. The robotic system is modeled as a (highly) nonlinear, possibly time-varying dynamic system including any time delays in terms of optimal estimation, control and decision theory. The role of the human operator(s) is modeled

  10. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi-public place. Our analysis of the interactions with casual users shows that it is not enough to focus on modeling behavior that is similar to successful human interactions but that we have to take more deviant ways of interaction like abuse and impoliteness into account when we send robots into the users' environments. The analysis uses impoliteness theory as an analytical toolbox and exemplifies which strategies are employed by users in unexpected encounters with a humanoid robot.

  11. STATICS ANALYSIS AND OPENGL BASED 3D SIMULATION OF COLLABORATIVE RECONFIGURABLE PLANETARY ROBOTS

    Institute of Scientific and Technical Information of China (English)

    Zhang Zheng; Ma Shugen; Li Bin; Zhang Liping; Cao Binggang

    2006-01-01

    Objective To study the mechanics characteristics of two cooperative reconfigurable planetary robots when they get across an obstacle, to find out the relationship between the maximum height of a stair and the configuration of the two robots, and to find some kinematic restrictions for the cooperation. Methods Multi-robot cooperation theory is used in the whole study process. Inverse kinematics of the robot is used to form a desired configuration in the cooperation process. Static equations are established to analyze the relations between the friction factor, the configuration of the robots and the maximum height of a stair. Kinematics analysis is used to find the restrictions on the two collaborative robots in position, velocity and acceleration. Results 3D simulation shows that the two cooperative robots can climb up a stair under the condition of a certain height and a certain friction factor between the robot wheels and the surface of the stair. Following the kinematic restrictions, the climbing mission is fulfilled successfully and smoothly. Conclusion The maximum height of a stair which the two cooperative robots can climb up depends on the configuration of the robots and the friction factor between the stair and the robots. The strictest restriction on the friction factor does not appear in the horizontal position. In any case, the maximum height is smaller than half of the distance between the centroid of robot1 and the centroid of robot2. However, the height can be higher than the radius of one robot wheel, which is a benefit of the collaboration.
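
    The conclusion states two geometric bounds: the maximum climbable stair height stays below half the distance between the two robot centroids, yet it can exceed one wheel radius. The small checker below only makes that stated relationship concrete; it does not reproduce the paper's static equations or friction analysis.

```python
# Illustrative sketch only: checking the geometric bounds stated in the abstract
# (maximum climbable stair height is below half the distance between the two
# robot centroids, but may exceed one wheel radius). This does not reproduce
# the paper's static equations or its friction analysis.
def check_stair(height_m, centroid_distance_m, wheel_radius_m):
    """Return (within_upper_bound, exceeds_single_wheel_radius)."""
    upper_bound = centroid_distance_m / 2.0      # h < d/2 per the abstract
    return height_m < upper_bound, height_m > wheel_radius_m


if __name__ == "__main__":
    for h in (0.10, 0.18, 0.30):
        ok, tall = check_stair(h, centroid_distance_m=0.50, wheel_radius_m=0.12)
        print(f"stair {h:.2f} m: within bound={ok}, taller than wheel radius={tall}")
```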

  12. Mobile Robot Collision Avoidance in Human Environments

    OpenAIRE

    Lingqi Zeng; Gary M. Bone

    2013-01-01

    Collision avoidance is a fundamental requirement for mobile robots. Avoiding moving obstacles (also termed dynamic obstacles) with unpredictable direction changes, such as humans, is more challenging than avoiding moving obstacles whose motion can be predicted. Precise information on the future moving directions of humans is unobtainable for use in navigation algorithms. Furthermore, humans should be able to pursue their activities unhindered and without worrying about the robots around them....

  13. The role of roles: physical cooperation between humans and robots

    OpenAIRE

    Küçükyılmaz, Ayşe; Sezgin, Tevfik Metin; Başdoğan, Çağatay; Moertl, Alexander; Lawitzky, Martin; Hirche, Sandra

    2012-01-01

    Since the strict separation of working spaces of humans and robots has experienced a softening due to recent robotics research achievements, close interaction of humans and robots comes rapidly into reach. In this context, physical human-robot interaction raises a number of questions regarding a desired intuitive robot behavior. The continuous bilateral information and energy exchange requires an appropriate continuous robot feedback. Investigating a cooperative manipulation task, the desired...

  14. Augmented Robotics Dialog System for Enhancing Human-Robot Interaction.

    Science.gov (United States)

    Alonso-Martín, Fernando; Castro-González, Aĺvaro; Luengo, Francisco Javier Fernandez de Gorostiza; Salichs, Miguel Ángel

    2015-07-03

    Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human-robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human-robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications.

  15. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    OpenAIRE

    Felipe Cid; Jose Moreno; Pablo Bustos; Pedro Núñez

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, inclu...

  16. Mixed reality and human-robot interaction

    CERN Document Server

    Wang, Xiangyu

    2011-01-01

    MR technologies play an increasing role in different aspects of human-robot interactions. The visual combination of digital contents with real working spaces creates a simulated environment that is set out to enhance these aspects. This book presents and discusses fundamental scientific issues, technical implementations, lab testing, and industrial applications and case studies of Mixed Reality in Human-Robot Interaction. It is a reference book that not only acts as meta-book in the field that defines and frames Mixed Reality use in Human-Robot Interaction, but also addresses up-coming trends

  17. Integrating human and robot decision-making dynamics with feedback : Models and convergence analysis

    NARCIS (Netherlands)

    Cao, Ming; Stewart, Andrew; Leonard, Naomi Ehrich

    2008-01-01

    Leveraging research by psychologists on human decision-making, we present a human-robot decision-making problem associated with a complex task and study the corresponding joint decision-making dynamics. The collaborative task is designed so that the human makes decisions just as human subjects make

  18. Talking to robots: on the linguistic construction of personal human-robot relations

    NARCIS (Netherlands)

    Lamers, Maarten H.; Coeckelbergh, Mark; Verbeek, Fons J.

    2010-01-01

    How should we make sense of 'personal' human-robot relations, given that many people view robots as 'mere machines'? This paper proposes that we understand human-robot relations from a phenomenological view as social relations in which robots are constructed as quasi-others. It is argued that langua

  1. Humans, Animals and Robots: A Phenomenological Approach to Human-Robot Relations.

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    This paper argues that our understanding of many human-robot relations can be enhanced by comparisons with human-animal relations and by a phenomenological approach which highlights the significance of how robots appear to humans. Some potential gains of this approach are explored by discussing the

  2. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    BIROUAS Flaviu Ionut

    2016-11-01

    Full Text Available This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand, from which the robotic system will be developed. The term used for such measurements is known as anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  3. Human Resource Implications of Robotics.

    Science.gov (United States)

    Hunt, H. Allan; Hunt, Timothy L.

    A study examined the job creation and job displacement potential of industrial robots in the United States and specifically, in Michigan, by 1990. To complete an analysis of the impact of robotics on the American labor force, researchers combined data from previous forecasts of future unit and dollar sales projections and from interviews with…

  4. Human-Robot Interaction and Human Self-Realization

    DEFF Research Database (Denmark)

    2014-01-01

    The ethical debate on robots has become a cutting-edge issue in many countries. It is, however, most often approached through an us-versus-them perspective, as if we were watching a soccer game and taking one side. Informed by Eastern as well as Western thought, the meta-ethical aim of this paper is to test the basis for this type of discrimination when it comes to human-robot interaction. Furthermore, the paper will take Heidegger's warning concerning technology as a vantage point and explore the possibility of human-robot interaction forming a praxis that might help humans to be with robots beyond...

  5. A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    Science.gov (United States)

    Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert

    2006-01-01

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

  6. From robot to human grasping simulation

    CERN Document Server

    León, Beatriz; Sancho-Bru, Joaquin

    2013-01-01

    The human hand and its dexterity in grasping and manipulating objects are some of the hallmarks of the human species. For years, anatomic and biomechanical studies have deepened the understanding of the human hand’s functioning and, in parallel, the robotics community has been working on the design of robotic hands capable of manipulating objects with a performance similar to that of the human hand. However, although many researchers have partially studied various aspects, to date there has been no comprehensive characterization of the human hand’s function for grasping and manipulation of

  7. The future African workplace: The use of collaborative robots in manufacturing

    Directory of Open Access Journals (Sweden)

    Andre P. Calitz

    2017-01-01

    Full Text Available Orientation: Industry 4.0 promotes technological innovations and human-robot collaboration (HRC). Human-robot interaction (HRI) and HRC on the manufacturing assembly line have been implemented in numerous advanced production environments worldwide. Collaborative robots (Cobots) are increasingly being used as collaborators with humans in factory production and assembly environments. Research purpose: The purpose of the research is to investigate the current use and future implementation of Cobots worldwide and their specific impact on the African workforce. Motivation for the study: Exploring the gap that exists between the international implementation of Cobots and the potential implementation and impact on the African manufacturing and assembly environment, and specifically on the African workforce. Research design, approach and method: The study features a qualitative research design. An open-ended question survey was conducted amongst leading manufacturing companies in South Africa in order to determine the status and future implementation of Cobot practices. Thematic analysis and content analysis were conducted using AtlasTi. Main findings: The findings indicate that the African businesses were aware of the international business trends regarding Cobot implementation and the possible impact of Cobots on the African workforce. Factors specifically highlighted in this study are fear of retrenchment, human-Cobot trust and the African culture. Practical implications and value-add: This study provides valuable background on the international status of Cobot implementation and the possible impact on the African workforce. The study highlights the importance of building employee trust, providing the relevant training and addressing the fear of retrenchment amongst employees.

  8. Generating human-like movements on an anthropomorphic robot using an interior point method

    Science.gov (United States)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.
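
    The record describes discretizing a time-continuous movement model and solving the resulting large nonlinear constrained optimization with IPOPT in MATLAB. The toy sketch below shows the same general pattern on a single joint using SciPy's SLSQP as a stand-in solver; the cost, constraints and numbers are simplified illustrations, not the authors' formulation.

```python
# Illustrative sketch only: a toy version of the "discretize, then solve a
# constrained optimization" pattern the record describes. The cited work uses
# IPOPT on a full anthropomorphic arm model; here SciPy's SLSQP solves a
# 1-DoF joint trajectory with a smoothness cost, fixed endpoints, and a joint
# limit. All values are made up for illustration.
import numpy as np
from scipy.optimize import minimize

N = 20                      # number of discretized time steps
q_start, q_goal = 0.0, 1.2  # start and goal joint angles (rad)
q_max = 1.5                 # joint limit (rad)


def smoothness_cost(q):
    """Sum of squared second differences (a simple acceleration proxy)."""
    return float(np.sum(np.diff(q, n=2) ** 2))


constraints = [
    {"type": "eq", "fun": lambda q: q[0] - q_start},      # start at q_start
    {"type": "eq", "fun": lambda q: q[-1] - q_goal},      # end at q_goal
    {"type": "ineq", "fun": lambda q: q_max - q},         # stay below q_max
]

q0 = np.linspace(q_start, q_goal, N)                       # initial guess
result = minimize(smoothness_cost, q0, method="SLSQP", constraints=constraints)
print("converged:", result.success, "cost:", round(result.fun, 6))
```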

  9. User localization during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, F; Gorostiza, Javi F; Malfaz, María; Salichs, Miguel A

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites to obtain a natural interaction between human-human and human-robot is an adequate spatial situation between the interlocutors, that is, to be orientated and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable allocation of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study has been made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented.

  10. User Localization During Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Miguel A. Salichs

    2012-07-01

    Full Text Available This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites to obtain a natural interaction between human-human and human-robot is an adequate spatial situation between the interlocutors, that is, to be orientated and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable allocation of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study has been made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented.
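
    Records 9 and 10 describe fusing visual information with sound source localization and using a proxemics study to position the robot. The sketch below fuses two bearing estimates with a confidence-weighted average and picks an interaction distance per user group; the weights and distances are invented and are not the system's actual parameters.

```python
# Illustrative sketch only: confidence-weighted fusion of a visual bearing and
# an audio (sound source) bearing to the user, plus a proxemics-style choice of
# interaction distance. The weights and distances are invented for illustration
# and are not the parameters of the system described in the records above.
def fuse_bearings(visual_deg, visual_conf, audio_deg, audio_conf):
    """Weighted average of the two bearing estimates (degrees).

    Naive averaging is fine for small angles; it ignores wrap-around at 180.
    """
    total = visual_conf + audio_conf
    if total == 0:
        return None
    return (visual_deg * visual_conf + audio_deg * audio_conf) / total


def interaction_distance(user_group):
    """Pick a comfortable robot-user distance in metres per user group."""
    return {"child": 1.2, "adult": 1.0}.get(user_group, 1.0)


if __name__ == "__main__":
    bearing = fuse_bearings(visual_deg=12.0, visual_conf=0.9,
                            audio_deg=20.0, audio_conf=0.4)
    print(f"face the user at {bearing:.1f} deg, "
          f"stand {interaction_distance('child'):.1f} m away")
```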

  11. Mobile Robot Collision Avoidance in Human Environments

    Directory of Open Access Journals (Sweden)

    Lingqi Zeng

    2013-01-01

    Full Text Available Collision avoidance is a fundamental requirement for mobile robots. Avoiding moving obstacles (also termed dynamic obstacles) with unpredictable direction changes, such as humans, is more challenging than avoiding moving obstacles whose motion can be predicted. Precise information on the future moving directions of humans is unobtainable for use in navigation algorithms. Furthermore, humans should be able to pursue their activities unhindered and without worrying about the robots around them. In this paper, both active and critical regions are used to deal with the uncertainty of human motion. A procedure is introduced to calculate the region sizes based on worst-case avoidance conditions. Next, a novel virtual force field-based mobile robot navigation algorithm (termed QVFF) is presented. This algorithm may be used with both holonomic and nonholonomic robots. It incorporates improved virtual force functions for avoiding moving obstacles and its stability is proven using a piecewise continuous Lyapunov function. Simulation and experimental results are provided for a human walking towards the robot and blocking the path to a goal location. Next, the proposed algorithm is compared with five state-of-the-art navigation algorithms for an environment with one human walking with an unpredictable change in direction. Finally, avoidance results are presented for an environment containing three walking humans. The QVFF algorithm consistently generated collision-free paths to the goal.
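
    The record names a virtual force field algorithm (QVFF) but does not give its force functions. The sketch below shows only the generic virtual-force-field idea, attraction to the goal plus repulsion from obstacles inside an active region; it is not the QVFF formulation, region-size procedure or stability analysis from the paper.

```python
# Illustrative sketch only: the generic virtual force field idea (attraction to
# the goal plus repulsion from obstacles inside an "active region"). This is
# NOT the QVFF force functions or the region-size procedure from the paper.
import numpy as np


def virtual_force(robot, goal, obstacles, active_radius=1.5,
                  k_att=1.0, k_rep=0.5):
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    force = k_att * (goal - robot)                     # attraction to the goal
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < active_radius:                # only inside active region
            force += k_rep * (1.0 / dist - 1.0 / active_radius) * diff / dist
    return force


if __name__ == "__main__":
    f = virtual_force(robot=(0.0, 0.0), goal=(3.0, 0.0),
                      obstacles=[(1.0, 0.2)])          # a human ahead and left
    heading = np.degrees(np.arctan2(f[1], f[0]))
    print(f"commanded heading {heading:.1f} deg, force {np.round(f, 2)}")
```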

  12. An Effective Division of Labor Between Human and Robotic Agents Performing a Cooperative Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Bluethmann, William; Rochlis, Jennifer; Huber, Eric; Ambrose, Robert

    2003-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by human astronauts. These so-called extra-vehicular activities (EVAs) are risky, expensive and complex. Work is underway to develop a robotic astronaut's assistant that can help reduce human EVA time and workload by delivering human-like dexterous manipulation capabilities to any EVA worksite. An experiment is conducted to evaluate human-robot teaming strategies in the context of a simplified EVA assembly task, in which Robonaut, an anthropomorphic robot developed in a collaborative effort with the Defense Advanced Research Projects Agency (DARPA), works side-by-side with a human subject. Team performance is studied in an effort to identify the strengths and weaknesses of each teaming configuration and to recommend an appropriate division of labor. A shared control approach is developed to take advantage of the complementary strengths of the human teleoperator and robot, even in the presence of significant time delay.

  13. Perspectives on human-human sensorimotor interactions for the design of rehabilitation robots.

    Science.gov (United States)

    Sawers, Andrew; Ting, Lena H

    2014-10-06

    Physical interactions between patients and therapists during rehabilitation have served as motivation for the design of rehabilitation robots, yet we lack a fundamental understanding of the principles governing such human-human interactions (HHI). Here we review the literature and pose important open questions regarding sensorimotor interaction during HHI that could facilitate the design of human-robot interactions (HRI) and haptic interfaces for rehabilitation. Based on the goals of physical rehabilitation, three subcategories of sensorimotor interaction are identified: sensorimotor collaboration, sensorimotor assistance, and sensorimotor education. Prior research has focused primarily on sensorimotor collaboration and is generally limited to relatively constrained visuomotor tasks. Moreover, the mechanisms by which performance improvements are achieved during sensorimotor cooperation with haptic interaction remains unknown. We propose that the effects of role assignment, motor redundancy, and skill level in sensorimotor cooperation should be explicitly studied. Additionally, the importance of haptic interactions may be better revealed in tasks that do not require visual feedback. Finally, cooperative motor tasks that allow for motor improvement during solo performance to be examined may be particularly relevant for rehabilitation robotics. Identifying principles that guide human-human sensorimotor interactions may lead to the development of robots that can physically interact with humans in more intuitive and biologically inspired ways, thereby enhancing rehabilitation outcomes.

  14. Exploring child-robot engagement in a collaborative task

    NARCIS (Netherlands)

    Zaga, Cristina; Truong, Khiet P.; Lohse, Manja; Evers, Vanessa

    2014-01-01

    Imagine a room with toys scattered on the floor and a robot that is motivating a small group of children to tidy up. This scenario poses real-world challenges for the robot, e.g., the robot needs to navigate autonomously in a cluttered environment, it needs to classify and grasp objects, and it need

  15. Humanlike Robots - Synthetically Mimicking Humans

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2012-01-01

    Nature has inspired many inventions, and the field of technology based on the mimicking of, or inspiration from, nature is widely known as biomimetics; it is increasingly leading to many new capabilities. There are numerous examples of biomimetic successes, including the copying of fins for swimming and the inspiration drawn from insect and bird flight. More and more commercial implementations of biomimetics are appearing and behaving in lifelike ways, and applications are emerging that are important to our daily life. Making humanlike robots is the ultimate challenge for biomimetics and, for many years, it was considered science fiction, but such robots are becoming an engineering reality. Advances in producing such robots are allowing them to perform impressive functions and tasks. The development of such robots involves addressing many challenges and is raising concerns related to fear of the implications of their application and to potential ethical issues. In this paper, the state of the art of humanlike robots, potential applications and challenges will be reviewed.

  16. Robot Companions: Technology for Humans

    CERN Document Server

    Kernbach, Serge

    2011-01-01

    The creation of devices and mechanisms that help people has a long history. Their inventors have always targeted practical goals such as irrigation, harvesting, devices for construction sites, measurement, and, last but not least, military tasks, for different mechanical and later mechatronic systems. The development of such assisting mechanisms dates back to Greek engineering, continued through the Middle Ages, and finally led in the 19th and 20th centuries to autonomous devices, which we today call "robots". This chapter provides an overview of several robotic technologies, introduces bio-/chemo-hybrid and collective systems, and discusses their applications in service areas.

  17. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    OpenAIRE

    Addythia Saphala; Prianggada Indra Tanaya

    2015-01-01

    Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).

  18. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
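
    The two records above describe the HFTR as a differential-drive platform navigating with SLAM but give no implementation detail. Purely as an illustrative aside, the sketch below shows the standard dead-reckoning odometry update that such a differential-drive base typically feeds into a SLAM front end; the track width and example wheel displacements are assumptions, not values from the study.

    ```python
    import math

    def diff_drive_odometry(x, y, theta, d_left, d_right, track_width=0.4):
        """Standard differential-drive dead-reckoning update (illustrative only).

        d_left, d_right: distances travelled by the left/right wheels [m].
        Returns the updated pose (x, y, theta) with theta wrapped to [-pi, pi).
        """
        d_center = (d_left + d_right) / 2.0          # forward motion of the base
        d_theta = (d_right - d_left) / track_width   # change in heading

        # Integrate along the arc using a mid-point heading approximation.
        x += d_center * math.cos(theta + d_theta / 2.0)
        y += d_center * math.sin(theta + d_theta / 2.0)
        theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
        return x, y, theta

    # Example: the robot rolls about 0.1 m forward while turning slightly left.
    pose = (0.0, 0.0, 0.0)
    pose = diff_drive_odometry(*pose, d_left=0.098, d_right=0.102)
    ```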

  19. Remote radiation mapping and preliminary intervention using collaborating (European and Russian) mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Piotrowski, L.; Trouville, B. [Electricite de France, 78 - Chatou (France). Direction Etudes et Recherches; Loane, E. [Kentree Ltd., Cork (Ireland); Halbach, M. [Universite Libre de Bruxelles (Belgium); Sidorkin, N. [NIKIMT, Moscow (Russian Federation)

    1996-12-01

    The primary objective of the IMPACT project is to develop a light-weight and inexpensive mobile robot that can be used for rapid inspection missions within nuclear power plants. These interventions are to cover normal, incident and accident situations and aim at primary reconnaissance (or 'data collecting') missions. The IMPACT robot was demonstrated (April 1996) in a realistic mission at the Russian nuclear plant SMOLENSK. The demonstration, composed of 2 independent but consecutive missions, was held in a radioactive zone near turbine No. 4 of Unit 2: remote radiation mapping with localisation of radioactive sources by the IMPACT robot equipped with a (Russian) gamma-radiation sensor; deployment of a Russian intervention robot for the construction of a protective lead shield around one of the identified sources and verification that the ambient radiation level had been reduced. This second mission was executed remotely by 2 mobile robots working in collaboration: a NIKIMT robot equipped with a manipulator arm and carrying lead bricks, and the IMPACT robot of mission I (radiation measurements and 'side-observer'). This manuscript describes (a) the technical characteristics of the IMPACT reconnaissance robot (3-segmented, caterpillar-tracked body; 6 video cameras placed around the mobile platform with simultaneous presentation of up to 4 video images at the control post; ability to detach remotely one of the robot's segments, i.e., the robot can divide itself into 2 separate mobile robots) and (b) the SMOLENSK demonstration. (author)

  20. The Architecture of Children's Use of Language and Tools When Problem Solving Collaboratively with Robotics

    Science.gov (United States)

    Mills, Kathy A.; Chandra, Vinesh; Park, Ji Yong

    2013-01-01

    This paper demonstrates, following Vygotsky, that language and tool use has a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children's collaborative problem solving with robotics programming…

  1. Group Tasks, Activities, Dynamics, and Interactions in Collaborative Robotics Projects with Elementary and Middle School Children

    Science.gov (United States)

    Yuen, Timothy T.; Boecking, Melanie; Stone, Jennifer; Tiger, Erin Price; Gomez, Alvaro; Guillen, Adrienne; Arreguin, Analisa

    2014-01-01

    Robotics provide the opportunity for students to bring their individual interests, perspectives and areas of expertise together in order to work collaboratively on real-world science, technology, engineering and mathematics (STEM) problems. This paper examines the nature of collaboration that manifests in groups of elementary and middle school…

  2. Warning signals for poor performance improve human-robot interaction

    NARCIS (Netherlands)

    Brule, R. van den; Bijlstra, G.; Dotsch, R.; Haselager, W.F.G.; Wigboldus, D.H.J.

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot's nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the robot

  3. Warning signals for poor performance improve human-robot interaction

    NARCIS (Netherlands)

    Brule, R. van den; Bijlstra, G.; Dotsch, R.; Haselager, W.F.G.; Wigboldus, D.H.J.

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot's nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the robot

  4. Robotic collaborative technology alliance: an open architecture approach to integrated research

    Science.gov (United States)

    Dean, Robert Michael S.; DiBerardino, Charles A.

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities [1]. Research occurs in 5 main Task Areas: Intelligence, Perception, Dexterous Manipulation and Unique Mobility (DMUM), Human Robot Interaction (HRI), and Integrated Research (IR). This last task of Integrated Research is especially critical and challenging. Individual research components can only be fully assessed when integrated onto a robot where they interact with other aspects of the system to create cross-Task capabilities which move beyond the State of the Art. Adding to the complexity, the RCTA is comprised of 12+ independent organizations across the United States. Each has its own constraints due to development environments, ITAR, "lab" vs "real-time" implementations, and legacy software investments from previous and ongoing programs. We have developed three main components to manage the Integration Task. The first is RFrame, a data-centric transport agnostic middleware which unifies the disparate environments, protocols, and data collection mechanisms. Second is the modular Intelligence Architecture built around the Common World Model (CWM). The CWM instantiates a Common Data Model and provides access services. Third is RIVET, an ITAR free Hardware-In-The-Loop simulator based on 3D game technology. RIVET provides each researcher a common test-bed for development prior to integration, and a regression test mechanism. Once components are integrated and verified, they are released back to the consortium to provide the RIVET baseline for further research. This approach allows Integration of new and legacy systems built upon different architectures, by application of Open Architecture principles.

  5. Control architecture for human-robot integration: application to a robotic wheelchair.

    Science.gov (United States)

    Galindo, Cipriano; Gonzalez, Javier; Fernández-Madrigal, Juan-Antonio

    2006-10-01

    Completely autonomous performance of a mobile robot within noncontrolled and dynamic environments is not yet possible for different reasons, including environment uncertainty, sensor/software robustness, limited robotic abilities, etc. But in assistant applications in which a human is always present, she/he can make up for the lack of robot autonomy by helping it when needed. In this paper, the authors propose human-robot integration as a mechanism to augment/improve the robot autonomy in daily scenarios. Through the human-robot-integration concept, the authors take a further step in the typical human-robot relation, since they consider her/him a constituent part of the human-robot system, which takes full advantage of the sum of their abilities. In order to materialize this human integration into the system, they present a control architecture, called architecture for human-robot integration, which enables her/him to participate at levels ranging from a high decisional level, i.e., deliberating a plan, to a low physical level, i.e., opening a door. The presented control architecture has been implemented to test the human-robot integration on a real robotic application. In particular, several real experiences have been conducted on a robotic wheelchair aimed at providing mobility to elderly people.

  6. Human Factors and Robotics: Current Status and Future Prospects.

    Science.gov (United States)

    Parsons, H. McIlvaine; Kearsley, Greg P.

    The principal human factors engineering issue in robotics is the division of labor between automation (robots) and human beings. This issue reflects a prime human factors engineering consideration in systems design--what equipment should do and what operators and maintainers should do. Understanding of capabilities and limitations of robots and…

  7. Human-Robot Teamwork in USAR Environments: The TRADR Project

    NARCIS (Netherlands)

    Greeff, J. de; Hindriks, K.; Neerincx, M.A.; Kruijff-Korbayova, I.

    2015-01-01

    The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search and rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled, impleme

  8. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects type of symbology (CG and SG) has on operator tasks performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first study by evaluating the effects of the type of

  9. Human-Robot Teams in Entertainment and Other Everyday Scenarios

    CERN Document Server

    Fazli, Pooyan

    2009-01-01

    A new and relatively unexplored research direction in robotics systems is the coordination of humans and robots working as a team. In this paper, we focus upon problem domains and tasks in which multiple robots, humans and other agents are cooperating through coordination to satisfy a set of goals or to maximize utility. We are primarily interested in applications of human robot coordination in entertainment and other activities of daily life. We discuss the teamwork problem and propose an architecture to address this.

  10. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  11. COMPARING PUMA ROBOT ARM WITH THE HUMAN ARM MOVEMENTS; AN ALTERNATIVE ROBOTIC ARM SHOULDER DESIGN

    OpenAIRE

    Mustafa BOZDEMİR; ADIGÜZEL, Esat

    1999-01-01

    The use of robotic arms instead of human power is becoming increasingly widespread nowadays. The widening field of use of robotic arms parallels the improvement of their movement capability. In this study, the PUMA robotic arm, a well-developed robotic arm system, was compared with the human arm in terms of movement. A new joint was added to the PUMA robotic arm system to give it movements similar to those of the human shoulder joint. Thus, a shoulder was designed that can make movements through the sides...

  12. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.

    2009-05-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
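
    The abstract above refers to operational space control for driving a simulated human model toward captured marker trajectories. The full operational space formulation uses the task-space inertia and dynamics; the sketch below is only a simplified Jacobian-transpose stand-in for that idea, with a hypothetical planar two-link model and gains chosen purely for illustration.

    ```python
    import numpy as np

    def task_space_pd_torques(q, dq, x_desired, forward_kinematics, jacobian,
                              kp=50.0, kd=5.0):
        """Simplified task-space PD control via the Jacobian transpose.

        Didactic stand-in for operational space control: a desired Cartesian
        target is mapped to joint torques as tau = J^T f, ignoring the
        inertia-weighted dynamics used in the full formulation.
        """
        x = forward_kinematics(q)            # current end-effector position
        J = jacobian(q)                      # task Jacobian at this posture
        dx = J @ dq                          # end-effector velocity
        f = kp * (x_desired - x) - kd * dx   # task-space PD force
        return J.T @ f                       # joint torques realizing that force

    # Hypothetical planar 2-link arm with unit link lengths.
    def fk(q):
        return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                         np.sin(q[0]) + np.sin(q[0] + q[1])])

    def jac(q):
        s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
        c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
        return np.array([[-s1 - s12, -s12],
                         [ c1 + c12,  c12]])

    tau = task_space_pd_torques(np.array([0.3, 0.5]), np.zeros(2),
                                np.array([1.2, 0.8]), fk, jac)
    ```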

  13. Robotics-based synthesis of human motion.

    Science.gov (United States)

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  14. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  15. Multiagent Modeling and Simulation in Human-Robot Mission Operations Work System Design

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; Sims, Michael H.; Shafto, Michael (Technical Monitor)

    2001-01-01

    This paper describes a collaborative multiagent modeling and simulation approach for designing work systems. The Brahms environment is used to model mission operations for a semi-autonomous robot mission to the Moon at the work practice level. It shows the impact of human-decision making on the activities and energy consumption of a robot. A collaborative work systems design methodology is described that allows informal models, created with users and stakeholders, to be used as input to the development of formal computational models.

  16. Human Robotic Systems (HRS): Space Robotics Challenge Project

    Data.gov (United States)

    National Aeronautics and Space Administration — During 2013 and 2015, the DARPA Robotics Challenge explored through a competition the tasks and technologies for robots to operate in a natural and man-made...

  17. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  18. Does Robotic Telerounding Enhance Nurse-Physician Collaboration Satisfaction About Care Decisions?

    Science.gov (United States)

    Bettinelli, Michele; Lei, Yuxiu; Beane, Matt; Mackey, Caleb; Liesching, Timothy N

    2015-08-01

    Delivering healthcare using remote robotic telepresence is an evolving practice in medical and surgical intensive critical care units and will likely have varied implications for work practices and working relationships in intensive care units. Our study assessed the nurse-physician collaboration satisfaction about care decisions from surgical intensive critical care nurses during remote robotic telepresence night rounds in comparison with conventional telephone night rounds. This study used a randomized trial to test whether robotic telerounding enhances the nurse-physician collaboration satisfaction about care decisions. A physician randomly used either the conventional telephone or the RP-7 robot (InTouch(®) Health, Santa Barbara, CA) to perform nighttime rounding in a surgical intensive care unit. The Collaboration and Satisfaction About Care Decisions (CSACD) survey instrument was used to measure the nurse-physician collaboration. The CSACD scores were compared using the signed-rank test with a significant p value of ≤0.05. From December 1, 2011 to December 13, 2012, 20 off-shift nurses submitted 106 surveys during telephone rounds and 108 surveys during robot rounds. The median score of surveys during robot rounds was slightly but not significantly higher than telephone rounds (51.3 versus 50.5; p=0.3). However, the CSACD score was significantly increased from baseline with robot rounds (51.3 versus 43.0; p=0.01), in comparison with telephone rounds (50.5 versus 43.0; p=0.09). The mediators, including age, working experience, and robot acceptance, were not significantly (p>0.1) correlated with the CSACD score difference (robot versus telephone). Robot rounding in the intensive care unit was comparable but not superior to the telephone in regard to the nurse-physician collaboration and satisfaction about care decision. The working experience and technology acceptance of intensive care nurses did not contribute to the preference of night shift rounding

  19. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

    This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.

  20. Mood contagion of robot body language in human robot interaction

    NARCIS (Netherlands)

    Xu, J.; Broekens, J.; Hindriks, K.; Neerincx, M.A.

    2015-01-01

    The aim of our work is to design bodily mood expressions of humanoid robots for interactive settings that can be recognized by users and have (positive) effects on people who interact with the robots. To this end, we develop a parameterized behavior model for humanoid robots to express mood through

  1. Mood contagion of robot body language in human robot interaction

    NARCIS (Netherlands)

    Xu, J.; Broekens, D.J.; Hindriks, K.V.; Neerincx, M.A.

    2015-01-01

    The aim of our work is to design bodily mood expressions of humanoid robots for interactive settings that can be recognized by users and have (positive) effects on people who interact with the robots. To this end, we develop a parameterized behavior model for humanoid robots to express mood through

  2. Collaborative Research in the Digital Humanities

    CERN Document Server

    Deegan, Marilyn

    2012-01-01

    Collaboration within digital humanities is both a pertinent and a pressing topic as the traditional mode of the humanist, working alone in his or her study, is supplemented by explicitly co-operative, interdependent and collaborative research. This is particularly true where computational methods are employed in large-scale digital humanities projects. This book, which celebrates the contributions of Harold Short to this field, presents fourteen essays by leading authors in the digital humanities. It addresses several issues of collaboration, from the multiple perspectives of institutions, pro

  3. Improving Social Odometry Robot Networks with Distributed Reputation Systems for Collaborative Purposes

    Directory of Open Access Journals (Sweden)

    Zorana Bankovic

    2011-11-01

    Full Text Available The improvement of odometry systems in collaborative robotics remains an important challenge for several applications. Social odometry is a social technique which gives robots the ability to learn from one another. This paper analyzes social odometry and proposes and follows a methodology to improve its behavior based on cooperative reputation systems. We also provide a reference implementation that allows us to compare the performance of the proposed solution in highly dynamic environments with the performance of standard social odometry techniques. Simulation results quantitatively show the benefits of this collaborative approach, which allows us to achieve better performance than standard social odometry.
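
    The abstract gives no details of how a robot should weight its own odometry against estimates received from peers with different reputations. Purely as an illustration of the general idea, the sketch below fuses 2D position estimates by a hypothetical reputation-weighted average; the weighting scheme and numbers are assumptions, not the paper's method.

    ```python
    import numpy as np

    def fuse_pose_estimates(own_pose, own_confidence, neighbor_poses, reputations):
        """Hypothetical reputation-weighted fusion of 2D position estimates.

        own_pose and each entry of neighbor_poses are numpy arrays (x, y);
        own_confidence and reputations[i] are non-negative trust weights.
        """
        weights = np.array([own_confidence, *reputations], dtype=float)
        poses = np.vstack([own_pose, *neighbor_poses])
        weights = weights / weights.sum()   # normalize trust into fusion weights
        return weights @ poses              # weighted average position

    # Example: a low-reputation neighbor contributes little to the fused estimate.
    fused = fuse_pose_estimates(np.array([1.0, 2.0]), 0.6,
                                [np.array([1.2, 2.1]), np.array([5.0, 5.0])],
                                [0.5, 0.05])
    ```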

  4. A Case Study of Collaboration with Multi-Robots and Its Effect on Children's Interaction

    Science.gov (United States)

    Hwang, Wu-Yuin; Wu, Sheng-Yi

    2014-01-01

    Learning how to carry out collaborative tasks is critical to the development of a student's capacity for social interaction. In this study, a multi-robot system was designed for students. In three different scenarios, students controlled robots in order to move dice; we then examined their collaborative strategies and their behavioral…

  5. Stereo-vision-based perception capabilities developed during the Robotics Collaborative Technology Alliances program

    Science.gov (United States)

    Rankin, Arturo; Bajracharya, Max; Huertas, Andres; Howard, Andrew; Moghaddam, Baback; Brennan, Shane; Ansar, Adnan; Tang, Benyang; Turmon, Michael; Matthies, Larry

    2010-04-01

    The Robotics Collaborative Technology Alliances (RCTA) program, which ran from 2001 to 2009, was funded by the U.S. Army Research Laboratory and managed by General Dynamics Robotic Systems. The alliance brought together a team of government, industrial, and academic institutions to address research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles. Under RCTA, three technology areas critical to the development of future autonomous unmanned systems were addressed: advanced perception, intelligent control architectures and tactical behaviors, and human-robot interaction. The Jet Propulsion Laboratory (JPL) participated as a member for the entire program, working four tasks in the advanced perception technology area: stereo improvements, terrain classification, pedestrian detection in dynamic environments, and long range terrain classification. Under the stereo task, significant improvements were made to the quality of stereo range data used as a front end to the other three tasks. Under the terrain classification task, a multi-cue water detector was developed that fuses cues from color, texture, and stereo range data, and three standalone water detectors were developed based on sky reflections, object reflections (such as trees), and color variation. In addition, a multi-sensor mud detector was developed that fuses cues from color stereo and polarization sensors. Under the long range terrain classification task, a classifier was implemented that uses unsupervised and self-supervised learning of traversability to extend the classification of terrain over which the vehicle drives to the far-field. Under the pedestrian detection task, stereo vision was used to identify regions-of-interest in an image, classify those regions based on shape, and track detected pedestrians in three-dimensional world coordinates. To improve the detectability of partially occluded

  6. Talking to Robots: On the Linguistic Construction of Personal Human-Robot Relations

    Science.gov (United States)

    Coeckelbergh, Mark

    How should we make sense of 'personal' human-robot relations, given that many people view robots as 'mere machines'? This paper proposes that we understand human-robot relations from a phenomenological view as social relations in which robots are constructed as quasi-others. It is argued that language mediates in this construction. Responding to research by Turkle and others, it is shown that our talking to robots (as opposed to talking about robots) reveals a shift from an impersonal third-person to a personal second-person perspective, which constitutes a different kind of human-robot relation. The paper makes suggestions for empirical research to further study this social-phenomenological process.

  7. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments.

    Science.gov (United States)

    Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori

    2017-08-15

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from `driver-lost' scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results.

  8. Companies' human capital required for collaboration

    DEFF Research Database (Denmark)

    Albats, Ekaterina; Bogers, Marcel; Podmetina, Daria

    Universities are widely acknowledged as an important source of knowledge for corporate innovation, and collaboration with universities plays an important role in companies' open innovation strategy. However, little is known about the human capital components required for collaboration with universities. Analysing the results of a survey among over 500 company managers, we define the universal employee skills required for a company's successful collaborations with external stakeholders. Then, through analysing qualitative interview data, we distinguish between these skills and capabilities ... building, relationship building, IPR management and negotiation for the context of collaboration with universities. Our research has revealed the importance of expectation management skills in the university-industry collaboration (UIC) context. We found that human capital for UIC is to be continuously...

  9. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  10. Kansei Behavior of Robots Following Instruction of Human

    Directory of Open Access Journals (Sweden)

    Satoru Shibata

    2015-10-01

    Full Text Available When coexisting with and supporting humans, robots need to be instructed to move by humans without burden, and the resulting motion should not instigate anxiety but should be acceptable to human psychology. To realize Kansei behavior of robots following human instruction, the concept of a “Kansei transfer function”, which can add softness and smoothness to robot motion, is explained, and the effectiveness of its application to human-robot interfaces is confirmed from psychological aspects.

  11. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  12. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the robot

  13. Catz, Dogz & Robotz? Human interaction with domestic robotic devices

    OpenAIRE

    Lawson, Shaun; Chesney, Thomas

    2008-01-01

    This special issue of the Journal of Physical Agents is devoted to human interaction with domestic robots. The form, features and future, of domestic robotic devices, from entertainment-based agents through to robotic cleaners, companions, assistants and helpers, are considered and discussed.

  14. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the robot

  15. Warning signals for poor performance improve human-robot interaction

    NARCIS (Netherlands)

    Brule, R. van den; Bijlstra, G.; Dotsch, R.; Haselager, W.F.G.; Wigboldus, D.H.J.

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot's nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  16. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    This paper introduces a new method to determine a person's pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot uses observed information from a laser range finder to detect persons and their position relative to the robot. This information together with the motion of the robot itself is fed through a Kalman filter, which utilizes a model of the human kinematic movement to produce an estimate of the person's pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments...
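
    The record above feeds laser detections through a Kalman filter built on a human kinematic model, but does not state the model or its parameters. The sketch below is a generic constant-velocity Kalman filter over 2D position measurements, offered only to illustrate the filtering step; the motion model, noise covariances and sample measurements are assumptions.

    ```python
    import numpy as np

    def kalman_person_track(measurements, dt=0.1, q=0.5, r=0.05):
        """Constant-velocity Kalman filter over 2D position measurements.

        Generic sketch, not the kinematic model used in the paper.
        State: [x, y, vx, vy]; measurements: sequence of observed (x, y) positions.
        """
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)    # only position is observed
        Q = q * np.eye(4)                            # process noise (assumed)
        R = r * np.eye(2)                            # measurement noise (assumed)

        x = np.array([*measurements[0], 0.0, 0.0])   # initialize at the first detection
        P = np.eye(4)
        for z in measurements[1:]:
            x, P = F @ x, F @ P @ F.T + Q            # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (np.asarray(z) - H @ x)      # update with the new detection
            P = (np.eye(4) - K @ H) @ P
        return x                                     # position and velocity estimate

    estimate = kalman_person_track([(0.0, 0.0), (0.1, 0.02), (0.21, 0.05)])
    ```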

  17. A Framework for Collaborative Quadrotor - Ground Robot Missions

    Science.gov (United States)

    2011-12-01

    quadrotor. The Qbot is accessible through three different block sets: the Roomba block set to drive the vehicle, the HIL block set to read from ... 1) The Roomba Initialize block located in the Simulink Library Browser, under QuaRC Targets / Devices / Third-Party / iRobot / Roomba ...

  18. Integration of a Skill-based Collaborative Mobile Robot in a Smart Cyber-Physical Environment

    DEFF Research Database (Denmark)

    Andersen, Rasmus Eckholdt; Hansen, Emil Blixt; Cerny, David

    2017-01-01

    The goal of this paper is to investigate the benefits of integrating collaborative robotic manipulators with autonomous mobile platforms for flexible part feeding processes in an Industry 4.0 production facility. The paper presents Little Helper 6 (LH6), consisting of a MiR100, UR5, a Robotiq 3...

  19. Collaborative robotic biomechanical interactions and gait adjustments in young, non-impaired individuals.

    Science.gov (United States)

    Dionisio, Valdeci C; Brown, David A

    2016-06-16

    Collaborative robots are used in rehabilitation and are designed to interact with the client so as to provide the ability to assist walking therapeutically. One such device is the KineAssist which was designed to interact, either in a self-driven mode (SDM) or in an assist mode (AM), with neurologically-impaired individuals while they are walking on a treadmill surface. To understand the level of transparency (i.e., interference with movement due to the mechanical interface) between human and robot, and to estimate and account for changes in the kinetics and kinematics of the gait pattern, we tested the KineAssist under conditions of self-drive and horizontal push assistance. The aims of this study were to compare the joint kinematics, forces and moments during walking at a fixed constant treadmill belt speed and constrained walking cadence, with and without the robotic device (OUT) and to compare the biomechanics of assistive and self-drive modes in the device. Twenty non-neurologically impaired adults participated in this study. We evaluated biomechanical parameters of walking at a fixed constant treadmill belt speed (1.0 m/s), with and without the robotic device in assistive mode. We also tested the self-drive condition, which enables the user to drive the speed and direction of a treadmill belt. Hip, knee and ankle angular displacements, ground reaction forces, hip, knee and ankle moments, and center of mass displacement were compared "in" vs "out" of the device. A repeated measures ANOVA test was applied with the three level factor of condition (OUT, AM, and SDM), and each participant was used as its own comparison. When comparing "in" and "out" of the device, we did not observe any interruptions and/or reversals of direction of the basic gait pattern trajectory, but there was increased ankle and hip angular excursions, vertical ground reaction force and hip moments and reduced center of mass displacement during the "in device" condition. Comparing assistive

  20. Robot, human and communication; Robotto/ningen/comyunikeshon

    Energy Technology Data Exchange (ETDEWEB)

    Suehiro, T.

    1996-04-10

    Recently, interest has been growing in robots that work and live with human beings in the same environment as the human beings themselves. Such robots require greater adaptability to the environment and greater system robustness than conventional robots. Above all, communication between human beings and robots during their cooperation is becoming a new problem. Hitherto, for industrial robots, cooperation between human beings and the robot was limited to programming. While this is suitable for repeated execution of the same motion, the applicable work was limited to comparatively simple tasks in factories, and it was difficult to partially change the work content or to apply the robot to other work. Furthermore, for remote-controlled intelligent work robots, represented by robots for critical work, cooperation between the human beings and the robot is conducted by operation from a remote location. In this paper, the communication of robots that live with human beings is examined. 17 refs., 1 fig.

  1. Collaborative task planning for an internet based multi-operator multi-robot system

    Institute of Scientific and Technical Information of China (English)

    GAO Sheng; ZHAO Jie; CAI He-gao

    2005-01-01

    In an Internet based multi-operator and multi-robot system (IMOMR), operators have to work collaboratively to overcome the constraints of space and time. Inherently, the activities among them can be defined as a computer-supported cooperative work (CSCW). As a practical application of CSCW, a collaborative task planning system (CTPS) for IMOMR is proposed in this paper on the basis of Petri nets. Its definition, components design, and concrete implementation are given in detail, respectively. As a result, a clear collaboration mechanism of multiple operators in an IMOMR is obtained to guarantee their task planning.
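
    The CTPS described above is specified with Petri nets, but the record does not reproduce the net itself. The following is a minimal, generic marked Petri net in Python, with a made-up two-transition example standing in for one step of multi-operator task planning; the place and transition names are hypothetical.

    ```python
    class PetriNet:
        """Minimal marked Petri net: places hold tokens, transitions consume and produce them."""

        def __init__(self, marking, transitions):
            # marking: {place: token count}
            # transitions: {name: (input arcs dict, output arcs dict)}
            self.marking = dict(marking)
            self.transitions = transitions

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

        def fire(self, name):
            if not self.enabled(name):
                raise ValueError(f"transition {name!r} is not enabled")
            inputs, outputs = self.transitions[name]
            for p, n in inputs.items():
                self.marking[p] -= n
            for p, n in outputs.items():
                self.marking[p] = self.marking.get(p, 0) + n

    # Hypothetical fragment: an operator's plan must be approved before a robot executes it.
    net = PetriNet(
        marking={"plan_proposed": 1, "robot_idle": 1},
        transitions={
            "approve_plan": ({"plan_proposed": 1}, {"plan_approved": 1}),
            "execute_task": ({"plan_approved": 1, "robot_idle": 1}, {"task_running": 1}),
        },
    )
    net.fire("approve_plan")
    net.fire("execute_task")
    ```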

  2. Using "human state aware" robots to enhance physical human-robot interaction in a cooperative scenario.

    Science.gov (United States)

    Guerrero, Carlos Rodriguez; Fraile Marinero, Juan Carlos; Turiel, Javier Perez; Muñoz, Victor

    2013-11-01

    Human motor performance, speed and variability are highly susceptible to emotional states. This paper reviews the impact of the emotions on the motor control performance, and studies the possibility of improving the perceived skill/challenge relation on a multimodal neural rehabilitation scenario, by means of a biocybernetic controller that modulates the assistance provided by a haptic controlled robot in reaction to undesirable physical and mental states. Results from psychophysiological, performance and self assessment data for closed loop experiments in contrast with their open loop counterparts, suggest that the proposed method had a positive impact on the overall challenge/skill relation leading to an enhanced physical human-robot interaction experience.

  3. Comparison of Human and Humanoid Robot Control of Upright Stance

    OpenAIRE

    Peterka, Robert J.

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a sugges...

  4. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  5. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    Science.gov (United States)

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
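
    The outer loop described above reduces the tuning of the prescribed robot impedance parameters to a linear quadratic regulator (LQR) problem, which the paper solves model-free with integral reinforcement learning. As a point of reference only, the sketch below computes an LQR gain for a known toy model via the continuous-time algebraic Riccati equation; the system matrices and weights are placeholders, not the paper's HRI model.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Continuous-time LQR gain K minimizing the integral of x'Qx + u'Ru.

        The cited work obtains an equivalent solution model-free with integral
        reinforcement learning; here a known (A, B) is assumed for illustration.
        """
        P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
        return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B' P

    # Toy second-order tracking-error dynamics (placeholder model).
    A = np.array([[0.0, 1.0],
                  [0.0, -0.5]])
    B = np.array([[0.0],
                  [1.0]])
    K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))
    # The resulting state-feedback law is u = -K @ x.
    ```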

  6. Learning Human Aspects of Collaborative Software Development

    Science.gov (United States)

    Hadar, Irit; Sherman, Sofia; Hazzan, Orit

    2008-01-01

    Collaboration has become increasingly widespread in the software industry as systems have become larger and more complex, adding human complexity to the technological complexity already involved in developing software systems. To deal with this complexity, human-centric software development methods, such as Extreme Programming and other agile…

  7. Learning Human Aspects of Collaborative Software Development

    Science.gov (United States)

    Hadar, Irit; Sherman, Sofia; Hazzan, Orit

    2008-01-01

    Collaboration has become increasingly widespread in the software industry as systems have become larger and more complex, adding human complexity to the technological complexity already involved in developing software systems. To deal with this complexity, human-centric software development methods, such as Extreme Programming and other agile…

  8. Human motion behavior while interacting with an industrial robot.

    Science.gov (United States)

    Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus

    2012-01-01

    Human workers and industrial robots both have specific strengths within industrial production. Advantageously they complement each other perfectly, which leads to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions. The avoidance of such collisions is a central safety requirement. It can be realized with sundry sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times. Optimized path planning algorithms have to be developed to avoid that. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m to the robot when the encounter occurs. Approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase of participants' walking velocity with higher robot velocities is observed.

  9. US Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance 2014 Capstone Experiment

    Science.gov (United States)

    2016-07-01

    ... successful on second attempt ... Robot scraped along wall for one flight of stairs due to poor positioning (operator error) ... Four capabilities were evaluated as part of distinct Integrated Research Assessments (IRA): Human Robot Interaction Modalities, Semantic ... Navigation Method ... IRA: Search and Observe Doorways (Search; Observe) ... IRA: Human Robot Interaction

  10. Modeling Leadership Styles in Human-Robot Team Dynamics

    Science.gov (United States)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in tele-operation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  11. Modeling Leadership Styles in Human-Robot Team Dynamics

    Science.gov (United States)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in tele-operation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  12. European Robotics Challenges

    OpenAIRE

    Halt, Lorenz; Bubeck, Alexander

    2014-01-01

    The European Robotics Challenges (EuRoC) aims at strengthening collaboration and cross-fertilization between the industrial and the research community by launching three industrially-relevant challenges in European robotics with applicability to the factory of the future. To qualify for admission, potential challengers are asked to solve a simulated task. The aim of this talk is to describe the development of a simulated human-robot collaboration task using ROS/gazebo. Furthermore several bac...

  13. A robot safety experiment varying robot speed and contrast with human decision cost.

    Science.gov (United States)

    Etherton, J; Sneckenberger, J E

    1990-09-01

    An industrial robot safety experiment was performed to find out how quickly subjects responded to an unexpected robot motion at different speeds of the robot arm, and how frequently they failed to detect a motion that should have been detected. Robotics technicians risk being fatally injured if a robot should trap them against a fixed object. The value of the experimentation lies in its ability to show that this risk can be reduced by a design change. If the robot is moving at a slow speed, during programming and troubleshooting tasks, then the worker has sufficient time to actuate an emergency stop device before the robot can reach the person. The dependent variable in the experiment was the overrun distance (beyond an expected stopping point) that a robot arm travelled before a person actuated a stop pushbutton. Results of this experiment demonstrated that the speed of the robot arm and the implied decision cost for hitting an emergency stop button had a significant effect on human reaction time. At a fairly high level of ambient lighting (560 lux), fixed-level changes in the luminance contrast between the robot arm and its background did not significantly affect human reaction time.
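As a rough, hypothetical illustration of the quantity this experiment measures (the numbers below are assumptions, not the study's data), overrun distance can be approximated as the arm speed multiplied by the combined human reaction time and machine stopping time:

```python
# Hypothetical illustration: overrun distance of a robot arm before an
# emergency stop takes effect. Values below are assumptions, not data
# from the cited experiment.

def overrun_distance(arm_speed_mm_s: float,
                     human_reaction_s: float,
                     machine_stop_s: float) -> float:
    """Distance travelled past the expected stop point (mm)."""
    return arm_speed_mm_s * (human_reaction_s + machine_stop_s)

# Slow "teach" speed vs. full speed (assumed numbers).
print(overrun_distance(250.0, 0.7, 0.3))   # ~250 mm at reduced speed
print(overrun_distance(2000.0, 0.7, 0.3))  # ~2000 mm at full speed
```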

  14. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

Full Text Available We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. It was our aim to explore if participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors where we detected typical error situations and the resulting social signals of our participants during social human–robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the herein reported user study, we purposefully elicited robot errors to further explore the human interaction partners' social signals following a robot error. Our participants interacted with a human-like NAO, and the robot either performed faultily or free from error. First, the robot asked the participants a set of predefined questions and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence. We also interviewed the participants on their opinion about the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior.

  15. Using Empathy to Improve Human-Robot Relationships

    Science.gov (United States)

    Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana

For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, being empathic to one of them and neutral towards the other. The results of a pilot study suggest that users to whom the robot was empathic perceived the robot more as a friend.

  16. New trends in medical and service robots human centered analysis, control and design

    CERN Document Server

    Chevallereau, Christine; Pisla, Doina; Bleuler, Hannes; Rodić, Aleksandar

    2016-01-01

    Medical and service robotics integrates several disciplines and technologies such as mechanisms, mechatronics, biomechanics, humanoid robotics, exoskeletons, and anthropomorphic hands. This book presents the most recent advances in medical and service robotics, with a stress on human aspects. It collects the selected peer-reviewed papers of the Fourth International Workshop on Medical and Service Robots, held in Nantes, France in 2015, covering topics on: exoskeletons, anthropomorphic hands, therapeutic robots and rehabilitation, cognitive robots, humanoid and service robots, assistive robots and elderly assistance, surgical robots, human-robot interfaces, BMI and BCI, haptic devices and design for medical and assistive robotics. This book offers a valuable addition to existing literature.

  17. Human-Robot Interaction: A Survey

    Science.gov (United States)

    2007-01-01

One relevant study explored how Roomba robots are used in practice without attempting to make operators use the robots in a specific way [99] (J. Forlizzi and C. DiSalvo, "Service robots in the domestic environment: A study of the Roomba vacuum in the home").

  18. Human Robotic Systems (HRS): Robotic Technologies for Asteroid Missions Element

    Data.gov (United States)

National Aeronautics and Space Administration — During 2014, the Robotic Technologies for Asteroid Missions activity has four tasks: Asteroid Retrieval Capture Mechanism Development and Testbed; Mission Operations...

  19. Human Robotic Systems (HRS): Robotic ISRU Acquisition Element

    Data.gov (United States)

National Aeronautics and Space Administration — During 2014, the Robotic ISRU Resource Acquisition project element will develop two technologies: Exploration Ground Data Systems (xGDS); Sample Acquisition on...

  20. Forming Human-Robot Teams Across Time and Space

    Science.gov (United States)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot.
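The control mode described above can be pictured with a small sketch: the supervisor's commands act on a local simulation immediately, while the same commands and the robot's telemetry each travel one-way light-time. Everything below (tick-based delay, 1-D state, fixed command) is an assumed toy setup, not the NASA software:

```python
# Toy sketch of "command in simulation, monitor delayed telemetry".
from collections import deque

ONE_WAY = 3                                  # one-way light-time in control ticks
sim_state, robot_state = 0.0, 0.0            # 1-D position for illustration
uplink = deque([None] * ONE_WAY)             # commands in flight to the robot
downlink = deque([0.0] * ONE_WAY)            # telemetry in flight back home

for step in range(20):
    command = 0.1                            # supervisor's move this tick
    sim_state += command                     # simulation responds immediately
    uplink.append(command)
    arrived = uplink.popleft()               # command reaching the robot now
    if arrived is not None:
        robot_state += arrived               # robot executes after the delay
    downlink.append(robot_state)             # telemetry starts its trip home
    reported = downlink.popleft()            # what the supervisor sees this tick
    drift = sim_state - reported             # monitored as data returns
```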

  1. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast both in the industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot and entire new markets and use cases are becoming relevant...... for robotic automation. Where industrial robots traditionally are placed behind security fences and programmed to perform simple, repetitive tasks, this next generation of robots will be able to work side-by-side with humans and collaborate on completing common tasks. This thesis investigates methods for fast...... and intuitive programming and interaction with collaborative, industrial robots. The work is divided into two areas: Vision-enabled robotic skills and projection mapping interfaces. The purpose of robotic skills in general is to allow non-experts in robotics to program robots in an intuitive manner...

  2. Humans, animals, robots: handling volumic data flows

    Science.gov (United States)

    Petrov, Valery

    1999-08-01

The human visual system is well suited for reliable and adequate volumetric perception of the natural environment. Volumetric data flows coming from the outer physical space are easily acquired, transferred and processed by the eye-brain system in real time. This also applies to animals, which use various complicated mechanisms of optical volumetric data acquisition and can navigate safely at high speeds. By contrast, machine vision systems currently utilizing the stereoscopic effect in an attempt to achieve volumetric data presentation are very slow, bulky and in some ways inelegantly devised. Stereoscopy itself seems hardly able to provide adequate, real-time volumetric robot vision.

  3. Managing uncertainty in collaborative robotics engineering projects: The influence of task structure and peer interaction

    Science.gov (United States)

    Jordan, Michelle

Uncertainty is ubiquitous in life, and learning is an activity particularly likely to be fraught with uncertainty. Previous research suggests that students and teachers struggle in their attempts to manage the psychological experience of uncertainty and that students often fail to experience uncertainty when uncertainty may be warranted. Yet, few educational researchers have explicitly and systematically observed what students do, their behaviors and strategies, as they attempt to manage the uncertainty they experience during academic tasks. In this study I investigated how students in one fifth grade class managed uncertainty they experienced while engaged in collaborative robotics engineering projects, focusing particularly on how uncertainty management was influenced by task structure and students' interactions with their peer collaborators. The study was initiated at the beginning of instruction related to robotics engineering and proceeded through the completion of several long-term collaborative robotics projects, one of which was a design project. I relied primarily on naturalistic observation of group sessions, semi-structured interviews, and collection of artifacts. My data analysis was inductive and interpretive, using qualitative discourse analysis techniques and methods of grounded theory. Three theoretical frameworks influenced the conception and design of this study: community of practice, distributed cognition, and complex adaptive systems theory. Uncertainty was a pervasive experience for the students collaborating in this instructional context. Students experienced uncertainty related to the project activity and uncertainty related to the social system as they collaborated to fulfill the requirements of their robotics engineering projects. They managed their uncertainty through a diverse set of tactics for reducing, ignoring, maintaining, and increasing uncertainty. Students experienced uncertainty from more different sources and used more and

  4. Self-Organization and Human Robots

    Directory of Open Access Journals (Sweden)

    Chris Lucas

    2008-11-01

Full Text Available Humans are rather funny things, we often tend to imagine that we are so "special", so divorced by our supposed "intelligence" from the influences of the "natural world" and so unique in our "abstracting" abilities. We have this persistent delusion, evident since ancient Greek times, that we are "rational", that we can behave as "disinterested observers" of our world, which manifests in AI thought today in a belief that, in a like manner, we can "design", God like, from afar, our replacements, those "super-robots" that will do everything that we can imagine doing, but in much "better" ways than we can achieve, and yet can avoid doing anything "nasty", i.e. can overcome our many human failings - obeying, I suppose, in the process, Asimov's three "laws of robotics". Such human naiveté proves, in fact, to be quite amusing, at least to those of us "schooled" in AI history. When we look at the aspirations and the expectations of our early "pioneers", and compare them to the actual reality of today, then we must, it seems, re-discover the meaning of the word "humility". Enthusiasm, good as it may be, needs to be moderated with a touch of "common sense", and if our current ways of doing things in our AI world don't really work as we had hoped, then perhaps it is time to try something different (Lucas, C., 1999).

  5. Safe physical human robot interaction- past, present and future

    Energy Technology Data Exchange (ETDEWEB)

    Pervez, Aslam; Ryu, Jeha [Gwangju Institute of Science and Technology, Gwangju (Korea, Republic of)

    2008-03-15

When a robot physically interacts with a human user, the requirements change drastically. The most important requirement is the safety of the human user, in the sense that the robot should not harm the human in any situation. During the last few years, research has focused on various aspects of safe physical human-robot interaction. This paper provides a review of the work on safe physical interaction of robotic systems sharing their workspace with human users (especially elderly people). Three distinct areas of research are identified: interaction safety assessment, interaction safety through design, and interaction safety through planning and control. The paper then highlights the current challenges and available technologies and points out future research directions for the realization of a safe and dependable robotic system for human users.

  6. Robots and Humans in Planetary Exploration: Working Together?

    Science.gov (United States)

    Landis, Geoffrey A.; Lyons, Valerie (Technical Monitor)

    2002-01-01

Today's approach to human-robotic cooperation in planetary exploration focuses on using robotic probes as precursors to human exploration. A large portion of current NASA planetary surface exploration is focused on Mars, and robotic probes are seen as precursors to human exploration in: learning about operation and mobility on Mars; learning about the environment of Mars; mapping the planet and selecting landing sites for human missions; demonstrating critical technology; manufacturing fuel before human presence; and emplacing elements of human-support infrastructure.

  7. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

    “The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state-of-the-art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropologists.

  8. Human exploration and settlement of Mars - The roles of humans and robots

    Science.gov (United States)

    Duke, Michael B.

    1991-01-01

    The scientific objectives and strategies for human settlement on Mars are examined in the context of the Space Exploration Initiative (SEI). An integrated strategy for humans and robots in the exploration and settlement of Mars is examined. Such an effort would feature robotic, telerobotic, and human-supervised robotic phases.

  9. Ethorobotics: A New Approach to Human-Robot Relationship

    Directory of Open Access Journals (Sweden)

    Ádám Miklósi

    2017-06-01

Full Text Available Here we aim to lay the theoretical foundations of human-robot relationship, drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so-called "uncanny valley hypothesis" can be solved by applying the "niche" concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with an appropriate form of social competence then humans will eventually interact with them independently of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications.

  10. The Law of Attraction in Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Eunil Park

    2012-07-01

    Full Text Available Following the law of attraction in human-human interaction, this paper examines the effects of a robot's personality and a human's personality in various human-robot interactions. This study was conducted using robots that were programmed to mimic both extroverted and introverted personality types, as well as humans who were classified as having introverted, extroverted or intermediate personality types. Using a 3 × 2 between-subjects experiment with 120 participants, the results indicated that participants who interacted with a similar personality robot were more comfortable than those who engaged with a different personality robot. Yet, the evaluation of social presence presented an opposing result. Both the implications and limitations of the present study, as well as guidelines for future research, are discussed.

  11. Collaborative Assistive Robot for Mobility Enhancement (CARMEN) The bare necessities assisted wheelchair navigation and beyond

    CERN Document Server

    Urdiales, Cristina

    2012-01-01

In today's aging society, many people require mobility assistance. Sometimes, assistive devices need a certain degree of autonomy when users' disabilities make manual control difficult. However, clinicians report that excessive assistance may lead to loss of residual skills and frustration. Shared control focuses on deciding when users need help and providing it. Collaborative control aims at giving just the right amount of help in a transparent, seamless way. This book presents the collaborative control paradigm. User performance may be indicative of physical/cognitive condition, so it is used to decide how much help is needed. In addition, collaborative control integrates machine and user commands so that people contribute to self-motion at all times. Collaborative control was extensively tested for 3 years using a robotized wheelchair at a rehabilitation hospital in Rome with volunteer inpatients presenting different disabilities, ranging from mild to severe. We also present a taxonomy of common metrics for wheelc...

  12. Human-like Compliance for Dexterous Robot Hands

    Science.gov (United States)

    Jau, Bruno M.

    1995-01-01

    This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the functionality of the human muscle's secondary function, which is to control the joint's stiffness: AEC is implemented through servo controlling the joint drive train's stiffness. The control strategy, controlling compliant joints in teleoperation, is described. It enables automatic hybrid position and force control through utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable for autonomous robot control as well. Active compliance enables dual arm manipulations, human-like soft grasping by the robot hand, and opens the way to many new robotics applications.
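A minimal sketch of the kind of adjustable joint-stiffness (impedance-style) law such a compliance system enables is shown below; it is an illustration under assumed gains and signals, not the Jau-JPL AEC implementation:

```python
# Minimal sketch of adjustable joint-stiffness control (impedance style).
# Not the Jau-JPL AEC implementation; gains and signals are assumptions.

def joint_torque(theta_des: float, theta: float, theta_dot: float,
                 stiffness: float, damping: float) -> float:
    """Commanded joint torque for a desired position with tunable stiffness."""
    return stiffness * (theta_des - theta) - damping * theta_dot

# Softer grasp: lower the stiffness so contact forces stay small.
soft = joint_torque(theta_des=0.5, theta=0.45, theta_dot=0.0,
                    stiffness=5.0, damping=0.5)
stiff = joint_torque(theta_des=0.5, theta=0.45, theta_dot=0.0,
                     stiffness=50.0, damping=2.0)
```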

  13. Connecting Robots and Humans in Mars Exploration

    Science.gov (United States)

    Friedman, Louis

    2000-07-01

Mars exploration is a very special public interest. Its preeminence in the national space policy calling for "sustained robotic presence on the surface," international space policy (witness the now-aborted international plan for sample return, and also the aborted Russian "national Mars program") and the media attention to Mars exploration are two manifestations of that interest. Among a large segment of the public there is an implicit (mis)understanding that we are sending humans to Mars. Even among those who know that isn't already a national or international policy, many think it is the next human exploration goal. At the same time the resources for Mars exploration in the U.S. and other countries' space programs are a very small part of space budgets. Very little is being applied to direct preparations for human flight. This was true before the 1999 mission losses in the United States, and it is more true today. The author's thesis is that the public interest and the space program response to Mars exploration are inconsistent. This inconsistency probably results from an explicit space policy contradiction: Mars exploration is popular because of the implicit pull of Mars as the target for human exploration, but no synergy is permitted between the human and robotic programs to carry out the program. It is not permitted because of narrow, political thinking. In this paper we try to lay out the case for overcoming that thinking, even while not committing to any premature political initiative. This paper sets out a rationale for Mars exploration and uses it to then define recommended elements of the programs: missions, science objectives, technology. That consideration is broader than the immediate issue of recovering from the failures of Mars Climate Orbiter, Mars Polar Lander and the Deep Space 2 microprobes in late 1999. But we cannot ignore those failures. They are causing a slowdown in Mars exploration. Not only were the three missions lost, with their planned

  14. Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.

    Science.gov (United States)

    Hongbo Wang; Kosuge, K

    2012-01-01

Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state by using an extended Kalman filter (EKF). To reduce interaction force, two robot controllers, namely, admittance with virtual force controller, and inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained. At the same time, the use of the latter controller is validated by experiment results.
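To make the estimation step concrete, here is a minimal EKF sketch for a planar inverted-pendulum model of the partner's lean, under assumed constants, noise levels and a single angle measurement; it is an illustration, not the controller from the paper:

```python
# Minimal EKF for an inverted-pendulum model of the partner's lean.
# Model constants, noise levels, and the measurement setup are assumptions.
import numpy as np

g, l, dt = 9.81, 1.0, 0.01          # gravity, pendulum length, time step

def f(x):                           # state transition: [angle, angular rate]
    th, om = x
    return np.array([th + om * dt, om + (g / l) * np.sin(th) * dt])

def F(x):                           # Jacobian of f
    th, _ = x
    return np.array([[1.0, dt],
                     [(g / l) * np.cos(th) * dt, 1.0]])

H = np.array([[1.0, 0.0]])          # range finders observe the lean angle only
Q = np.diag([1e-5, 1e-4])           # process noise (assumed)
R = np.array([[1e-3]])              # measurement noise (assumed)

def ekf_step(x, P, z):
    # Predict
    x_pred = f(x)
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q
    # Update with the new measurement z
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2) * 0.01
x, P = ekf_step(x, P, z=np.array([0.05]))   # one lean-angle measurement
```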

  15. Human-Robot Teaming: From Space Robotics to Self-Driving Cars

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I describe how NASA Ames has been developing and testing robots for space exploration. In our research, we have focused on studying how human-robot teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of manual control and autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars, enabling humans to support autonomous vehicles operating in unpredictable and difficult situations.

  16. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

Humanoid robotics have made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book.  ...

  17. Robotic Joints Support Horses and Humans

    Science.gov (United States)

    2008-01-01

    A rehabilitative device first featured in Spinoff 2003 is not only helping human patients regain the ability to walk, but is now helping our four-legged friends as well. The late James Kerley, a prominent Goddard Space Flight Center researcher, developed cable-compliant mechanisms in the 1980s to enable sounding rocket assemblies and robots to grip or join objects. In cable-compliant joints (CCJs), short segments of cable connect structural elements, allowing for six directions of movement, twisting, alignment, and energy damping. Kerley later worked with Goddard s Wayne Eklund and Allen Crane to incorporate the cable-compliant mechanisms into a walker for human patients to support the pelvis and imitate hip joint movement.

  18. How Safe the Human-Robot Coexistence Is? Theoretical Presentation

    Directory of Open Access Journals (Sweden)

    Olesya Ogorodnikova

    2009-12-01

Full Text Available It is evident that industrial robots are able to generate forces high enough to injure a human. To prevent this, robots have to work within a restricted space that includes the entire region reachable by any part of the robot. However, more and more robot applications require human intervention due to superior human abilities in performing some tasks. In this paper we introduce danger/safety indices which indicate the level of risk during interaction with robots, based on a robot's critical characteristics and on a human's physical and mental constraints. A collision model for a 1-DOF robot and a "human" was developed. A case study with further simulations was provided for the PUMA 560 robot.
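As a toy illustration of what such an index might look like (the weighting, thresholds and inputs below are assumptions, not the indices defined in the paper), a danger value can be built from separation distance and closing speed:

```python
# Toy danger index for human-robot coexistence: grows as the separation
# shrinks and the closing speed rises. Weights and thresholds are
# assumptions for illustration, not the indices defined in the paper.

def danger_index(distance_m: float, closing_speed_m_s: float,
                 d_safe: float = 1.5, v_max: float = 2.0) -> float:
    """Return a value in [0, 1]; 1 means imminent danger."""
    proximity = max(0.0, 1.0 - distance_m / d_safe)        # 1 when touching
    speed = min(1.0, max(0.0, closing_speed_m_s) / v_max)  # 1 at max speed
    return min(1.0, 0.6 * proximity + 0.4 * speed)

print(danger_index(distance_m=0.3, closing_speed_m_s=1.0))  # high risk
print(danger_index(distance_m=2.0, closing_speed_m_s=0.1))  # low risk
```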

  19. Force/position control of a robot manipulator for human-robot interaction

    Directory of Open Access Journals (Sweden)

    Neranon Paramin

    2016-01-01

Full Text Available With regard to both human and robot capabilities, human-robot interaction provides several benefits, and it is expected to be significantly developed and implemented. This work focuses on the development of real-time external force/position control used for human-robot interaction. The force-controlled robotic system, integrated with proportional-integral control, was implemented and evaluated to ensure its reliable and timely operational characteristics, in which appropriate proportional-integral gains were experimentally adopted using a set of virtual crank-turning tests. The designed robotic system is made up of a robot manipulator arm, an ATI Gamma multi-axis force/torque sensor and a real-time external PC-based control system. A proportional-integral controller has been developed to provide stable and robust force control on unknown environmental stiffness and motion. To quantify its effectiveness, the robotic system has been verified through a comprehensive set of experiments, in which force measurement and ALTER real-time path control systems were evaluated. In summary, the results indicated satisfactorily stable performance of the robot force/position control system. The gain tuning for the proportional plus integral control algorithm was successfully implemented. The best performance, as specified by the root-mean-square error of the radial force, was observed with proportional and integral gains of 0.10 and 0.005 respectively.
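A minimal sketch of such an outer-loop PI force controller is given below; it reuses the gains reported in the abstract, but the update rate, units and axis mapping are assumptions rather than the authors' implementation:

```python
# Minimal sketch of an outer-loop PI force controller that converts a force
# error into a small position correction along one axis. Gains mirror the
# values reported in the abstract; the update rate and units are assumptions.

class PIForceController:
    def __init__(self, kp: float = 0.10, ki: float = 0.005, dt: float = 0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, desired_force: float, measured_force: float) -> float:
        """Return a position offset (e.g., mm) along the force-controlled axis."""
        error = desired_force - measured_force
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

controller = PIForceController()
offset = controller.update(desired_force=10.0, measured_force=8.5)
```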

  20. Human-Robot Interaction in High Vulnerability Domains

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

    Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. The complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots; from direct manual control, shared human-robotic control, or no active human control (i.e. human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of the technologies on the system performance and the on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.

  1. A posthuman liturgy? Virtual worlds, robotics, and human flourishing.

    Science.gov (United States)

    Shatzer, Jacob

    2013-01-01

    In order to inspire a vision of biotechnology that affirms human dignity and human flourishing, the author poses questions about virtual reality and the use of robotics in health care. Using the concept of 'liturgy' and an anthropology of humans as lovers, the author explores how virtual reality and robotics in health care shape human moral agents, and how such shaping could influence the way we do or do not pursue a 'posthuman' future.

  2. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Science.gov (United States)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  3. Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback

    Directory of Open Access Journals (Sweden)

    Cetin Mericli

    2011-06-01

    Full Text Available A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder and gets more tedious and error prone as the complexity of the task increases. In this paper, we present a new learning from demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over the control to correct the behavior when the robot selects a wrong action to be executed. Corrections are captured as new state-action pairs and the default controller output is replaced by the demonstrated corrections during autonomous execution when the current state of the robot is decided to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball dribbling task performed against stationary defender robots in a robot soccer scenario, where physical Aldebaran Nao humanoid robots are used. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
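A minimal sketch of the replay logic described above (nearest corrected state wins if it is close enough) is shown below; the state encoding, distance metric and threshold are assumptions, not the paper's ball-dribbling features:

```python
# Minimal sketch of complementary corrective demonstration: use the
# hand-coded controller by default, but replay a human correction when
# the current state is close to a previously corrected state. The state
# encoding, distance metric, and threshold are assumptions.
import numpy as np

corrections = []   # list of (state, corrected action) pairs

def record_correction(state: np.ndarray, action: str) -> None:
    corrections.append((state.copy(), action))

def select_action(state: np.ndarray, hand_coded_action: str,
                  threshold: float = 0.2) -> str:
    if corrections:
        dists = [np.linalg.norm(state - s) for s, _ in corrections]
        i = int(np.argmin(dists))
        if dists[i] < threshold:          # similar to a corrected state
            return corrections[i][1]      # replay the human's correction
    return hand_coded_action              # otherwise trust the base controller
```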

  4. Cognitive neuroscience robotics A synthetic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume A describes how human cognitive functions can be replicated in artificial systems such as robots, and investigates how artificial systems could acquire intelligent behaviors through interaction with others and their environment.

  5. Applications of artificial intelligence in safe human-robot interactions.

    Science.gov (United States)

    Najmaei, Nima; Kermani, Mehrdad R

    2011-04-01

The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information about the human's footprints from the sensory system and infers the data necessary for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interference between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
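To illustrate the SOM ingredient only (not the superquadric modeling or the reactive controller), here is a minimal self-organizing-map update fitting simulated 2-D footprint positions; the map size, learning rate and neighborhood width are assumptions:

```python
# Minimal sketch of a self-organizing map (SOM) fitting 2-D "footprint"
# positions of a person in the robot workspace. Map size, learning rate,
# and neighborhood width are assumptions.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 3, size=(10, 10, 2))     # 10x10 map over a 3 m x 3 m area

def som_update(sample: np.ndarray, lr: float = 0.3, sigma: float = 1.5) -> None:
    # Best-matching unit (closest node to the observed footprint).
    d = np.linalg.norm(nodes - sample, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Pull the BMU and its grid neighbors toward the sample.
    ii, jj = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
    grid_dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    nodes += lr * h * (sample - nodes)

for footprint in rng.uniform(0.5, 2.5, size=(200, 2)):   # simulated sensor data
    som_update(footprint)
```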

  6. An intrinsically safe mechanism for physically coupling humans with robots.

    Science.gov (United States)

    O'Neill, Gerald; Patel, Harshil; Artemiadis, Panagiotis

    2013-06-01

    Robots are increasingly used in tasks that include physical interaction with humans. Examples can be found in the area of rehabilitation robotics, power augmentation robots, as well as assistive and orthotic devices. However, current methods of physically coupling humans with robots fail to provide intrinsic safety, adaptation and efficiency, which limit the application of wearable robotics only to laboratory and controlled environments. In this paper we present the design and verification of a novel mechanism for physically coupling humans and robots. The device is intrinsically safe, since it is based on passive, non-electric features that are not prone to malfunctions. The device is capable of transmitting forces and torques in all directions between the human user and the robot. Moreover, its re-configurable nature allows for easy and consistent adjustment of the decoupling force. The latter makes the mechanism applicable to a wide range of human-robot coupling applications, ranging from low-force rehabilitation-therapy scenarios to high-force augmentation cases.

  7. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Joachim de Greeff

Full Text Available Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in specific. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  8. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in specific. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.
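The turn-based language game with a "learning preference" can be sketched in a few lines: the robot asks about the object it is least confident about, which is the kind of social bid described above. The object set, confidence scores and update rule are toy assumptions:

```python
# Toy sketch of a turn-based language game in which the robot signals a
# learning preference by asking about its least certain object. Objects,
# confidence values, and the update rule are assumptions for illustration.

belief = {"ball": 0.9, "cup": 0.4, "block": 0.1}   # confidence per object label

def choose_query() -> str:
    """Social bid: request the label the robot is least confident about."""
    return min(belief, key=belief.get)

def teacher_labels(obj: str) -> str:
    return obj            # a perfectly reliable (simulated) teacher

for _ in range(3):        # three tutoring turns
    target = choose_query()
    word = teacher_labels(target)
    belief[target] = min(1.0, belief[target] + 0.3)   # reinforce the mapping
    print(f"robot asks about '{target}', hears '{word}'")
```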

  9. Transferring human impedance regulation skills to robots

    CERN Document Server

    Ajoudani, Arash

    2016-01-01

    This book introduces novel thinking and techniques to the control of robotic manipulation. In particular, the concept of teleimpedance control as an alternative method to bilateral force-reflecting teleoperation control for robotic manipulation is introduced. In teleimpedance control, a compound reference command is sent to the slave robot including both the desired motion trajectory and impedance profile, which are then realized by the remote controller. This concept forms a basis for the development of the controllers for a robotic arm, a dual-arm setup, a synergy-driven robotic hand, and a compliant exoskeleton for improved interaction performance.

  10. Robot perception errors and human resolution strategies in situated human-robot dialogue

    OpenAIRE

    Schutte, Niels; Kelleher, John; MacNamee, Brian

    2017-01-01

    We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot’s perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot’s understanding of the environment. We quantify the impact of perception errors on the dialogues, ...

  11. Simplified Human-Robot Interaction: Modeling and Evaluation

    Directory of Open Access Journals (Sweden)

    Balazs Daniel

    2013-10-01

Full Text Available In this paper a novel concept of human-robot interaction (HRI) modeling is proposed. Including factors like trust in automation, situational awareness, expertise and expectations, a new user experience framework is formed for industrial robots. Service Oriented Robot Operation, proposed in a previous paper, creates an abstract level in HRI and is also included in the framework. This concept is evaluated with exhaustive tests. Results show that significant improvement in task execution may be achieved and that the new system is more usable for operators with less experience with robotics, i.e., personnel typical of small and medium enterprises (SMEs).

  12. Dynamic Task Allocation for Human-Robot Teams

    NARCIS (Netherlands)

    Giele, T.R.A.; Mioch, T.; Neerincx, M.A.; Meyer, J.J.C.

    2015-01-01

Artificial agents, such as robots, are increasingly deployed for teamwork in dynamic, high-demand environments. This paper presents a framework that applies context information to establish task (re)allocations which improve a human-robot team's performance. Based on the framework, a model for adapti
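As a toy illustration of context-driven (re)allocation (the cost terms, weights and capability flag are assumptions, not the paper's model), a task can be assigned to whichever team member is currently cheaper to load:

```python
# Toy sketch of context-driven task (re)allocation between a human and a
# robot: each candidate gets a cost from current workload and capability,
# and the task goes to the cheaper one. Weights and terms are assumptions.

def allocate(task: str, human_load: float, robot_load: float,
             robot_capable: bool) -> str:
    """Return 'human' or 'robot' for the given context."""
    if not robot_capable:
        return "human"
    human_cost = 1.0 * human_load + 0.2          # humans cost more per load unit
    robot_cost = 0.6 * robot_load
    return "robot" if robot_cost <= human_cost else "human"

print(allocate("inspect panel", human_load=0.8, robot_load=0.3, robot_capable=True))
print(allocate("negotiate plan", human_load=0.2, robot_load=0.1, robot_capable=False))
```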

  13. Collaborative gaming and competition for CS-STEM education using SPHERES Zero Robotics

    Science.gov (United States)

    Nag, Sreeja; Katz, Jacob G.; Saenz-Otero, Alvar

    2013-02-01

There is widespread investment of resources in the fields of Computer Science, Science, Technology, Engineering, Mathematics (CS-STEM) education to improve STEM interests and skills. This paper addresses the goal of revolutionizing student education using collaborative gaming and competition, both in virtual simulation environments and on real hardware in space. The concept is demonstrated using the SPHERES Zero Robotics (ZR) Program which is a robotics programming competition. The robots are miniature satellites called SPHERES—an experimental test bed developed by the MIT SSL on the International Space Station (ISS) to test navigation, formation flight and control algorithms in microgravity. The participants compete to win a technically challenging game by programming their strategies into the SPHERES satellites, completely from a web browser. The programs are demonstrated in simulation, on ground hardware and then in a final competition when an astronaut runs the student software aboard the ISS. ZR had a pilot event in 2009 with 10 High School (HS) students, a nationwide pilot tournament in 2010 with over 200 HS students from 19 US states, a summer tournament in 2010 with ~150 middle school students and an open-registration tournament in 2011 with over 1000 HS students from USA and Europe. The influence of collaboration was investigated by (1) building new web infrastructure and an Integrated Development Environment where intensive inter-participant collaboration is possible, (2) designing and programming a game to solve a relevant formation flight problem, collaborative in nature; and (3) structuring a tournament such that inter-team collaboration is mandated. This paper introduces the ZR web tools, assesses the educational value delivered by the program using space and games and evaluates the utility of collaborative gaming within this framework. There were three types of collaborations as variables—within matches (to achieve game objectives), inter

  14. Comparison of human and humanoid robot control of upright stance.

    Science.gov (United States)

    Peterka, Robert J

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. The human system may serve as a guide for improved robot control, but should not be directly copied because the
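A toy sketch of the "sensory re-weighting" idea (channel names, weights and the adaptation rule are assumptions, not the model identified in the paper) is shown below: a fused tilt estimate down-weights channels that disagree with it:

```python
# Toy sketch of sensory re-weighting for stance control: fuse body-tilt
# estimates from several channels and shift weight away from channels that
# disagree with the fused estimate. Channels, weights, and the adaptation
# rule are assumptions for illustration.

def fuse_tilt(estimates: dict, weights: dict) -> float:
    return sum(weights[k] * estimates[k] for k in estimates)

def reweight(estimates: dict, weights: dict, rate: float = 0.1) -> dict:
    fused = fuse_tilt(estimates, weights)
    raw = {k: weights[k] / (1.0 + rate * abs(estimates[k] - fused))
           for k in weights}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}   # renormalize to sum to 1

weights = {"vestibular": 0.4, "proprioceptive": 0.4, "visual": 0.2}
estimates = {"vestibular": 1.0, "proprioceptive": 1.1, "visual": 6.0}  # degrees
weights = reweight(estimates, weights)   # the outlying visual channel loses weight
```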

  15. Managing Uncertainty during Collaborative Problem Solving in Elementary School Teams: The Role of Peer Influence in Robotics Engineering Activity

    Science.gov (United States)

    Jordan, Michelle E.; McDaniel, Reuben R., Jr.

    2014-01-01

    This study investigated how interaction with peers influenced the ways students managed uncertainty during collaborative problem solving in a 5th-grade class. The analysis focused on peer responses to individuals' attempts to manage uncertainty they experienced while engaged in collaborative efforts to design, build, and program robots and…

  17. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  18. Human-Robot Site Survey and Sampling for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Bualat, Maria; Edwards, Laurence; Flueckiger, Lorenzo; Kunz, Clayton; Lee, Susan Y.; Park, Eric; To, Vinh; Utz, Hans; Ackner, Nir

    2006-01-01

    NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.

  19. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Humans and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  20. Robots

    Institute of Scientific and Technical Information of China (English)

    驷萍

    1997-01-01

An article introducing robots that reads so well, so freshly! First, it clears up the origin of the word robot: "It was used first in 1920 in a play by Czechoslovak writer Karel Capek. The word robot comes from the Czech word for slave." That sentence supplies a date, 1920, and the article immediately builds on it: "The word robot, and robots themselves, are less than 100 years old. But humans have been dreaming of real and imaginary copies of themselves for thousands of years." In this way the article creates its sweep: 1920 and "thousands of years" naturally form a striking contrast. 1954 and the 1960s are two dates that cannot go unmentioned when talking about robots: "In 1954, the world's first robot was produced in the United States. During the 1960s, the first industrial robots appeared beside human workers in factories." The following sentence lets us appreciate not only that the "slave" in "the Czech word for slave" is apt, but also why the plots in films and novels in which robots "rebel" and even "killed the humans who made them" are not without cause: "What do today's robots do? Robots do work. Work that humans consider uninteresting or dangerous. ...do many jobs that people consider tiring." The article weighs the merits and faults of robots together...

  1. Framing Human-Robot Task Communication as a POMDP

    CERN Document Server

    Woodward, Mark P

    2012-01-01

    As general purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like for non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; we call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals and it must choose actions that facilitate this inference. In this paper we propose the use of a partially observable Markov decision process (POMDP) for representing the task communication problem; with the unobservable task details and unobservable intentions of the human teacher captured in the state, with all signals from the human represented as observations, and with the cost function chosen to penalize uncertainty. We work through an example representation of task communication as a POMDP, and present results from a user experiment on an interactive virtual robot, compared with a human controlled virtual...
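
    As an illustration of the framing described above, the following minimal Python sketch represents task communication as a belief update over hidden task hypotheses, with an uncertainty-penalizing cost and a one-step greedy choice of the robot's next action. All task names, observation labels and probabilities are invented for illustration and are not taken from the cited work.

        # Minimal sketch (not the authors' code): task communication as a POMDP-like
        # belief update over hidden task hypotheses, with a cost that penalizes
        # uncertainty (belief entropy). Observation model values are illustrative.
        import math

        TASKS = ["fetch_cup", "fetch_book"]          # hidden task details (state)
        OBS = ["points_left", "points_right"]        # unstructured human signals

        # P(observation | task, robot_action) -- made-up numbers for illustration
        def obs_model(obs, task, action):
            if action == "ask_where":
                return 0.8 if (task == "fetch_cup") == (obs == "points_left") else 0.2
            return 0.5                               # uninformative action

        def update_belief(belief, action, obs):
            """Bayes update of the belief over hidden tasks."""
            post = {t: belief[t] * obs_model(obs, t, action) for t in TASKS}
            z = sum(post.values()) or 1e-12
            return {t: p / z for t, p in post.items()}

        def entropy(belief):
            """Cost proxy: uncertainty about the task the human is communicating."""
            return -sum(p * math.log(p + 1e-12) for p in belief.values())

        def greedy_action(belief, actions=("ask_where", "wait")):
            """Pick the action with the lowest expected posterior entropy (1-step lookahead)."""
            def expected_entropy(a):
                total = 0.0
                for o in OBS:
                    p_o = sum(belief[t] * obs_model(o, t, a) for t in TASKS)
                    total += p_o * entropy(update_belief(belief, a, o))
                return total
            return min(actions, key=expected_entropy)

        b = {"fetch_cup": 0.5, "fetch_book": 0.5}
        a = greedy_action(b)                          # -> "ask_where"
        b = update_belief(b, a, "points_left")
        print(a, b)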

  2. Robot Tracking of Human Subjects in Field Environments

    Science.gov (United States)

    Graham, Jeffrey; Shillcutt, Kimberly

    2003-01-01

    Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.

  3. An Integrated Human System Interaction (HSI) Framework for Human-Agent Team Collaboration Project

    Data.gov (United States)

    National Aeronautics and Space Administration — As space missions become more complex and as mission demands increase, robots, human-robot mixed initiative teams and software autonomy applications are needed to...

  4. Presence of Life-Like Robot Expressions Influences Children’s Enjoyment of Human-Robot Interactions in the Field

    NARCIS (Netherlands)

    Cameron, David; Fernando, Samuel; Collins, Emily; Millings, Abigail; Moore, Roger; Sharkey, Amanda; Evers, Vanessa; Prescott, Tony

    2015-01-01

    Emotions, and emotional expression, have a broad influence on the interactions we have with others and are thus a key factor to consider in developing social robots. As part of a collaborative EU project, this study examined the impact of lifelike affective facial expressions, in the humanoid robot

  6. Human-robot interaction research for current and future military applications: from the laboratory to the field

    Science.gov (United States)

    Cosenzo, Keryl A.; Barnes, Michael J.

    2010-04-01

    Unmanned air and ground vehicles are an integral part of military operations. However, the use of the robot goes beyond moving the platform from point A to point B. The operator who is responsible for the robots will have a multitude of tasks to complete; route planning for the robot, monitoring the robot during the mission, monitoring and interpreting the sensor information received by the robot, and communicating that information with others. As a result, the addition of robotics can be considered a burden on the operator if not integrated appropriately into the system. The goal of the US Army Research Laboratory's Human Robotic Interaction (HRI) Program is to enable the Soldier to use robotic systems in a way that increases performance, that is, to facilitate effective collaboration between unmanned systems and the Soldier. The program uses multiple research approaches; modeling, simulation, laboratory experimentation, and field experimentation to achieve this overall goal. We have basic and applied research in HRI to include supervisory control, mounted and dismounted robotic control, and mitigation strategies for the HRI environment. This paper describes our HRI program across these various domains and how our research is supporting both current and future military operations.

  7. Human Robotic Systems (HRS): Robonaut 2 Technologies Element

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the Robonaut 2 (R2) Technology Project Element within Human Robotic Systems (HRS) is to developed advanced technologies for infusion into the Robonaut 2...

  8. Hybrid Battery Ultracapacitor System For Human Robotic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to develop a hybrid battery-ultra capacitor storage system that powers human-robotic systems in space missions. Space missions...

  9. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  10. The Role of Users' Concepts of the Robot in Human-Robot Spatial Instruction

    Science.gov (United States)

    Fischer, Kerstin

    Spatial instructions are always delivered for a particular communication partner. In this paper I investigate the role of users' concepts of their communication partner in human-robot interaction by analysing the spatial language choices speakers make in three comparable corpora with three different robots. I show that the users' concepts of their artificial communication partner are only mildly shaped by the appearance of the robot, and thus that users do not mindlessly use all clues they can get about their communication partner in order to formulate their spatial instructions. Instead, spatial instruction in human-robot interaction also depends on the users' models of the communication situation, as well as on external variables, such as gender.

  11. Robotic Powered Transfer Mechanism modeling on Human Muscle Structure

    Science.gov (United States)

    Saito, Yukio

    In engineering it is usually assumed that one power source operates one joint. The supporting movement mechanism of a living organism, however, is a multi-joint mechanism: considerably different from a mechanical movement mechanism, it involves two pairs of uni-articular muscles and a pair of bi-articular muscles. In the leg, movements observed in a short run, including leg idling, heel contact and toeing, are driven by the bi-articular muscles of the thigh, which provide strong legs to support the body weight. The pursuit of versatility in welfare robots brings their comparison with conventional machinery or industrial robots to the fore. The demand for safety and for technology that allows elderly people to operate the robot is growing in society. The robot must be safe when it is used together with other welfare equipment, and a simpler system that avoids difficult operation has to be constructed. The appearance of recent care and assistance robots is becoming more similar to the human arm than to industrial robots. As can easily be imagined from industrial robots, a mid-heavyweight articulated robot able to support 60-70 kgf, combined with a large-output motor and reduction gears, is next to impossible to install in a bathroom. This research indicates that the upper-limb arm and lower-limb thigh of humans and animals contain coalitional muscles, and that the movement of uni-articular and bi-articular muscles conjures the image of new actuators.

  12. Cognitive neuroscience robotics B: Analytic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.

  13. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also expected to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  14. The Law of Attraction in Human-Robot Interaction

    OpenAIRE

    Eunil Park; Dallae Jin; del Pobil, Angel P.

    2012-01-01

    Following the law of attraction in human‐human interaction, this paper examines the effects of a robot’s personality and a human’s personality in various human‐robot interactions. This study was conducted using robots that were programmed to mimic both extroverted and introverted personality types, as well as humans who were classified as having introverted, extroverted or intermediate personality types. Using a 3 x 2 between‐ subjects experiment with 120 participants, the results indicated t...

  15. Affect in Human-Robot Interaction

    Science.gov (United States)

    2014-01-01

  16. Moral appearances: emotions, robots, and human morality

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satis

  17. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns via data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns thus represented with a reasonable accuracy ratio.
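
    To make the pipeline above concrete, the following minimal Python sketch quantizes a 2D hand trajectory into 8-direction chain-code symbols and scores the symbol sequence against small discrete HMMs with the forward algorithm. It is a generic illustration, not the paper's modified chain-code method or trained models, and all probabilities are made up.

        # Minimal sketch (assumed parameters): chain-code quantization of a hand
        # trajectory, then gesture classification with the scaled forward algorithm.
        import math

        def chain_code(points):
            """Map consecutive 2D points to 8-direction chain-code symbols (0..7)."""
            symbols = []
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                ang = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
                symbols.append(int(round(ang / (math.pi / 4))) % 8)
            return symbols

        def forward_loglik(obs, pi, A, B):
            """Scaled forward algorithm: log P(obs | discrete HMM (pi, A, B))."""
            n = len(pi)
            alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
            loglik = 0.0
            for t, o in enumerate(obs):
                if t > 0:
                    alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                             for j in range(n)]
                scale = sum(alpha) or 1e-300
                alpha = [a / scale for a in alpha]
                loglik += math.log(scale)
            return loglik

        def peaked_emission(peak):
            """Emission row concentrated on one chain-code symbol (sums to 1)."""
            return [0.72 if k == peak else 0.04 for k in range(8)]

        # Two toy 2-state gesture models; probabilities are made up for illustration.
        MODELS = {
            "swipe_right": ([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]],
                            [peaked_emission(0), peaked_emission(0)]),
            "raise_hand":  ([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]],
                            [peaked_emission(2), peaked_emission(2)]),
        }

        trajectory = [(0.0, 0.0), (1.0, 0.05), (2.0, 0.0), (3.0, 0.1)]  # rightward motion
        symbols = chain_code(trajectory)
        best = max(MODELS, key=lambda m: forward_loglik(symbols, *MODELS[m]))
        print(symbols, best)        # -> [0, 0, 0] swipe_right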

  19. A multi-sensorial hybrid control for robotic manipulation in human-robot workspaces.

    Science.gov (United States)

    Pomares, Jorge; Perea, Ivan; García, Gabriel J; Jara, Carlos A; Corrales, Juan A; Torres, Fernando

    2011-01-01

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach.
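
    A highly simplified Python sketch of the two-phase idea described above is given below: an approach phase driven by a visual servoing error, a manipulation phase driven by tactile grip-force feedback, and a safety override when a human is detected nearby. Thresholds, gains and the sensor interface are assumptions for illustration, not values from the paper.

        # Minimal sketch (illustrative only) of a two-phase grasping controller with
        # a human-proximity safety override.
        from dataclasses import dataclass

        SAFE_DISTANCE = 0.8          # metres; assumed safety threshold
        GRASP_DISTANCE = 0.05        # switch to manipulation below this visual error
        TARGET_FORCE = 2.0           # newtons of grip force during manipulation

        @dataclass
        class Sensors:
            visual_error: tuple       # (dx, dy, dz) from visual servoing, metres
            human_distance: float     # nearest human, from mocap/indoor localization
            grip_force: float         # from the tactile sensor

        def control_step(s: Sensors):
            """Return a (mode, command) pair for one control cycle."""
            if s.human_distance < SAFE_DISTANCE:
                return "hold", (0.0, 0.0, 0.0)                    # safety: stop motion
            dist = sum(e * e for e in s.visual_error) ** 0.5
            if dist > GRASP_DISTANCE:
                # Phase 1: approach -- proportional visual servo command.
                k = 0.5
                return "approach", tuple(-k * e for e in s.visual_error)
            # Phase 2: manipulation -- regulate grip force with tactile feedback.
            force_error = TARGET_FORCE - s.grip_force
            return "manipulate", (0.1 * force_error, 0.0, 0.0)

        print(control_step(Sensors((0.3, 0.1, 0.0), 2.0, 0.0)))   # approach
        print(control_step(Sensors((0.01, 0.0, 0.0), 2.0, 1.0)))  # manipulate
        print(control_step(Sensors((0.3, 0.1, 0.0), 0.4, 0.0)))   # hold (human close)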

  20. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Directory of Open Access Journals (Sweden)

    Juan A. Corrales

    2011-10-01

    Full Text Available Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach.

  1. Preliminary Framework for Human-Automation Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The Department of Energy’s Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet as well as utilizing automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as the human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator’s use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two-phased development of a preliminary HAC framework. The framework developed in the first phase was used as the

  2. Performance Analysis Of A Upnp/Dhcompliant Robotic Adapter For Collaborative Tasks Development

    Directory of Open Access Journals (Sweden)

    Alejandro Alvarez Vazquez

    2012-02-01

    Full Text Available This paper describes the performance analysis of an adapter compliant with the UPnP DHCompliant (Digital Home Compliant) standard for a service robot. The DHCompliant adapter has been developed to overcome some limitations of the UPnP protocol and to develop new DHC concepts. Moreover, it showcases with a particular example how the open DHC protocol is useful for the development of collaborative tasks, localization, energy management and other fields altogether. This interoperability between devices yields a virtual device that holds both the control-point logic and the device logic simultaneously.

  3. A Taxonomy of Human-Agent Team Collaborations

    NARCIS (Netherlands)

    Neef, R.M.

    2006-01-01

    Future command teams will be heavily supported by artificial actors. This paper introduces a taxonomy of collaboration types in human – agent teams. Using two classifying dimensions, coordination type and collaboration type, eight different classes of human – agent collaborations transpire. These cl

  4. An Exoskeleton Robot for Human Forearm and Wrist Motion Assist

    Science.gov (United States)

    Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo

    The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human amplifier, haptic interface, rehabilitation device, and assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as the elderly or slightly disabled, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for forearm pronation/supination, wrist flexion/extension and ulnar/radial deviation. The paper describes the wrist anatomy toward the development of the exoskeleton robot, the hardware design of the exoskeleton robot and the EMG-based control method. The skin surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin surface EMG signals as main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assistance. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.
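
    The following minimal Python sketch illustrates the general idea of EMG-driven fuzzy motion assist: a normalized muscle activation level is fuzzified with triangular membership functions and mapped to an assist torque by a small rule base with weighted-average defuzzification. The membership functions, rule consequents and torque values are invented for illustration and do not represent the W-EXOS controller.

        # Minimal sketch (assumed membership functions and rule table): map a
        # normalized surface-EMG activation level to an assist torque.

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def assist_torque(emg):
            """emg: muscle activation in [0, 1]; returns an assist torque in N*m."""
            # Fuzzify the input.
            low  = tri(emg, -0.4, 0.0, 0.4)
            med  = tri(emg,  0.1, 0.5, 0.9)
            high = tri(emg,  0.6, 1.0, 1.4)
            # Rule consequents (singleton torques, illustrative values).
            rules = [(low, 0.0), (med, 2.5), (high, 5.0)]
            num = sum(w * torque for w, torque in rules)
            den = sum(w for w, _ in rules) or 1e-9
            return num / den

        for level in (0.1, 0.5, 0.9):
            print(level, round(assist_torque(level), 2))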

  5. The Geology Robot: A Collaborative Effort for improving Outcrop Visualization and Analysis

    Science.gov (United States)

    Fredrick, K. C.; Valoski, M. P.; Rodi, A. F.

    2010-12-01

    ...an opportunity to use a simple tool to accomplish previously difficult, dangerous, or even impossible tasks. This work is an example of cross-discipline collaboration on our campus. It was conceived from a Geologist’s point of view, shared with a Robotics expert, and offered as a challenge to the university’s Robotics students. Several interested students designed and built the robot from scratch as an extra-curricular project. This is a great demonstration of capturing the interests of students across disciplinary boundaries to achieve a unique and creative outcome.

  6. Muecas: a multi-sensor robotic head for affective human robot interaction and imitation.

    Science.gov (United States)

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-04-28

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.
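
    As a rough illustration of FACS-driven control of a robotic head, the Python sketch below sums joint offsets associated with active action units (AUs) and clamps the result to joint limits. The joint names, AU-to-joint mapping and numeric values are assumptions for illustration, not the Muecas or RoboComp implementation.

        # Minimal sketch (illustrative mapping only): compose a facial expression on a
        # robotic head from FACS action units by summing assumed joint offsets.

        AU_TO_JOINTS = {
            "AU1":  {"eyebrow_left": 0.30, "eyebrow_right": 0.30},            # inner brow raiser
            "AU4":  {"eyebrow_left": -0.25, "eyebrow_right": -0.25},          # brow lowerer
            "AU12": {"mouth_corner_left": 0.40, "mouth_corner_right": 0.40},  # lip corner puller
            "AU26": {"jaw": 0.35},                                            # jaw drop
        }
        JOINT_LIMITS = {"eyebrow_left": 0.5, "eyebrow_right": 0.5,
                        "mouth_corner_left": 0.6, "mouth_corner_right": 0.6, "jaw": 0.5}

        def expression_to_joints(active_aus):
            """Sum AU contributions into joint targets (radians) and clamp them to limits."""
            joints = {name: 0.0 for name in JOINT_LIMITS}
            for au, intensity in active_aus.items():
                for joint, offset in AU_TO_JOINTS.get(au, {}).items():
                    joints[joint] += intensity * offset
            return {j: max(-JOINT_LIMITS[j], min(JOINT_LIMITS[j], v))
                    for j, v in joints.items()}

        # A smile-like expression driven mainly by AU12, with a slightly opened jaw.
        print(expression_to_joints({"AU12": 1.0, "AU26": 0.3}))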

  7. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    Directory of Open Access Journals (Sweden)

    Felipe Cid

    2014-04-01

    Full Text Available This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.

  8. Mobile app for human-interaction with sitter robots

    Science.gov (United States)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments is very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits to the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use, they should maintain awareness of surroundings, and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills, and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user health over time, and provides conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that

  9. Human guidance of mobile robots in complex 3D environments using smart glasses

    Science.gov (United States)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.

  10. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  11. Extending NGOMSL Model for Human-Humanoid Robot Interaction in the Soccer Robotics Domain

    Directory of Open Access Journals (Sweden)

    Rajesh Elara Mohan

    2008-01-01

    Full Text Available In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have been exhibiting a stronger presence in commercial markets and very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s. Also, the interaction design was able to detect the fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior version of humanoid robots in the RoboCup 2007 humanoid soccer league.

  12. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  13. Anthropomorphic Design of the Human-Like Walking Robot

    Institute of Scientific and Technical Information of China (English)

    Ming-Hsun Chiang; Fan-Ren Chang

    2013-01-01

    In this paper, we present a new concept for the mechanical design of a humanoid robot. The goal is to build a humanoid robot with a new structure that is more suitable for human-like walking, with the characteristics of knee stretch, heel contact, and toe-off. Inspired by the human skeleton, we made an anthropomorphic pelvis for the humanoid robot. In comparison with conventional humanoid robots, with such an anthropomorphic pelvis our robot is capable of adjusting the center of gravity of the upper body through pelvic tilt, thus reducing the required torque at the ankle joint and the velocity variations in human-like walking. With a more precise analysis of the foot mechanism, a fixed-length inverted pendulum can be used to describe the dynamics of biped walking, thus avoiding the redundant work and power consumption of a variable-length inverted pendulum system. As a result of the new structure we propose, the humanoid robot is able to walk with a human-like gait.
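
    For context, the fixed-length (linear) inverted pendulum referred to above is commonly written in the following textbook form, where x is the horizontal position of the centre of mass kept at constant height z_c, p_x is the pivot (ankle) position, g is gravity, and T_c is the characteristic time constant of the motion; the paper's exact formulation may differ.

        % Standard linear inverted pendulum model (textbook form, shown for context)
        \ddot{x} = \frac{g}{z_c}\,(x - p_x), \qquad T_c = \sqrt{\frac{z_c}{g}}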

  14. Sampling Based Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    2010-01-01

    Open-ended human environments, such as pedestrian streets, hospital corridors, train stations etc., are places where robots start to emerge. Hence, being able to plan safe and natural trajectories in these dynamic environments is an important skill for future generations of robots. In this work...... method for selecting the best trajectory in the RRT, according to the cost of traversing a potential field. Furthermore the RRT expansion is enhanced to direct the search and account for the kinodynamic robot constraints. A model predictive control (MPC) approach is taken to accommodate...
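
    The selection step mentioned above, choosing the best RRT trajectory according to the cost of traversing a potential field, can be illustrated with the short Python sketch below. The potential field, person positions and candidate paths are all invented for illustration and are not taken from the thesis.

        # Minimal sketch (illustrative numbers): score candidate trajectories by the
        # cost of traversing a potential field that penalizes proximity to people.
        import math

        GOAL = (5.0, 0.0)
        PEOPLE = [(2.5, 0.0)]          # positions of detected persons (assumed)

        def potential(p):
            """Attractive term toward the goal plus repulsive terms around persons."""
            attract = 0.5 * math.dist(p, GOAL) ** 2
            repulse = sum(5.0 / (math.dist(p, h) ** 2 + 0.1) for h in PEOPLE)
            return attract + repulse

        def trajectory_cost(path):
            """Integrate the potential along the path (segment length x mean potential)."""
            cost = 0.0
            for a, b in zip(path, path[1:]):
                cost += math.dist(a, b) * 0.5 * (potential(a) + potential(b))
            return cost

        candidates = [
            [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)],         # straight, near person
            [(0, 0), (1, -1), (2, -1.5), (3, -1.5), (4, -1), (5, 0)], # detour around person
        ]
        best = min(candidates, key=trajectory_cost)
        print([round(trajectory_cost(c), 1) for c in candidates])
        print("detour chosen:", best is candidates[1])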

  15. Collaborative multi-target tracking using networked micro-robotic vehicles

    Science.gov (United States)

    Biswas, Subir; Gupta, Sonny; Yu, Fan; Wu, Tao

    2007-04-01

    This paper presents a collaborative target tracking framework, in which distributed mechanisms are developed for tracking multiple mobile targets using a team of networked micro robotic vehicles. Applications of such a framework would include detection of multi-agent intrusion, network-assisted attack localization, and other collaborative search scenarios. The key idea of the developed framework is to design distributed algorithms that can be executed by tracking entities using a mobile ad hoc network. The paper comprises the following components. First, the software and hardware architectural detail of a Swarm Capable Autonomous Vehicle (SCAV) system that is used as the mobile platform in our target tracking application is presented. Second, the details of an indoor self-localization and Kalman-filter-based navigation system for the SCAV are presented. Third, a formal definition of the collaborative multi-target tracking problem and a heuristic-based networked solution are developed. Finally, the performance of the proposed tracking framework is evaluated on a laboratory test-bed of a fleet of SCAV vehicles. A detailed system characterization in terms of localization, navigation, and collaborative tracking performance is performed on the SCAV test-bed. In addition to valuable implementation insights about the localization, navigation, filtering, and ad hoc networking processes, a number of interesting conclusions about the overall tracking system are presented.
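
    The Kalman-filter-based navigation mentioned above can be illustrated with a generic textbook example: the Python sketch below fuses noisy position fixes into a smoothed position and velocity estimate with a constant-velocity model. The noise parameters and measurements are assumptions for illustration, not the SCAV implementation.

        # Minimal sketch (generic textbook filter): constant-velocity Kalman filter
        # fusing noisy position fixes from an indoor localization system.
        import numpy as np

        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [position, velocity]
        H = np.array([[1.0, 0.0]])                 # we only measure position
        Q = 0.01 * np.eye(2)                       # process noise (assumed)
        R = np.array([[0.25]])                     # measurement noise (assumed)

        x = np.zeros((2, 1))                       # initial state estimate
        P = np.eye(2)                              # initial covariance

        def kf_step(x, P, z):
            """One predict/update cycle with position measurement z."""
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update
            y = np.array([[z]]) - H @ x            # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            return x, P

        for z in (0.02, 0.11, 0.19, 0.33, 0.41):   # noisy position fixes
            x, P = kf_step(x, P, z)
        print(x.ravel())                           # estimated [position, velocity]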

  16. The Law of Attraction in Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Eunil Park

    2012-07-01

    Full Text Available Following the law of attraction in human‐human interaction, this paper examines the effects of a robot’s personality and a human’s personality in various human‐robot interactions. This study was conducted using robots that were programmed to mimic both extroverted and introverted personality types, as well as humans who were classified as having introverted, extroverted or intermediate personality types. Using a 3 x 2 between‐ subjects experiment with 120 participants, the results indicated that participants who interacted with a similar personality robot were more comfortable than those who engaged with a different personality robot. Yet, the evaluation of social presence presented an opposing result. Both the implications and limitations of the present study, as well as guidelines for future research, are discussed.

  17. Human Robotic Systems (HRS): National Robotics Initiative (NRI) & Robotics Technology Pipeline Element

    Data.gov (United States)

    National Aeronautics and Space Administration — During 2012, NASA funded 9 grants to research institutions and universities, after reviews by NSF panels and NASA robotics experts.  The 9 research grantees...

  18. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  19. A Guide for Developing Human-Robot Interaction Experiments in the Robotic Interactive Visualization and Experimentation Technology (RIVET) Simulation

    Science.gov (United States)

    2016-05-01

  20. Human-Robot Interaction: Intention Recognition and Mutual Entrainment

    Science.gov (United States)

    2012-08-18

  1. Advancing the Strategic Messages Affecting Robot Trust Effect: The Dynamic of User- and Robot-Generated Content on Human-Robot Trust and Interaction Outcomes.

    Science.gov (United States)

    Liang, Yuhua Jake; Lee, Seungcheol Austin

    2016-09-01

    Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, the trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people received before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by affording the persuasive influences from user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.

  2. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. A Social Cognitive Neuroscience Stance on Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Chaminade Thierry

    2011-12-01

    Full Text Available Robotic devices, thanks to the controlled variations in their appearance and behaviors, provide useful tools to test hypotheses pertaining to social interactions. These agents were used to investigate one theoretical framework, resonance, which is defined, at the behavioral and neural levels, as an overlap between first- and third-person representations of mental states such as motor intentions or emotions. Behaviorally, we found a reduced, but significant, resonance towards a humanoid robot displaying biological motion, compared to a human. Using neuroimaging, we have reported that while perceptual processes in the human occipital and temporal lobe are more strongly engaged when perceiving a humanoid robot than a human action, activity in areas involved in motor resonance depends on attentional modulation for artificial agents more strongly than for human agents. Altogether, these studies using artificial agents offer valuable insights into the interaction of bottom-up and top-down processes in the perception of artificial agents.

  4. Modelling and Control for Soft Finger Manipulation and Human-Robot Interaction

    OpenAIRE

    Ficuciello, Fanny

    2010-01-01

    One of the greatest challenges of humanoid robotics is to provide robotic systems with autonomous and dextrous skills. Dextrous manipulation skills, for personal and service robots in unstructured environments, are of fundamental importance in order to accomplish manipulation tasks in human-like ways and to realize proper and safe cooperation between humans and robots. The contributions presented in this thesis are aimed at modeling and controlling multifingered robotic hands wit...

  5. Evidencing the "robot phase transition" in experimental human-algorithmic markets

    OpenAIRE

    Cartlidge, John; Cliff, Dave

    2013-01-01

    Johnson, Zhao, Hunsader, Meng, Ravindar, Carran, and Tivnan (2012) recently suggested the existence of a phase transition in the dynamics of financial markets in which there is free interaction between human traders and algorithmic trading systems ("robots"). Above a particular time-threshold, humans and robots trade with one another; below the threshold all transactions are robot-to-robot. We refer to this abrupt system transition as the "robot phase transition". Here, we conduct controlled ...

  6. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  7. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  8. Human interface, automatic planning, and control of a humanoid robot

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Y.K. [Korea Inst. of Science and Technology, Seoul (Korea, Republic of)]|[Sandia National Labs., Albuquerque, NM (United States); Kang, S.C.; Lee, S.; Cho, K.R.; Kim, H.S.; Lee, C.W. [Korea Inst. of Science and Technology, Seoul (Korea, Republic of); Park, S.M. [Jeonju Technical Coll. (Korea, Republic of)

    1998-11-01

    This paper presents an integrated robotic system consisting of human interfaces, motion- and grasp-planning algorithms, a controller, a graphical simulator, and a humanoid robot with over 60 joints. All of these subsystems are integrated in a coordinated fashion to enable the robot to perform a commanded task with as much autonomy as possible. The highest level of the system is the human interfaces, which enable a user to specify tasks conveniently and efficiently. At the mid-level, several planning algorithms generate motions of the robot body, arms, and hands automatically. At the lowest level, the motor controllers are equipped with both a position controller and a compliant motion controller to execute gross motions and contact motions, respectively. The main contributions of the work are the large-scale integration and the development of the motion planners for a humanoid robot. A hierarchical integration scheme that preserves the modularities of the human interfaces, the motion planners, and the controller has been the key for the successful integration. The set of motion planners is developed systematically so as to coordinate the motions of the body, arms, and hands to perform a large variety of tasks.

  9. Middleware Design for Swarm-Driving Robots Accompanying Humans.

    Science.gov (United States)

    Kim, Min Su; Kim, Sang Hyuck; Kang, Soon Ju

    2017-02-17

    Robots that accompany humans are being continuously studied. The Pet-Bot provides walking-assistance and object-carrying services without any specific controls, through interaction between the robot and the human in real time. However, with Pet-Bot, there is a limit to the number of robots a user can use. If this limit is overcome, the Pet-Bot can provide services in more areas. Therefore, in this study, we propose a swarm-driving middleware design adopting the concept of a swarm, which provides effective parallel movement to allow multiple human-accompanying robots to accomplish a common purpose. The functions of the middleware divide into three parts: a sequence manager for the swarm process, a messaging manager, and a relative-location identification manager. The middleware sequences the swarm process across the robots through message exchange over radio frequency (RF) communication using the IEEE 802.15.4 MAC protocol, and manages an infrared (IR) communication module that identifies relative location from IR signal strength. The swarm in this study is composed of the master, which interacts with the user, and the slaves, which have no interaction with the user. This composition is intended to control the overall swarm in synchronization with the user activity, which is difficult to predict. We evaluate the accuracy of the relative-location estimation using IR communication, the response time of the slaves to a change in user activity, and the time to organize a network according to the number of slaves.
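
    As an illustration of identifying relative location from IR signal strength, the Python sketch below inverts an assumed calibrated inverse-square falloff model to recover distance and derives a simple follower command from the distance error. The signal model, calibration constants and gains are invented for illustration and are not taken from the paper.

        # Minimal sketch (assumed signal model): estimate master-slave distance from
        # infrared received signal strength and follow at a desired distance.
        import math

        S_REF = 1000.0      # IR strength measured at the reference distance (calibration)
        D_REF = 0.5         # reference distance in metres

        def distance_from_ir(strength):
            """Invert s = S_REF * (D_REF / d)^2 to recover the distance d."""
            strength = max(strength, 1e-6)
            return D_REF * math.sqrt(S_REF / strength)

        def slave_follow_command(strength, desired=1.0, gain=0.8):
            """Simple follower rule: speed proportional to the distance error to the master."""
            error = distance_from_ir(strength) - desired
            return gain * error          # positive -> move closer to the master

        for s in (4000.0, 1000.0, 250.0):
            print(round(distance_from_ir(s), 2), round(slave_follow_command(s), 2))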

  10. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  11. Utilization of Human-Like Pelvic Rotation for Running Robot

    Directory of Open Access Journals (Sweden)

    Takuya eOtani

    2015-07-01

    Full Text Available The spring loaded inverted pendulum (SLIP) is used to model human running. It is based on a characteristic feature of human running, in which the linear-spring-like motion of the standing leg is produced by the joint stiffness of the knee and ankle. Although this model is widely used in robotics, it does not include human-like pelvic motion. In this study, we show that the pelvis actually contributes to the increase in jumping force and absorption of landing impact. On the basis of this finding, we propose a new model, SLIP2 (spring loaded inverted pendulum with pelvis), to improve running in humanoid robots. The model is composed of a body mass, a pelvis, and leg springs, and it can control its springs while running by use of pelvic movement in the frontal plane. To achieve running motions, we developed a running control system that includes a pelvic oscillation controller to attain control over jumping power and a landing placement controller to adjust the running speed. We also developed a new running robot by using the SLIP2 model and performed hopping and running experiments to evaluate the model. The developed robot could accomplish hopping motions only by pelvic movement. The results also established that the difference between the pelvic rotational phase and the oscillation phase of the vertical mass displacement affects the jumping force. In addition, the robot demonstrated the ability to run with a foot placement controller depending on the reference running speed.
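
    For reference, the stance phase of the basic SLIP model mentioned above can be simulated in a few lines: the Python sketch below integrates a point mass attached to a linear leg spring whose foot is fixed at the origin. The mass, stiffness and initial conditions are illustrative textbook-style values and do not represent SLIP2 or the developed robot.

        # Minimal sketch (textbook SLIP stance dynamics): planar spring-loaded
        # inverted pendulum with the foot fixed at the origin.
        import math

        m, k, L0, g = 60.0, 12000.0, 1.0, 9.81      # assumed mass, stiffness, leg length

        def slip_stance(x, z, vx, vz, dt=0.001, t_max=0.4):
            """Euler-integrate stance until the leg unloads while moving upward."""
            t = 0.0
            while t < t_max:
                L = math.hypot(x, z)
                f = k * (L0 - L)                    # spring force along the leg (compression > 0)
                ax = f * (x / L) / m
                az = f * (z / L) / m - g
                vx += ax * dt
                vz += az * dt
                x += vx * dt
                z += vz * dt
                t += dt
                if L >= L0 and vz > 0.0:            # leg back to rest length, moving up: take-off
                    break
            return x, z, vx, vz, t

        # Touch down with the foot ahead of the centre of mass, moving forward and downward.
        print(slip_stance(x=-0.15, z=0.95, vx=2.5, vz=-0.3))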

  12. An Agent Driven Human-centric Interface for Autonomous Mobile Robots

    Science.gov (United States)

    2003-01-01

    An Agent Driven Human-centric Interface for Autonomous Mobile Robots. Donald Sofge, Dennis Perzanowski, Magdalena Bugajska, William Adams ... Keywords: human-centric, multimodal, dynamic autonomy, CoABS Grid, mobile robots. One of the challenges in implementing dynamically ... autonomous behaviors in mobile robots is achieving a truly human-centric interface so that human operators can interact with the robots as naturally as they

  13. Robotic Billiards: Understanding Humans in Order to Counter Them.

    Science.gov (United States)

    Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra

    2016-08-01

    Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.
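
    A minimal sketch of the planning idea described above, value iteration over a turn-taking MDP in which the opponent's response is drawn from a learned model of the human player, is given below. The state space, transition structure, and opponent model are hypothetical placeholders, not the authors' actual formulation.

```python
import numpy as np

def value_iteration(states, actions, robot_T, opponent_policy, opponent_T,
                    reward, gamma=0.95, iters=200):
    """Value iteration for a turn-taking game MDP with an opponent model.

    actions(s)         -> iterable of robot shot choices in state s
    robot_T[s][a]      -> list of (prob, next_state, keeps_turn) outcomes
    opponent_policy[s] -> dict: opponent action -> probability (the human model)
    opponent_T[s][a]   -> list of (prob, next_state) outcomes
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            acts = list(actions(s))
            if not acts:
                V[s] = 0.0                          # terminal state (game over)
                continue
            best = -np.inf
            for a in acts:
                q = 0.0
                for p, s2, keeps_turn in robot_T[s][a]:
                    if keeps_turn:                  # robot pots a ball, shoots again
                        q += p * (reward(s, a, s2) + gamma * V[s2])
                    else:                           # turn passes to the human opponent
                        exp_after = 0.0
                        for oa, op in opponent_policy[s2].items():
                            for p2, s3 in opponent_T[s2][oa]:
                                exp_after += op * p2 * V[s3]
                        q += p * (reward(s, a, s2) + gamma * exp_after)
                best = max(best, q)
            V[s] = best
    return V
```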

  14. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    Science.gov (United States)

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as a part of effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real-time and the real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
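
    One common realization of such a fuzzy integral is the Choquet integral of the per-criterion partial evaluations with respect to fuzzy measures on criterion subsets; the sketch below uses that form with invented criterion names and measure values (the paper's seven gaze criteria, learned measures, and exact choice of integral are not reproduced in this record).

```python
def choquet_integral(scores, mu):
    """Choquet integral of per-criterion partial evaluations.

    scores: dict criterion -> partial evaluation in [0, 1]
    mu:     dict frozenset(criteria) -> fuzzy measure in [0, 1]
            (monotone, mu(empty set) = 0, mu(all criteria) = 1)
    """
    ordered = sorted(scores, key=scores.get)      # ascending partial evaluations
    total, prev = 0.0, 0.0
    for i, c in enumerate(ordered):
        A = frozenset(ordered[i:])                # criteria scoring at least scores[c]
        total += (scores[c] - prev) * mu[A]
        prev = scores[c]
    return total

# Hypothetical example with three criteria (the paper uses seven).
scores = {"saliency": 0.7, "sound_direction": 0.4, "habituation": 0.9}
mu = {frozenset(scores): 1.0,
      frozenset({"saliency", "habituation"}): 0.8,
      frozenset({"habituation"}): 0.5}
print(choquet_integral(scores, mu))
```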

  15. Experience in system design for human-robot teaming in urban search & rescue

    NARCIS (Netherlands)

    Kruijff, G.J.M.; Janíček, M.; Keshavdas, S.; Larochelle, B.; Zender, H.; Smets, N.J.J.M.; Mioch, T.; Neerincx, M.A.; Diggelen, J. van; Colas, F.; Liu, M.; Pomerleau, F.; Svoboda, T.; Petriček, T.; Pirri, F.; Giannni, M.; Papadakis, P.; Sinha, A.; Balmer, P.; Tomatis, N.; WOrst, R.; Linder, T.; Surmann, H.; Tretyakov, V.; Corrao, S.; Pratzler-Wanczura, S.; Sulk, M.

    2012-01-01

    The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search & Rescue. A human-robot team consists of several robots (rovers/UGVs, microcopter/UAVs), several humans at an off-site command post (mission commander, UGV ope

  18. Investigation of the Impedance Characteristic of Human Arm for Development of Robots to Cooperate with Humans

    Science.gov (United States)

    Rahman, Md. Mozasser; Ikeura, Ryojun; Mizutani, Kazuki

    In the near future, many aspects of our lives will be encompassed by tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, etc. will be indispensable. As a result, robots need to be made human-friendly and able to execute tasks in cooperation with humans. Control systems for such robots should be designed to imitate human characteristics. In this study, we have tried to achieve these goals by controlling a simple one degree-of-freedom cooperative robot. First, the impedance characteristic of the human arm in a cooperative task is investigated. Then, this characteristic is implemented to control a robot so that it performs cooperative tasks with humans. A human followed the motion of an object, which was moved along desired trajectories actuated by the linear motor of the one degree-of-freedom robot system. The trajectories used in the experiments were minimum-jerk trajectories (jerk being the rate of change of acceleration), which were found in human-human cooperative tasks and are optimal for muscle movement. As the muscle is mechanically analogous to a spring-damper system, a simple second-order equation is used as a model for the arm dynamics; the model considers mass, stiffness, and damping. The impedance parameters are calculated from the position and force data obtained in the experiments, based on parametric model estimation. The investigated impedance characteristic of the human arm is then implemented to control a robot, which performed a cooperative task with a human. It is observed that the proposed control methodology gives the robot human-like movements when cooperating with a human.
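
    The two standard ingredients named above, a second-order arm impedance model and a minimum-jerk reference trajectory, have the following textbook forms; the fitted parameter values from the experiments are not reproduced in this record.

```latex
% Second-order impedance model of the arm (mass m, damping b, stiffness k):
f(t) = m\,\ddot{x}(t) + b\,\dot{x}(t) + k\,x(t)

% Minimum-jerk point-to-point trajectory from x_0 to x_f over duration T:
x(t) = x_{0} + (x_{f} - x_{0})\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right),
\qquad \tau = t/T .
```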

  19. Everyday robotic action: Lessons from human action control

    Directory of Open Access Journals (Sweden)

    Roy eDe Kleijn

    2014-03-01

    Full Text Available Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally-extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations.

  20. A multimodal emotion detection system during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F; Salichs, Miguel A

    2013-11-14

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.

  1. Integrating verbal and nonverbal communication in a dynamic neural field architecture for human-robot interaction

    Directory of Open Access Journals (Sweden)

    Estela Bicho

    2010-05-01

    Full Text Available How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand the actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human-robot collaboration that exploits this close perception-action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and non-verbal communication skills of the robot in a joint assembly task in which the human-robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution.
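
    Dynamic neural fields of the kind this architecture couples are typically of Amari type; a generic single-field equation is shown below as a sketch, since the specific fields, kernels, and couplings used in the paper are not reproduced in this record.

```latex
% Amari-type dynamic neural field over a feature dimension x:
\tau\,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + h + S(x,t)
    + \int w(x - x')\, f\bigl(u(x',t)\bigr)\, \mathrm{d}x'
% u: field activation, h < 0: resting level, S: external input,
% w: lateral interaction kernel, f: sigmoidal output function.
```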

  2. Peer-to-Peer Human-Robot Interaction for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2004-01-01

    NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.

  3. Durable Tactile Glove for Human or Robot Hand

    Science.gov (United States)

    Butzer, Melissa; Diftler, Myron A.; Huber, Eric

    2010-01-01

    A glove containing force sensors has been built as a prototype of tactile sensor arrays to be worn on human hands and anthropomorphic robot hands. The force sensors of this glove are mounted inside, in protective pockets; as a result of this and other design features, the present glove is more durable than earlier models.

  4. Designing a Social Environment for Human-Robot Cooperation.

    Science.gov (United States)

    Amram, Fred M.

    Noting that work is partly a social activity, and that workers' psychological and emotional needs influence their productivity, this paper explores avenues for improving human-robot cooperation and for enhancing worker satisfaction in the environment of flexible automation. The first section of the paper offers a brief overview of the…

  5. Human-Robot Emergency Response - Experimental Platform and Preliminary Dataset

    Science.gov (United States)

    2014-07-28

    Human detection uses the back-projection and CamShift functions in OpenCV [13]: in each image obtained from the cameras, a back projection of a histogram model of a human is first calculated.
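
    A minimal sketch of that back-projection and CamShift step with OpenCV's Python bindings is shown below; the reference patch, camera index, and initial search window are placeholders, and this is not the platform's actual code.

```python
import cv2

# Histogram model of a human, built once from a reference patch (hue channel).
ref = cv2.imread("human_patch.png")                  # placeholder reference image
ref_hsv = cv2.cvtColor(ref, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([ref_hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (100, 100, 80, 160)                   # initial search window (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

cap = cv2.VideoCapture(0)                            # placeholder camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back projection: per-pixel likelihood of belonging to the human histogram model.
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift shifts/resizes the window toward the densest back-projection region.
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term_crit)
cap.release()
```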

  6. Human Factors and Robotics: Current Status and Future Prospects.

    Science.gov (United States)

    1981-10-01

    In the report's "simbiosis" model of human information processing (the authors acknowledge the misspelling), surveillance consists of monitoring robots both directly and through computer-driven displays.

  7. Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Science.gov (United States)

    2016-09-09

    In the report's scheduling formulation, one constraint includes a term equal to the time before impacting the ship (ETA_m − t − 1) and a big-M term (−M·G_{g,m,t−1}) that disables the constraint for all time steps except the relevant one.

  8. An Unmanned Aerial Vehicle as Human-Assistant Robotics System

    CERN Document Server

    Chingtham, Tejbanta Singh; Ghose, M K; 10.1109/ICCIC.2010.5705731

    2011-01-01

    According to the American Heritage Dictionary [1], robotics is the science or study of the technology associated with the design, fabrication, theory, and application of robots. The term Hoverbot is also often used to refer to sophisticated mechanical devices that are remotely controlled by human beings even though these devices are not autonomous. This paper describes a remotely controlled hoverbot built by installing a transmitter and receiver on both sides, that is, on the control computer (PC) and on the hoverbot, respectively. Data is transmitted as a signal or instruction via an infrastructure network and converted into a command for the hoverbot, which operates at a remote site.

  9. Evolution of Neural Controllers for Robot Navigation in Human Environments

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2010-01-01

    Full Text Available Problem statement: In this study, we presented a novel vision-based learning approach for autonomous robot navigation. Approach: In our method, we convert the captured image into a binary one, which, after partitioning, is used as the input of the neural controller. Results: The neural control system, which maps the visual information to motor commands, is evolved online using real robots. Conclusion/Recommendations: We showed that the evolved neural networks performed well in indoor human environments. Furthermore, we compared the performance of the neural controllers with an algorithmic vision-based control method.

  10. Robot Control Overview: An Industrial Perspective

    Directory of Open Access Journals (Sweden)

    T. Brogårdh

    2009-07-01

    Full Text Available One key competence for robot manufacturers is robot control, defined as all the technologies needed to control the electromechanical system of an industrial robot. By means of modeling, identification, optimization, and model-based control it is possible to reduce robot cost, increase robot performance, and solve requirements from new automation concepts and new application processes. Model-based control, including kinematics error compensation, optimal servo reference- and feed-forward generation, and servo design, tuning, and scheduling, has meant a breakthrough for the use of robots in industry. Relying on this breakthrough, new automation concepts such as high performance multi robot collaboration and human robot collaboration can be introduced. Robot manufacturers can build robots with more compliant components and mechanical structures without losing performance, and robots can also be used in applications with very high performance requirements, e.g., in assembly, machining, and laser cutting. In the future it is expected that the importance of sensor control will increase, both with respect to sensors in the robot structure to increase the control performance of the robot itself and sensors outside the robot related to the applications and the automation systems. In this connection sensor fusion and learning functionalities will be needed together with the robot control for easy and intuitive installation, programming, and maintenance of industrial robots.

  11. Human-Robot Interaction Literature Review

    Science.gov (United States)

    2012-03-01

    This literature review covers 53 works and aims to give an overview of current HRI research concerning the use of UVs that are ... Toronto CR2012- ; Defence R&D Canada – Toronto; March 2012. Mandate: this literature review was carried out on behalf of DRDC Toronto ... the human-robot interaction (HRI) that occurs when an operator commands a UV must be improved. Purpose: this literature review aims to present

  12. Mines and human casualties: a robotics approach toward mine clearing

    Science.gov (United States)

    Ghaffari, Masoud; Manthena, Dinesh; Ghaffari, Alireza; Hall, Ernest L.

    2004-10-01

    An estimated 100 million landmines planted in more than 60 countries kill or maim thousands of civilians every year. Millions of people live in these vast dangerous areas and cannot access basic human services because of the threat of landmines. This problem has affected many third-world countries and poor nations that cannot afford high-cost solutions. This paper presents some experiences with landmine victims and solutions for mine clearing. It studies the current state of this crisis as well as state-of-the-art robotics technology for mine clearing. It also introduces a survey robot suitable for mine-clearing applications. The results show that, in addition to its technical aspects, this problem has many socio-economic dimensions. The significance of this study is to draw robotics researchers toward this topic and to pursue its technical and humanitarian facets.

  13. Human-Robot Interface over the Web Based Intelligent System

    Directory of Open Access Journals (Sweden)

    Desa Hazry

    2006-01-01

    Full Text Available This research extends the capability of the new technology platform of the Remote Data Inspection System (RDIS) server from Furukawa Co., Ltd., enabling the integration of standard Hypertext Markup Language (HTML) programming and RDIS tag programming to create a user-friendly, "point-and-click" web-based control mechanism. The integration allows users to send commands to a mobile robot over the Internet. Web-based control enables humans to extend their actions and intelligence to remote locations. Three web-based control mechanisms are developed: manual remote control, continuous operation events, and autonomous navigational control. In manual remote control, the user is fully responsible for the robot's actions and the robot does not use any sophisticated algorithms. The continuous operation event is an extension of the basic movement of the manual remote control mechanism. In autonomous navigation control, the user has more flexibility to tell the robot to carry out specific tasks. Using this method, the mobile robot can be controlled via the web from any place connected to the network, without constructing specific communication infrastructure.

  14. Artificial companions: empathy and vulnerability mirroring in human-robot relations

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends (among other things) on the robot’s capacity to be a recipient of human empathy, and tha

  15. How humans behave and evaluate a social robot in real-environment settings

    NARCIS (Netherlands)

    Niculescu, A.I.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; See, Swan Lan; Li, Haizhou; Brinkman, W.P.; Neerincx, M.

    Behavioral analysis has proven to be an important method to study human-robot interaction in real-life environments providing highly relevant insights for developing new theoretical and practical models of appropriate social robot design. In this paper we describe our approach to study human-robot

  16. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device that makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  17. Human Systems Engineering: A Leadership Model for Collaboration and Change.

    Science.gov (United States)

    Clark, Karen L.

    Human systems engineering (HSE) was created to introduce a new way of viewing collaboration. HSE emphasizes the role of leaders who welcome risk, commit to achieving positive change, and help others achieve change. The principles of HSE and its successful application to the collaborative process were illustrated through a case study representing a…

  18. Trust and Trustworthiness in Human-Robot Interaction: A Formal Conceptualization

    Science.gov (United States)

    2016-05-11

  19. "It's not my fault!": Investigating the effects of the deceptive behaviour of a humanoid robot

    NARCIS (Netherlands)

    Wijnen, L.; Coenen, J.; Grzyb, B.J.

    2017-01-01

    We investigated the effects of the deceptive behaviour of a robot, hypothesising that a lying robot would be perceived as more intelligent and human-like, but less trustworthy, than a non-lying robot. The participants engaged in a collaborative task with the non-lying and lying humanoid robot NAO.

  20. Human-Robot Interaction Reconfigurable Test Environment: Optimizing the Human Interface Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Human-Robot Interaction Reconfigurable Test Environment (HRI-RTE) integrates a grid-based, reconfigurable test arena and an operator workstation with...

  1. Middleware Design for Swarm-Driving Robots Accompanying Humans

    Science.gov (United States)

    Kim, Min Su; Kim, Sang Hyuck; Kang, Soon Ju

    2017-01-01

    Robots that accompany humans are an active area of research. The Pet-Bot provides walking-assistance and object-carrying services without any specific controls, through real-time interaction between the robot and the human. However, Pet-Bot limits the number of robots a user can employ; overcoming this limit would let it provide services in more areas. Therefore, in this study, we propose a swarm-driving middleware design that adopts the concept of a swarm, providing effective parallel movement so that multiple human-accompanying robots can accomplish a common purpose. The middleware's functions are divided into three parts: a sequence manager for the swarm process, a messaging manager, and a relative-location identification manager. The middleware sequences the swarm process of the robots through message exchange over radio frequency (RF) communication using the IEEE 802.15.4 MAC protocol, and it manages an infrared (IR) communication module that identifies relative location from IR signal strength. The swarm in this study is composed of a master, which interacts with the user, and slaves, which have no interaction with the user. This composition is intended to control the overall swarm in synchronization with the user's activity, which is difficult to predict. We evaluate the accuracy of the relative-location estimation using IR communication, the response time of the slaves to a change in user activity, and the time to organize a network as a function of the number of slaves. PMID:28218650
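
    The record does not give the relative-location model itself, so the sketch below is an assumption: a log-distance path-loss estimate of distance from IR signal strength plus a coarse bearing from the strongest directional IR sensor, with placeholder calibration constants.

```python
def ir_distance_estimate(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Estimate relative distance (m) from received IR signal strength.

    Log-distance model: rssi = rssi_at_1m - 10 * n * log10(d).
    rssi_at_1m and path_loss_exp would come from calibration in practice.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def relative_bearing(strengths_by_emitter):
    """Pick the sensor direction with the strongest reading as a coarse bearing.

    strengths_by_emitter: dict mapping sensor angle (degrees) -> signal strength.
    """
    return max(strengths_by_emitter, key=strengths_by_emitter.get)

# Hypothetical usage: a slave hears the master at -52 dBm, strongest on the 90-degree sensor.
print(ir_distance_estimate(-52.0),
      relative_bearing({0: 0.2, 90: 0.9, 180: 0.1, 270: 0.3}))
```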

  2. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able to coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment where humans potentially want to interact with it. The designed system consists...... system that uses a potential field to derive motion that respects the person's social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person

  3. Collaborative Assembly Operation between Two Modular Robots Based on the Optical Position Feedback

    Directory of Open Access Journals (Sweden)

    Liying Su

    2009-01-01

    Full Text Available This paper studies the cooperation between two master-slave modular robots. A cooperative robot system is set up with two modular robots and a dynamic optical measurement device, the Optotrak. With the Optotrak, the positions of the end effectors are measured as optical position feedback, which is used to adjust the robots' end positions. A three-layered motion controller is designed for the two cooperative robots. The RMRC (resolved motion rate control) method is adopted to adjust the master robot to the desired position. With the kinematic constraints of the two robots, including position and pose, joint velocity, and acceleration constraints, the two robots can cooperate well. A bolt-and-nut assembly experiment is executed to verify the methods.
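
    RMRC maps a desired end-effector velocity to joint velocities through the Jacobian pseudoinverse; the sketch below shows that generic update, not the paper's implementation (the Jacobian callable, gain, and time step are placeholders).

```python
import numpy as np

def rmrc_step(q, x_desired, x_current, jacobian, dt=0.01, k_p=1.0):
    """One resolved-motion-rate-control update toward a desired end-effector pose.

    q:         current joint angles, shape (n,)
    x_desired: desired end-effector position/pose vector, shape (m,)
    x_current: current end-effector position/pose vector, shape (m,)
    jacobian:  callable q -> (m, n) Jacobian matrix
    """
    error = x_desired - x_current          # task-space error
    x_dot = k_p * error                    # proportional task-space velocity command
    J = jacobian(q)
    q_dot = np.linalg.pinv(J) @ x_dot      # joint velocities via the pseudoinverse
    return q + q_dot * dt                  # integrate one control step
```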

  4. Advanced Technologies for Robotic Exploration Leading to Human Exploration: Results from the SpaceOps 2015 Workshop

    Science.gov (United States)

    Lupisella, Mark L.; Mueller, Thomas

    2016-01-01

    This paper will provide a summary and analysis of the SpaceOps 2015 Workshop all-day session on "Advanced Technologies for Robotic Exploration, Leading to Human Exploration", held at Fucino Space Center, Italy on June 12th, 2015. The session was primarily intended to explore how robotic missions and robotics technologies more generally can help lead to human exploration missions. The session included a wide range of presentations that were roughly grouped into (1) broader background, conceptual, and high-level operations concepts presentations such as the International Space Exploration Coordination Group Roadmap, followed by (2) more detailed narrower presentations such as rover autonomy and communications. The broader presentations helped to provide context and specific technical hooks, and helped lay a foundation for the narrower presentations on more specific challenges and technologies, as well as for the discussion that followed. The discussion that followed the presentations touched on key questions, themes, actions and potential international collaboration opportunities. Some of the themes that were touched on were (1) multi-agent systems, (2) decentralized command and control, (3) autonomy, (4) low-latency teleoperations, (5) science operations, (6) communications, (7) technology pull vs. technology push, and (8) the roles and challenges of operations in early human architecture and mission concept formulation. A number of potential action items resulted from the workshop session, including: (1) using CCSDS as a further collaboration mechanism for human mission operations, (2) making further contact with subject matter experts, (3) initiating informal collaborative efforts to allow for rapid and efficient implementation, and (4) exploring how SpaceOps can support collaboration and information exchange with human exploration efforts. This paper will summarize the session and provide an overview of the above subjects as they emerged from the SpaceOps 2015

  5. Automation and Robotics for Human Mars Exploration (AROMA)

    Science.gov (United States)

    Hofmann, Peter; von Richter, Andreas

    2003-01-01

    Automation and Robotics (A&R) systems are a key technology for Mars exploration. All over the world, initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. From December 2000 to February 2002, Kayser-Threde GmbH, Munich, Germany led a study called AROMA (Automation and Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals of this effort is to initiate new developments and to maintain the competitiveness of European industry within this field. © 2003 Published by Elsevier Science Ltd.

  6. Integrated Network Architecture for Sustained Human and Robotic Exploration

    Science.gov (United States)

    Noreen, Gary; Cesarone, Robert; Deutsch, Leslie; Edwards, Charles; Soloff, Jason; Ely, Todd; Cook, Brian; Morabito, David; Hemmati, Hamid; Piazolla, Sabino; hide

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Exploration Systems Enterprise is planning a series of human and robotic missions to the Earth's moon and to Mars. These missions will require communication and navigation services. This paper sets forth presumed requirements for such services and concepts for lunar and Mars telecommunications network architectures to satisfy the presumed requirements. The paper suggests that an inexpensive ground network would suffice for missions to the near-side of the moon. A constellation of three Lunar Telecommunications Orbiters connected to an inexpensive ground network could provide continuous redundant links to a polar lunar base and its vicinity. For human and robotic missions to Mars, a pair of areostationary satellites could provide continuous redundant links between Earth and a mid-latitude Mars base in conjunction with the Deep Space Network augmented by large arrays of 12-m antennas on Earth.

  7. Movement Coordination in Human-Robot Teams: A Dynamical Systems Approach

    OpenAIRE

    Iqbal, Tariq; Rack, Samantha; Riek, Laurel D.

    2016-01-01

    In order to be effective teammates, robots need to be able to understand high-level human behavior to recognize, anticipate, and adapt to human motion. We have designed a new approach to enable robots to perceive human group motion in real-time, anticipate future actions, and synthesize their own motion accordingly. We explore this within the context of joint action, where humans and robots move together synchronously. In this paper, we present an anticipation method which takes high-level gr...

  8. A Dual Launch Robotic and Human Lunar Mission Architecture

    Science.gov (United States)

    Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David

    2010-01-01

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. This

  9. Ontological Reasoning for Human-Robot Teaming in Search and Rescue Missions

    NARCIS (Netherlands)

    Bagosi, T.; Hindriks, k.V.; Neerincx, M.A.

    2016-01-01

    In search and rescue missions robots are used to help rescue workers in exploring the disaster site. Our research focuses on how multiple robots and rescuers act as a team, and build up situation awareness. We propose a multi-agent system where each agent supports one member, either human or robot.

  10. [Robotics].

    Science.gov (United States)

    Bier, J

    2000-05-01

    This paper covers the current state of the art of robots in surgery and the ongoing work in the field of surgical robotics at the Clinic for Maxillofacial Surgery at the Charité. Robots in surgery allow the surgeon to translate the accuracy of the imaging systems directly into the intervention and to plan an intervention beforehand. The state of the art is described first; subsequently, the scientific work at the clinic is described in detail. The paper closes with an outlook on future applications of robotic systems in maxillofacial surgery.

  11. Human-robot trust. Is motion fluency an effective behavioral style for regulating robot trustworthiness?

    NARCIS (Netherlands)

    Ligthart, M.; Brule, R. van den; Haselager, W.F.G.

    2013-01-01

    Finding good behavioral styles to express robot trustworthiness will optimize the usage of robots. In previous research, motion fluency as behavioral style was studied. Smooth robot motions were compared with trembling robot motions. In a video experiment an effect of motion fluency on trust was fou

  12. The relation between people's attitudes and anxiety towards robots in human-robot interaction

    NARCIS (Netherlands)

    Graaf, de M.M.A.; Ben Allouch, S.

    2013-01-01

    This paper examines the relation between an interaction with a robot and people's attitudes and emotions towards robots. In our study, participants had an acquaintance talk with a social robot, and both their general attitude and their anxiety towards social robots were measured before and after the in

  13. Robotics for recombinant DNA and human genetics research

    Energy Technology Data Exchange (ETDEWEB)

    Beugelsdijk, T.J.

    1990-01-01

    In October of 1989, molecular biologists throughout the world formally embarked on ultimately determining the set of genetic instructions for a human being. Called by some the "Manhattan Project" of molecular biology, pursuit of this goal is projected to require approximately 3000 man-years of effort over a 15-year period. The Human Genome Initiative is a worldwide research effort with the goal of analyzing the structure of human deoxyribonucleic acid (DNA) and determining the location of all human genes. The Department of Energy (DOE) has designated three of its national laboratories as centers for the Human Genome Project: Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Lawrence Berkeley Laboratory (LBL). These laboratories are currently working on different, but complementary, technology development areas in support of the Human Genome Project. The robotics group at LANL is currently developing technologies that address the problems associated with physical mapping. This article describes some of these problems and discusses some of the robotics approaches and engineering tools applicable to their solution. 7 refs., 8 figs., 1 tab.

  14. Path Planning and Trajectory Control of Collaborative Mobile Robots Using Hybrid Control Architecture

    Directory of Open Access Journals (Sweden)

    Trevor Davies

    2008-08-01

    Full Text Available This paper presents the development and implementation of a hybrid control architecture to direct a collective of three X80 mobile robots to multiple user-defined waypoints. The Genetic Algorithm Path Planner created a path plan for each robot in the collective, optimized to reduce the time to complete the task, such that each waypoint was visited once without colliding with a priori obstacles. The deliberative Genetic Algorithm Path Planner was then coupled with a reactive Potential Field Trajectory Planner and a kinematics-based controller to create a hybrid control architecture allowing the mobile robots to navigate between multiple user-defined waypoints while avoiding both a priori obstacles and obstacles detected using the robots' range sensors. The success of this hybrid control architecture was proven through simulation and experimentation using three of Dr. Robot's™ wireless X80 mobile robots.
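
    The reactive layer's potential-field idea can be sketched as an attractive pull toward the next waypoint plus repulsive pushes from nearby obstacles; the gains and influence radius below are illustrative placeholders rather than the paper's tuned values.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         d0=1.0, step=0.05):
    """One reactive step of a potential-field trajectory planner.

    pos, goal: 2-D numpy positions; obstacles: list of 2-D obstacle positions.
    d0: obstacle influence radius; step: step size along the resulting force.
    """
    force = k_att * (goal - pos)                       # attractive force toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                              # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return pos if norm < 1e-6 else pos + step * force / norm

# Hypothetical usage: move from (0, 0) toward (5, 5) around one obstacle.
p = np.array([0.0, 0.0])
for _ in range(200):
    p = potential_field_step(p, np.array([5.0, 5.0]), [np.array([2.5, 2.5])])
```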

  15. Human factors assessments of innovative technologies: Robotics sector

    Energy Technology Data Exchange (ETDEWEB)

    Moran, J.B. [Operating Engineers National Hazmat Program, Beaver, WV (United States)

    1997-12-01

    The U.S. Department of Energy (DOE) has funded major environmental technology developments over the past several years. One area that received significant attention is robotics, which has resulted in the development of a wide range of unique robotic systems tailored to the many tasks unique to the DOE complex. These systems are often used in highly hazardous environments, which reduces or eliminates worker exposures. The DOE, concurrent with the technology development initiative, also established and funded a 5-yr cooperative agreement intended to interface with the technology development community, with specific attention to the occupational safety and health aspects associated with individual technologies through human factors and hazard assessments. This program is now in its third year.

  16. Robotic magnetic navigation for ablation of human arrhythmias

    Science.gov (United States)

    Da Costa, Antoine; Guichard, Jean Baptiste; Roméyer-Bouchard, Cécile; Gerbay, Antoine; Isaaz, Karl

    2016-01-01

    Radiofrequency treatment represents the first choice of treatment for arrhythmias, in particular complex arrhythmias and especially atrial fibrillation, due to the greater benefit/risk ratio compared to antiarrhythmic drugs. However, complex arrhythmias such as atrial fibrillation require long procedures with additional risks such as X-ray exposure or serious complications such as tamponade. Given this context, the treatment of arrhythmias using robotic magnetic navigation entails a technique well suited to complex arrhythmias on account of its efficacy, reliability, significant reduction in X-ray exposure for both patient and operator, as well as a very low risk of perforation. As ongoing developments will likely improve results and procedure times, this technology will become one of the most modern technologies for treating arrhythmias. Based on the literature, this review summarizes the advantages and limitations of robotic magnetic navigation for ablation of human arrhythmias. PMID:27698569

  17. Human-Tracking Strategies for a Six-legged Rescue Robot Based on Distance and View

    Institute of Scientific and Technical Information of China (English)

    PAN Yang; GAO Feng; QI Chenkun; CHAI Xun

    2016-01-01

    Human tracking is an important issue for intelligent robotic control and can be used in many scenarios, such as robotic services and human-robot cooperation. Most current human-tracking methods are targeted at mobile/tracked robots, and few of them can be used for legged robots. Two novel human-tracking strategies, the view priority strategy and the distance priority strategy, are proposed specifically for legged robots, enabling them to track humans in various complex terrains. The view priority strategy gives priority to keeping the human within the robot's view-angle range, while its counterpart, the distance priority strategy, gives priority to keeping the human at a reasonable distance. To evaluate these strategies, two indexes (average and minimum tracking capability) are defined. In terms of these indexes, the view priority strategy shows advantages over the distance priority strategy. An optimization with respect to these indexes gives the robot maximum tracking capability. Simulation results show that the robot can track humans along different paths such as square, circular, sine, and screw curves. The two proposed control strategies specifically account for legged-robot characteristics and solve human-tracking problems more efficiently in rescue circumstances.

  18. A Study of an EMG-Based Exoskeletal Robot for Human Shoulder Motion Support

    Science.gov (United States)

    Kiguchi, Kazuo; Iwami, Koya; Watanabe, Keigo; Fukuda, Toshio

    We have been developing exoskeletal robots in order to realize human motion support (especially for physically weak people). In this paper, we propose a 2-DOF exoskeletal robot and its control method to support human shoulder motion. In this exoskeletal robot, the flexion-extension and abduction-adduction motions of the shoulder are supported by activating the arm holder of the robot, which is attached to the upper arm of the human subject, using wires driven by DC motors. A fuzzy-neuro controller is designed to control the robot according to skin-surface electromyogram (EMG) signals, in which the intention of the human subject is reflected. The proposed controller controls the flexion-extension and abduction-adduction motion of the human subject. The effectiveness of the proposed exoskeletal robot has been evaluated experimentally.

  19. Anticipating Human Activities Using Object Affordances for Reactive Robotic Response.

    Science.gov (United States)

    Koppula, Hema S; Saxena, Ashutosh

    2016-01-01

    An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities a human will do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. The challenge, however, is two-fold: We need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In extensive evaluation on the CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of the top three predictions actually happened) of 84.1, 74.4 and 62.2 percent for an anticipation time of 1, 3 and 10 seconds, respectively. Finally, we also show a robot using our algorithm for performing a few reactive responses.
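
    The particle idea described above, where each particle is one sampled future weighted by how well it explains the current observation, can be sketched generically as below; the sampling and scoring functions and the activity labels are placeholders, not the authors' ATCRF model.

```python
from collections import Counter

def anticipate(observation, sample_future, score_future, n_particles=100, top_k=3):
    """Particle-based anticipation of the next human activity.

    sample_future(observation)        -> (activity_label, trajectory) sampled future
    score_future(observation, future) -> unnormalized likelihood weight
    """
    particles, weights = [], []
    for _ in range(n_particles):
        future = sample_future(observation)        # one possible future (one "particle")
        particles.append(future)
        weights.append(score_future(observation, future))
    total = sum(weights) or 1.0
    # Aggregate normalized particle weights per anticipated activity label.
    scores = Counter()
    for (label, _), w in zip(particles, weights):
        scores[label] += w / total
    return scores.most_common(top_k)               # top-k anticipated activities
```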

  20. Behavioral Entropy in Human-Robot Interaction

    Science.gov (United States)

    2004-01-01

    Erwin R. Boer Consulting (San Diego, CA, USA) and Brigham Young University Computer Science Department (Provo, UT, USA). The ability to quickly and accurately measure ... interaction. To paraphrase Wiener, people work to reduce entropy, so skilled behavior minimizes entropy. This manifests itself in human behavior

  1. Modeling Mixed Groups of Humans and Robots with Reflexive Game Theory

    Science.gov (United States)

    Tarasenko, Sergey

    The Reflexive Game Theory (RGT) is based on decision-making principles similar to the ones used by humans. This theory considers groups of subjects and allows one to predict which action from a given set each subject in the group will choose. It is also possible to influence a subject's decision so that he or she makes a particular choice. The purpose of this study is to illustrate how robots can deter humans from risky actions. To identify risky actions, Asimov's Three Laws of Robotics are employed. By fusing RGT's power to convince humans on the mental level with the safety of Asimov's Laws, we illustrate how robots in mixed groups of humans and robots can influence human subjects in order to deter them from risky actions. We suggest that this fusion has the potential to equip robots that move and look like humans with human-like decision-making algorithms.

  2. Motor contagion during human-human and human-robot interaction.

    Directory of Open Access Journals (Sweden)

    Ambra Bisio

    Full Text Available Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, both in the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  3. Motor contagion during human-human and human-robot interaction.

    Science.gov (United States)

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, both in the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  4. Action and language integration: from humans to cognitive robots.

    Science.gov (United States)

    Borghi, Anna M; Cangelosi, Angelo

    2014-07-01

    The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also of natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction.

  5. Proactive human-computer collaboration for information discovery

    Science.gov (United States)

    DiBona, Phil; Shilliday, Andrew; Barry, Kevin

    2016-05-01

    Lockheed Martin Advanced Technology Laboratories (LM ATL) is researching methods, representations, and processes for human/autonomy collaboration to scale analysis and hypothesis substantiation for intelligence analysts. This research establishes a machine-readable hypothesis representation that is commonsensical to the human analyst. The representation unifies context between the human and the computer, enabling autonomy, in the form of analytic software, to support the analyst by proactively acquiring, assessing, and organizing high-value information needed to inform and substantiate hypotheses.

  6. The roles of humans and robots in exploring the solar system

    Science.gov (United States)

    Mendell, W. W.

    2004-07-01

    Historically, advocates of solar system exploration have disagreed over whether program goals could be entirely satisfied by robotic missions. Scientists tend to argue that robotic exploration is most cost-effective. However, the human space program has a great deal of support in the general public, thereby enabling the scientific element of exploration to be larger than it might be as a stand-alone activity. A comprehensive strategy of exploration needs a strong robotic component complementing and supporting human missions. Robots are needed for precursor missions, for crew support on planetary surfaces, and for probing dangerous environments. Robotic field assistants can provide mobility, access to scientific sites, data acquisition, visualization of the environment, precision operations, sample acquisition and analysis, and expertise to human explorers. As long as space exploration depends on public funds, space exploration must include an appropriate mix of human and robotic activity.

  7. Reflex control of robotic gait using human walking data.

    Directory of Open Access Journals (Sweden)

    Catherine A Macleod

    Full Text Available Control of human walking is not thoroughly understood, which has implications in developing suitable strategies for the retraining of a functional gait following neurological injuries such as spinal cord injury (SCI. Bipedal robots allow us to investigate simple elements of the complex nervous system to quantify their contribution to motor control. RunBot is a bipedal robot which operates through reflexes without using central pattern generators or trajectory planning algorithms. Ground contact information from the feet is used to activate motors in the legs, generating a gait cycle visually similar to that of humans. Rather than developing a more complicated biologically realistic neural system to control the robot's stepping, we have instead further simplified our model by measuring the correlation between heel contact and leg muscle activity (EMG in human subjects during walking and from this data created filter functions transferring the sensory data into motor actions. Adaptive filtering was used to identify the unknown transfer functions which translate the contact information into muscle activation signals. Our results show a causal relationship between ground contact information from the heel and EMG, which allows us to create a minimal, linear, analogue control system for controlling walking. The derived transfer functions were applied to RunBot II as a proof of concept. The gait cycle produced was stable and controlled, which is a positive indication that the transfer functions have potential for use in the control of assistive devices for the retraining of an efficient and effective gait with potential applications in SCI rehabilitation.
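
    The identification step described above, fitting a linear filter that maps the heel-contact signal to muscle activation, can be sketched with a standard LMS adaptive filter; the filter length and step size are illustrative, and this is not the authors' actual adaptive-filtering code.

```python
import numpy as np

def lms_identify(heel_contact, emg, n_taps=50, mu=0.01):
    """Identify a linear FIR transfer function from heel contact to EMG via LMS.

    heel_contact: 1-D numpy array of ground-contact signal samples (input)
    emg:          1-D numpy array of rectified/smoothed EMG samples (desired output)
    Returns the learned filter taps w, so emg[k] is approximated by the filtered
    recent history of heel_contact.
    """
    w = np.zeros(n_taps)
    for k in range(n_taps, len(heel_contact)):
        x = heel_contact[k - n_taps:k][::-1]   # most recent samples first
        y_hat = w @ x                          # filter prediction of muscle activity
        e = emg[k] - y_hat                     # prediction error
        w += 2 * mu * e * x                    # LMS weight update
    return w
```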

  8. Study on a human guidance method for autonomous cruise of indoor robot

    Science.gov (United States)

    Jia, Bao-Zhi; Zhu, Ming

    2011-12-01

    This paper describes a method of human guidance for autonomous cruise of indoor robot. A low-cost robot follows a person in a room and notes the path for autonomous cruise using its monocular vision. A method of video-based object detection and tracking is taken to detect the target by the video received from the robot's camera. The validity of the human guidance method is proved by the experiment.

  9. Human Robotic Swarm Interaction Using An Artificial Physics Approach (Briefing Charts)

    Science.gov (United States)

    2014-12-01

    Briefing charts by LT Brenton Campbell (advisors: Asst. Professor Dr. Timothy Chung and a senior lecturer), dated December 2014. Only report-documentation fields and a reference fragment ("Distributed, physics-based control of swarms of vehicles," Autonomous Robots, pp. 137-162, 2004) are recoverable from the extracted text; no abstract is available.

  10. Becoming Earth Independent: Human-Automation-Robotics Integration Challenges for Future Space Exploration

    Science.gov (United States)

    Marquez, Jessica J.

    2016-01-01

    Future exploration missions will require NASA to integrate more automation and robotics in order to accomplish mission objectives. This presentation will describe on the future challenges facing the human operator (astronaut, ground controllers) as we increase the amount of automation and robotics in spaceflight operations. It will describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. This presentation will outline future human-automation-robotic integration challenges.

  11. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    Science.gov (United States)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  12. Glazed panel construction with human-robot cooperation

    CERN Document Server

    Lee, Seungyeol

    2011-01-01

    These days, construction companies are beginning to be concerned about a potential labor shortage by demographic changes and an aging construction work force. Also, an improvement in construction safety could not only reduce accidents but also decrease the cost of the construction, and is therefore one of the imperative goals of the construction industry. These challenges correspond to the potential for Automation and Robotics in Construction as one of solutions. Almost half of construction work is said to be material handling and materials used for construction are heavy and bulky for humans.

  13. Mini AERCam Inspection Robot for Human Space Missions

    Science.gov (United States)

    Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.

    2004-01-01

    The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.

  14. Teleoperation of a robot manipulator from 3D human hand-arm motion

    Science.gov (United States)

    Kofman, Jonathan; Verma, Siddharth; Wu, Xianghai; Luu, Timothy

    2003-10-01

    The control of a robot manipulator by a human operator is often necessary in unstructured dynamic environments with unfamiliar objects. Remote teleoperation is required when human presence at the robot site is undesirable or difficult, such as in handling hazardous materials and operating in dangerous or inaccessible environments. Previous approaches have employed mechanical or other contacting interfaces which require unnatural motions for object manipulation tasks or hinder dexterous human motion. This paper presents a non-contacting method of teleoperating a robot manipulator by having the human operator perform the 3D human hand-arm motion that would naturally be used to complete an object manipulation task and tracking the motion with a stereo-camera system at a local site. The 3D human hand-arm motion is reconstructed at the remote robot site and is used to control the position and orientation of the robot manipulator end-effector in real-time. Images captured of the robot interacting with objects at the remote site provide visual feedback to the human operator. Tests in teleoperation of the robot manipulator have demonstrated the ability of the human to carry out object manipulation tasks remotely and of the teleoperated robot manipulator system to copy human-arm motions in real-time.

  15. Human Collaborative Localization and Mapping in Indoor Environments with Non-Continuous Stereo

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    A new approach to the monocular simultaneous localization and mapping (SLAM) problem is presented in this work. Data obtained from additional bearing-only sensors deployed as wearable devices is fully fused into an Extended Kalman Filter (EKF). The wearable device is introduced in the context of a collaborative task within a human-robot interaction (HRI) paradigm, including the SLAM problem. Thus, based on the delayed inverse-depth feature initialization (DI-D) SLAM, data from the camera deployed on the human, capturing his/her field of view, is used to enhance the depth estimation of the robotic monocular sensor which maps and locates the device. The occurrence of overlapping between the views of both cameras is predicted through geometrical modelling, activating a pseudo-stereo methodology which allows to instantly measure the depth by stochastic triangulation of matched points found through SIFT/SURF. Experimental validation is provided through results from experiments, where real data is captured as synchronized sequences of video and other data (relative pose of secondary camera) and processed off-line. The sequences capture indoor trajectories representing the main challenges for a monocular SLAM approach, namely, singular trajectories and close turns with high angular velocities with respect to linear velocities. PMID:26927100
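
    The pseudo-stereo measurement mentioned above reduces to triangulating matched image points from two calibrated views. The sketch below uses a standard linear (DLT) triangulation; the intrinsics, baseline and matched coordinates are invented for illustration and are not those of the robot/wearable camera pair in the paper.

        import numpy as np

        def triangulate_dlt(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point seen by two cameras.

            P1, P2 : 3x4 projection matrices (robot camera, wearable camera)
            x1, x2 : matched pixel coordinates (u, v) in each image
            Returns the 3-D point in the common world frame."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

        # Hypothetical cameras: identical intrinsics, second camera offset 0.5 m
        # along x (a "baseline" created by the human-worn camera).
        K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])

        X_true = np.array([0.2, -0.1, 3.0])                 # a point 3 m ahead
        x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
        x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
        print(triangulate_dlt(P1, P2, x1, x2))              # ~ [0.2, -0.1, 3.0]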

  16. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  17. Exploring cultural factors in human-robot interaction: A matter of personality?

    NARCIS (Netherlands)

    Weiss, Astrid; Evers, Vanessa

    2011-01-01

    This paper proposes an experimental study to investigate task-dependence and cultural-background dependence of the personality trait attribution on humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits towards intelligent a

  19. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    Science.gov (United States)

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  20. Expert System Design Aid for Applications of Human Factors in Robotics.

    Science.gov (United States)

    1986-06-12

    Only reference fragments are recoverable from the extracted text, e.g.: Parsons, H.M., ERGONOMIE ET ROBOTIQUE (Human Factors and Robotics), paper presented at The International Conference on Occupational ..., Publishers B.V., Amsterdam, 1984; no abstract is available.

  1. Moving NASA Beyond Low Earth Orbit: Future Human-Automation-Robotic Integration Challenges

    Science.gov (United States)

    Marquez, Jessica

    2016-01-01

    This presentation will provide an overview of current human spaceflight operations. It will also describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. Additionally, there are many implications regarding advanced automation and robotics, and this presentation will outline future human-automation-robotic integration challenges.

  2. Training Humanities Doctoral Students in Collaborative and Digital Multimedia

    Science.gov (United States)

    Ensslin, Astrid; Slocombe, Will

    2012-01-01

    This study reports on the pedagogic rationale, didactic design and implications of an AHRC-funded doctoral training scheme in collaborative and digital multimedia in the humanities. In the second part of this article we discuss three areas of provision that were identified as particularly significant and/or controversial. These include (1) desktop…

  3. Collaboration in the Humanities, Arts and Social Sciences in Australia

    Science.gov (United States)

    Haddow, Gaby; Xia, Jianhong; Willson, Michele

    2017-01-01

    This paper reports on the first large-scale quantitative investigation into collaboration, demonstrated in co-authorship, by Australian humanities, arts and social sciences (HASS) researchers. Web of Science data were extracted for Australian HASS publications, with a focus on the softer social sciences, over the period 2004-2013. The findings…

  4. INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2006-04-01

    This report describes the results of the 2nd year of a research project on the implementation of a novel human-robot control system for hydraulic machinery. Sensor and valve re-calibration experiments were conducted to improve open loop machine control. A Cartesian control example was tested both in simulation and on the machine; the results are discussed in detail. The machine tests included open-loop as well as closed-loop motion control. Both methods worked reasonably well, due to the high-quality electro-hydraulic valves used on the experimental machine. Experiments on 3-D analysis of the bucket trajectory using marker tracking software are also presented with the results obtained. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the machine. A novel, closed-loop control adjustment provides a remedy, while retaining much of the advantages of the open-loop control based on kinematics transformation. Additional analysis of previously recorded, three-dimensional working trajectories of the bucket of large mine shovels was completed. The motion patterns, when transformed into a family of curves, serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system.
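
    The pairing of open-loop kinematics transformation with a closed-loop correction can be pictured with a generic resolved-rate sketch on a planar two-link linkage; the link lengths, gains and trajectory below are arbitrary and do not model the hydraulic machine used in this project.

        import numpy as np

        L1, L2 = 2.0, 1.5                       # hypothetical link lengths (m)

        def fk(q):
            """Forward kinematics of a planar 2-link arm (tool tip position)."""
            x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
            y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def track(q0, path, dt=0.02, kp=2.0):
            """Open-loop kinematics transformation (J^-1 * desired velocity)
            with a proportional closed-loop term that cancels drift."""
            q = np.array(q0, dtype=float)
            for x_des, v_des in path:
                err = x_des - fk(q)                         # Cartesian drift
                v_cmd = v_des + kp * err                    # closed-loop adjustment
                q += np.linalg.solve(jacobian(q), v_cmd) * dt
            return q

        # Hypothetical straight-line tool trajectory moving upward at 0.5 m/s.
        xs = [np.array([2.5, 0.5 + 0.01 * k]) for k in range(100)]
        path = [(x, np.array([0.0, 0.5])) for x in xs]
        print("final tip position:", fk(track([0.3, 0.8], path)))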

  5. Strategies for human-driven robot comprehension of spatial descriptions by older adults in a robot fetch task.

    Science.gov (United States)

    Carlson, Laura; Skubic, Marjorie; Miller, Jared; Huo, Zhiyu; Alexenko, Tatiana

    2014-07-01

    This contribution presents a corpus of spatial descriptions and describes the development of a human-driven spatial language robot system for their comprehension. The domain of application is an eldercare setting in which an assistive robot is asked to "fetch" an object for an elderly resident based on a natural language spatial description given by the resident. In Part One, we describe a corpus of naturally occurring descriptions elicited from a group of older adults within a virtual 3D home that simulates the eldercare setting. We contrast descriptions elicited when participants offered descriptions to a human versus robot avatar, and under instructions to tell the addressee how to find the target versus where the target is. We summarize the key features of the spatial descriptions, including their dynamic versus static nature and the perspective adopted by the speaker. In Part Two, we discuss critical cognitive and perceptual processing capabilities necessary for the robot to establish a common ground with the human user and perform the "fetch" task. Based on the collected corpus, we focus here on resolving the perspective ambiguity and recognizing furniture items used as landmarks in the descriptions. Taken together, the work presented here offers the key building blocks of a robust system that takes as input natural spatial language descriptions and produces commands that drive the robot to successfully fetch objects within our eldercare scenario.

  6. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

    Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  7. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  8. I want what you've got: Cross platform portability and human-robot interaction assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Julie L. Marble, Ph.D.; Douglas A. Few; David J. Bruemmer

    2005-08-01

    Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy that combines with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems for how humans and robots interact in true interactive teams.

  9. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2017-04-01

    Full Text Available Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary” movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.
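
    The decoupling idea behind Eigenmovements can be illustrated on a generic linearized two-segment model: diagonalizing the coupled dynamics yields one independent scalar equation per mode. The inertia and stiffness matrices below are arbitrary illustrative numbers, not the identified robot dynamics from the paper.

        import numpy as np
        from scipy.linalg import eigh

        # Hypothetical linearized hip-ankle dynamics:  M q'' + K q = tau
        M = np.array([[2.0, 0.6],      # inertia matrix with coupling terms
                      [0.6, 1.0]])
        K = np.array([[30.0, 5.0],     # stiffness / gravity linearization
                      [ 5.0, 12.0]])

        # Generalized eigenproblem K v = lam M v gives the modal directions.
        lam, V = eigh(K, M)            # columns of V are M-orthonormal eigenvectors

        # In modal ("eigenmovement-like") coordinates z = V^T M q the dynamics
        # decouple into one scalar equation per mode:
        #   z_i'' + lam_i z_i = V[:, i]^T tau
        print("modal frequencies (rad/s):", np.sqrt(lam))
        print("transformed inertia:", V.T @ M @ V)     # ~ identity (no coupling)
        print("transformed stiffness:", V.T @ K @ V)   # ~ diag(lam)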

  10. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot's vision system, which can enhance the robot's real-time interaction ability with the human, is one of the main keys toward realizing such an autonomous robot. In this work, we are suggesting a bioinspired vision system that helps to develop an advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the rules that the horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptors distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.
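
    A caricature of the horizontal-cell-inspired processing is simple lateral inhibition, where each pixel is compared against the mean of its local surround. This is a generic center-surround sketch under that assumption, not the authors' dynamic edge detection algorithm.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lateral_inhibition_edges(img, surround=5, gain=1.0):
            """Center-surround (lateral inhibition) response: each pixel minus the
            mean of its local neighbourhood, loosely mimicking horizontal cells."""
            img = img.astype(float)
            surround_mean = uniform_filter(img, size=2 * surround + 1, mode="nearest")
            return gain * (img - surround_mean)

        # Hypothetical image: a bright square on a dark background.
        img = np.zeros((64, 64))
        img[20:44, 20:44] = 1.0
        edges = lateral_inhibition_edges(img)
        print("strongest edge response:", edges.max())   # peaks along the square border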

  11. Establishing human situation awareness using a multi-modal operator control unit in an urban search & rescue human-robot team

    NARCIS (Netherlands)

    Larochelle, B.; Kruijff, G.J.M.; Smets, N.; Mioch, T.; Groenewegen, P.

    2011-01-01

    Early on in a disaster it is crucial for humans to make an assessment of the situation, to help determine further action. Robots can potentially assist humans here, particularly when the hotzone is too dangerous for humans. Crucial in this human-robot team effort is that the system of robot and mean

  12. Artificial nociception and motor responses to pain, for humans and robots.

    Science.gov (United States)

    Bagnato, Carlo; Takagi, Atsushi; Burdet, Etienne

    2015-01-01

    This concept paper describes nociception and the role of pain in humans. Understanding the mechanisms of pain can give insight into the implementation of artificial pain for robots. Identification of noxious contacts could help robots to elicit reactions in order to avoid or minimize damage to the robot and the environment. The information processing of artificial pain can also be used to optimally regulate incoming sensory information and prevent accidents or real pain to the users of robotic systems and prostheses, improving the performance of robots and their interaction with human users. Besides the applications of artificial nociception for robotic manipulation and intelligent prostheses, the development of computational models of pain mechanisms for the discrimination of noxious stimuli from innocuous touch can find crucial clinical applications, addressing the vulnerable non-verbal population who are unable to report pain.

  13. Modeling and Simulation for Exploring Human-Robot Team Interaction Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dudenhoeffer, Donald Dean; Bruemmer, David Jonathon; Davis, Midge Lee

    2001-12-01

    Small-sized and micro-robots will soon be available for deployment in large-scale forces. Consequently, the ability of a human operator to coordinate and interact with largescale robotic forces is of great interest. This paper describes the ways in which modeling and simulation have been used to explore new possibilities for human-robot interaction. The paper also discusses how these explorations have fed implementation of a unified set of command and control concepts for robotic force deployment. Modeling and simulation can play a major role in fielding robot teams in actual missions. While live testing is preferred, limitations in terms of technology, cost, and time often prohibit extensive experimentation with physical multi-robot systems. Simulation provides insight, focuses efforts, eliminates large areas of the possible solution space, and increases the quality of actual testing.

  14. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    Science.gov (United States)

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  15. Decision-Theoretical Navigation of Service Robots Using POMDPs with Human-Robot Co-Occurrence Prediction

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2013-02-01

    Full Text Available To improve the natural human‐avoidance skills of service robots, a human motion predictive navigation method is proposed, namely PN‐POMDP. A human‐robot motion co‐occurrence estimation algorithm is proposed which incorporates long‐term and short‐term human motion prediction. To improve the reliability of probabilistic and predictive navigation, the POMDP model is utilized to generate navigation control policies through theoretically optimal decisions. A layered motion control structure is proposed that combines global path planning and reactive avoidance. Multiple comity policies are integrated with a decision‐making module that generates efficient and human‐compliant navigational behaviours for robots. Experimental results illustrate the effectiveness and reliability of the predictive navigation method.
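
    The probabilistic machinery underlying such navigation can be illustrated with the standard discrete POMDP belief update; the corridor states and the transition and observation models below are toy values unrelated to the PN-POMDP implementation described in the paper.

        import numpy as np

        def belief_update(b, a, o, T, Z):
            """Bayesian belief update for a discrete POMDP.
            b: belief over states, T[a][s, s']: transition probabilities,
            Z[a][s', o]: observation likelihoods."""
            pred = T[a].T @ b                 # predicted state distribution
            b_new = Z[a][:, o] * pred         # weight by observation likelihood
            return b_new / b_new.sum()

        # Hypothetical 3-state corridor (left / middle / right), 2 actions
        # (stay, move right), and a noisy "person detected here" observation.
        T = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],   # stay
                      [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]]])  # move right
        Z = np.array([[[0.8, 0.2], [0.3, 0.7], [0.3, 0.7]],                  # stay
                      [[0.8, 0.2], [0.3, 0.7], [0.3, 0.7]]])                 # move right

        b = np.array([1 / 3, 1 / 3, 1 / 3])          # start fully uncertain
        for a, o in [(1, 1), (1, 1), (0, 0)]:        # actions taken, observations seen
            b = belief_update(b, a, o, T, Z)
            print("belief over states:", np.round(b, 3))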

  16. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong

    2015-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of regular patterns and mechanism is very challenging and still rare. Self-organized online collaborative activities with precise record of event timing provide unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to increasing number of interacting individuals. This unfolding allows us to obtain analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...
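
    The three-module unfolding described above (Poissonian initiation, power-law response waiting times, population growth) can be caricatured by sampling inter-update intervals from a mixture of exponential and Pareto waiting times; the rate, exponent and mixing weight below are arbitrary illustrative choices, not the fitted Wikipedia parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_intervals(n=20000, rate=0.2, alpha=1.5, p_response=0.7):
            """Toy model of collaborative update dynamics: an inter-update interval
            is either a Poissonian self-initiated edit (exponential waiting time)
            or a cascading response to others (power-law waiting time)."""
            responses = rng.random(n) < p_response
            powerlaw = (1.0 - rng.random(n)) ** (-1.0 / alpha)     # Pareto, minimum 1
            poisson = rng.exponential(1.0 / rate, size=n)
            return np.where(responses, powerlaw, poisson)

        intervals = sample_intervals()
        # The response module dominates long intervals while the Poissonian module
        # fills in short ones, giving a broad, two-regime interval distribution.
        hist, edges = np.histogram(intervals, bins=np.logspace(-1, 3, 25), density=True)
        print(np.round(hist, 5))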

  17. Emulating human leg impairments and disabilities in walking with humanoid robots

    OpenAIRE

    Lengagne, Sébastien; Kheddar, Abderrahmane; Druon, Sébastien; Yoshida, Eiichi

    2011-01-01

    International audience; In this paper, we present a method for emulating human walking motions with leg impairments or disabilities using humanoid robots. Our optimal dynamic multi-contact motion software generates the emulated motions. We take into account the full-body dynamic model of the robot and consider possible leg impairments as additional physical constraints in the optimization problem. The proposed approach is verified using HRP-2 humanoid robot. Simulations and experiments reveal...

  18. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  19. A journey from robot to digital human mathematical principles and applications with MATLAB programming

    CERN Document Server

    Gu, Edward Y L

    2013-01-01

    This book provides readers with a solid set of diversified and essential tools for the theoretical modeling and control of complex robotic systems, as well as for digital human modeling and realistic motion generation. Following a comprehensive introduction to the fundamentals of robotic kinematics, dynamics and control systems design, the author extends robotic modeling procedures and motion algorithms to a much higher-dimensional, larger scale and more sophisticated research area, namely digital human modeling. Most of the methods are illustrated by MATLAB™ codes and sample graphical visualizations, offering a unique closed loop between conceptual understanding and visualization. Readers are guided through practicing and creating 3D graphics for robot arms as well as digital human models in MATLAB™, and through driving them for real-time animation. This work is intended to serve as a robotics textbook with an extension to digital human modeling for senior undergraduate and graduate engineering students....

  20. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center.

    Science.gov (United States)

    Casper, J; Murphy, R R

    2003-01-01

    The World Trade Center (WTC) rescue response provided an unfortunate opportunity to study the human-robot interactions (HRI) during a real unstaged rescue for the first time. A post-hoc analysis was performed on the data collected during the response, which resulted in 17 findings on the impact of the environment and conditions on the HRI: the skills displayed and needed by robots and humans, the details of the Urban Search and Rescue (USAR) task, the social informatics in the USAR domain, and what information is communicated at what time. The results of this work impact the field of robotics by providing a case study for HRI in USAR drawn from an unstaged USAR effort. Eleven recommendations are made based on the findings that impact the robotics, computer science, engineering, psychology, and rescue fields. These recommendations call for group organization and user confidence studies, more research into perceptual and assistive interfaces, and formal models of the state of the robot, state of the world, and information as to what has been observed.

  1. Collaboration

    Science.gov (United States)

    King, Michelle L.

    2010-01-01

    This article explores collaboration between library media educators and regular classroom teachers. The article focuses on the context of the issue, positions on the issue, the impact of collaboration, and how to implement effective collaboration into the school system. Various books and professional journals are used to support conclusions…

  2. Swarm Robot Control for Human Services and Moving Rehabilitation by Sensor Fusion

    Directory of Open Access Journals (Sweden)

    Tresna Dewi

    2014-01-01

    Full Text Available A current trend in robotics is fusing different types of sensors having different characteristics to improve the performance of a robot system and also benefit from the reduced cost of sensors. One type of robot that requires sensor fusion for its application is the service robot. To achieve better performance, several service robots are preferred to work together, and, hence, this paper concentrates on swarm service robots. Swarm service mobile robots operating within a fixed area need to cope with dynamic changes in the environment, and they must also be capable of avoiding dynamic and static obstacles. This study applies sensor fusion and the swarm concept for service mobile robots in a human services and rehabilitation environment. The swarm robots follow the human's moving trajectory to provide support for human movement and to perform several tasks required in the living environment. This study applies a reference control and proportional-integral (PI) control for the obstacle avoidance function. Various computer simulations are performed to verify the effectiveness of the proposed method.
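
    As a minimal sketch of proportional-integral trajectory following of the kind mentioned above, the snippet below drives a point robot toward a moving standoff point behind a hypothetical human path; the gains and trajectory are invented and are not the paper's controller.

        import numpy as np

        class PIController:
            """Simple proportional-integral controller used here to make a point
            robot follow a hypothetical human trajectory at a fixed standoff."""
            def __init__(self, kp=1.2, ki=0.3, dt=0.05):
                self.kp, self.ki, self.dt = kp, ki, dt
                self.integral = np.zeros(2)

            def step(self, error):
                self.integral += error * self.dt
                return self.kp * error + self.ki * self.integral

        dt = 0.05
        ctrl = PIController(dt=dt)
        robot = np.array([0.0, 0.0])
        standoff = np.array([-0.8, 0.0])          # stay 0.8 m behind the person

        for k in range(400):
            t = k * dt
            human = np.array([0.5 * t, np.sin(0.5 * t)])     # person walking a sine path
            error = (human + standoff) - robot               # desired point minus robot
            robot += ctrl.step(error) * dt                   # velocity command integrated

        print("robot:", np.round(robot, 2), "target:", np.round(human + standoff, 2))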

  3. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. Cloud provides unlimited computation power, memory, storage and especially collaboration opportunity. Cloud-enabled robots are divided into two categories as standalone and networked robots. This article surveys cloud robotic platforms, standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM and monitoring.

  4. Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.

    Science.gov (United States)

    Borenstein, Jason; Arkin, Ron

    2016-02-01

    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way to influence human behavior is already well-established but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.

  5. Embedded human control of robots using myoelectric interfaces.

    Science.gov (United States)

    Antuvan, Chris Wilson; Ison, Mark; Artemiadis, Panagiotis

    2014-07-01

    Myoelectric controlled interfaces have become a research interest for use in advanced prostheses, exoskeletons, and robot teleoperation. Current research focuses on improving a user's initial performance, either by training a decoding function for a specific user or implementing "intuitive" mapping functions as decoders. However, both approaches are limiting, with the former being subject specific, and the latter task specific. This paper proposes a paradigm shift on myoelectric interfaces by embedding the human as controller of the system to be operated. Using abstract mapping functions between myoelectric activity and control actions for a task, this study shows that human subjects are able to control an artificial system with increasing efficiency by just learning how to control it. The method efficacy is tested by using two different control tasks and four different abstract mappings relating upper limb muscle activity to control actions for those tasks. The results show that all subjects were able to learn the mappings and improve their performance over time. More interestingly, a chronological evaluation across trials reveals that the learning curves transfer across subsequent trials having the same mapping, independent of the tasks to be executed. This implies that new muscle synergies are developed and refined relative to the mapping used by the control task, suggesting that maximal performance may be achieved by learning a constant, arbitrary mapping function rather than dynamic subject- or task-specific functions. Moreover, the results indicate that the method may extend to the neural control of any device or robot, without limitations for anthropomorphism or human-related counterparts.
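
    The notion of an abstract mapping can be sketched as a fixed, arbitrary linear map from muscle-activity envelopes to control actions that the user must learn through practice; the channel count, mapping matrix and synthetic EMG below are illustrative assumptions only, not the study's setup.

        import numpy as np

        rng = np.random.default_rng(1)

        # A constant, arbitrary ("abstract") mapping from 4 muscle-activity channels
        # to a 2-D control action, of the kind a user would learn through practice.
        W = rng.standard_normal((2, 4))

        def emg_envelope(raw, win=50):
            """Crude EMG envelope: moving average of the rectified signal."""
            kernel = np.ones(win) / win
            return np.array([np.convolve(np.abs(ch), kernel, mode="same") for ch in raw])

        def control_action(envelope_sample):
            """Map one 4-channel envelope sample to a 2-D velocity command."""
            return W @ envelope_sample

        # Hypothetical raw EMG: 4 channels, 2 s at 1 kHz, different activation levels.
        raw = rng.standard_normal((4, 2000)) * np.array([[0.2], [0.8], [0.5], [0.1]])
        env = emg_envelope(raw)
        print("control action at t=1 s:", np.round(control_action(env[:, 1000]), 3))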

  6. Model reference adaptive impedance control for physical human-robot interaction

    Institute of Scientific and Technical Information of China (English)

    Bakur ALQAUDI; Hamidreza MODARES; Isura RANATUNGA; Shaikh M TOUSIF; Frank L LEWIS; Dan O POPA

    2016-01-01

    This paper presents a novel enhanced human-robot interaction system based on model reference adaptive control. The presented method delivers guaranteed stability and task performance and has two control loops. A robot-specific inner loop, which is a neuroadaptive controller, learns the robot dynamics online and makes the robot respond like a prescribed impedance model. This loop uses no task information, including no prescribed trajectory. A task-specific outer loop takes into account the human operator dynamics and adapts the prescribed robot impedance model so that the combined human-robot system has desirable characteristics for task performance. This design is based on model reference adaptive control, but of a nonstandard form. The net result is a controller with both adaptive impedance characteristics and assistive inputs that augment the human operator to provide improved task performance of the human-robot team. Simulations verify the performance of the proposed controller in a repetitive point-to-point motion task. Actual experimental implementations on a PR2 robot further corroborate the effectiveness of the approach.
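
    The prescribed impedance model at the heart of the inner loop is a mass-damper-spring relation between the human force and robot motion. The sketch below simply simulates that reference behaviour for two stiffness settings; the parameters are placeholders, and the neuroadaptive inner loop and adaptive outer loop themselves are not reproduced here.

        import numpy as np

        def impedance_response(f_human, m=1.0, b=8.0, k=20.0, dt=0.002):
            """Response of a prescribed impedance model  m*x'' + b*x' + k*x = f_h.
            This is the behaviour the inner (robot-specific) loop is meant to impose;
            the outer loop would then adapt (m, b, k) to suit task and operator."""
            x, v = 0.0, 0.0
            xs = []
            for f in f_human:
                a = (f - b * v - k * x) / m
                v += a * dt
                x += v * dt
                xs.append(x)
            return np.array(xs)

        # Hypothetical human force profile: a 1 s push of 5 N, then release.
        t = np.arange(0, 3, 0.002)
        f_h = np.where(t < 1.0, 5.0, 0.0)

        x_stiff = impedance_response(f_h, k=20.0)      # stiffer prescribed impedance
        x_soft = impedance_response(f_h, k=5.0)        # softer: larger deflection
        print("peak deflection stiff/soft:", x_stiff.max(), x_soft.max())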

  7. Evaluation of Human and Automation/Robotics Integration Needs for Future Human Exploration Missions

    Science.gov (United States)

    Marquez, Jessica J.; Adelstein, Bernard D.; Ellis, Stephen; Chang, Mai Lee; Howard, Robert

    2016-01-01

    NASA employs Design Reference Missions (DRMs) to define potential architectures for future human exploration missions to deep space, the Moon, and Mars. While DRMs to these destinations share some components, each mission has different needs. This paper focuses on the human and automation/robotic integration needs for these future missions, evaluating them with respect to NASA research gaps in the area of space human factors engineering. The outcomes of our assessment are a human and automation/robotic (HAR) task list for each of the four DRMs that we reviewed (i.e., Deep Space Sortie, Lunar Visit/Habitation, Deep Space Habitation, and Planetary) and a list of common critical HAR factors that drive HAR design.

  8. Framework for Human-Automation Collaboration: Conclusions from Four Studies

    Energy Technology Data Exchange (ETDEWEB)

    Johanna Oxstrand; Katya L. Le Blanc; John O' Hara; Jeffrey C. Joe; April M. Whaley; Heather Medema

    2013-11-01

    The Human Automation Collaboration (HAC) research project is investigating how advanced technologies that are planned for Advanced Small Modular Reactors (AdvSMR) will affect the performance and the reliability of the plant from a human factors and human performance perspective. The HAC research effort investigates the consequences of allocating functions between the operators and automated systems. More specifically, the research team is addressing how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. Oxstrand et al. (2013 - March) describes the efforts conducted by the researchers to identify the research needs for HAC. The research team reviewed the literature on HAC, developed a model of HAC, and identified gaps in the existing knowledge of human-automation collaboration. As described in Oxstrand et al. (2013 - June), the team then prioritized the research topics identified based on the specific needs in the context of AdvSMR. The prioritization was based on two sources of input: 1) The preliminary functions and tasks, and 2) The model of HAC. As a result, three analytical studies were planned and conducted: 1) Models of Teamwork, 2) Standardized HAC Performance Measurement Battery, and 3) Initiators and Triggering Conditions for Adaptive Automation. Additionally, one field study was also conducted at Idaho Falls Power.

  9. STATICS ANALYSIS AND OPENGL BASED 3D SIMULATION OF COLLABORATIVE RECONFIGURABLE PLANETARY ROBOTS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In the last few years, based on the wide range of application domains, multirobot coordination became one of the focuses of robotics. M. B. Dias and A. Stentz [1] described a market-based architecture for coordinating a group of robots to achieve a given objective. Terry Huntsberger, Paolo Pirjanian, Ashitey Trebi-Ollennu, etc., have developed an enabling distributed control architecture called Control Architecture for Multirobot Planetary Outposts (CAMPOUT) [2]. CAMPOUT includes the necessary group behaviors and comm...

  10. Self-organizing method for collaboration in multi-robot system on basis of balance principle

    Institute of Scientific and Technical Information of China (English)

    Yangbin DONA; Jinping JIANG; Yan HE

    2008-01-01

    By analyzing the operation characteristics of two subtasks that have resource dependency on each other, this paper demonstrates the impact of progress relation between the two subtasks on the whole task's progress, and then puts forward a self-organizing principle called balance principle that keeps the individual profit between robots equal. Furthermore, an algorithm is designed for adjusting subtask selection on the basis of this principle. Simulation shows the validity of the algorithm on self-organizing task allocation in a multi-robot system.

  11. About the Evolution of the Human Species: Human Robots and Human Enhancement

    Directory of Open Access Journals (Sweden)

    Loredana TEREC-VLAD

    2014-09-01

    Full Text Available Science and technology have made huge progress, enhancing the human species in the evolution process. The topics related to change, knowledge and new technologies implemented by the individual were considered taboo over 30 years ago; however, nowadays the focus is increasingly laid on the human condition and welfare. Considered to be the latest trend in contemporary philosophy, transhumanism has faced plenty of criticism in terms of human enhancement. It is not surprising that some researchers in the field believe that the access to information and power can lead to a new totalitarian system; therefore, they are sceptical regarding the fact that the society tacitly accepts the invasion of robotics in everyday life. Within this paper we plan to emphasize the fact that human enhancement is not only about human welfare but, on the contrary, scientific information can cause serious harm to the society and the human species, given that the access to information and new technologies may entail consequences such as the division of the society into: inferior species and superior species. On the other hand, despite the positive aspects of the scientific discoveries, they also have hidden sides, which I plan to analyze throughout this paper.

  12. Collaborative Work and the Future of Humanities Teaching

    Directory of Open Access Journals (Sweden)

    Michael Ullyot

    2016-12-01

    Full Text Available This article explores the degree to which student collaborations on research and writing assignments can effectively realize learning outcomes. The assignment, in this case, encouraged students to contribute discrete parts of a research project in order to develop their complementary abilities: researching, consulting, drafting, and revising. The outcomes for students included appreciation for their individual expertise, and experience combining discrete contributions into a result that surpasses the sum of its parts. In the course, we gave students preliminary guidance for establishing team objectives and roles for the duration of this assignment and asked them to evaluate their learning experience at the end. In this paper, we analyze the students’ quantitative and qualitative feedback, and suggest ways to structure and supervise collaborative assignments so that students develop their expertise and complementary skills. We suggest that collaborative work such as this is essential for advanced undergraduates in the humanities, where collaborations are less common than in other disciplines. Moreover, we conclude that future humanities instructors should be open to the benefits of collaborative research and writing. This article will be of interest to instructors who wish to develop collaborative assignments that improve students’ disciplinary expertise, engagement with course materials, and outreach to audiences beyond the academy.

  13. Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration

    Science.gov (United States)

    Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)

    2002-01-01

    Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role in the development of new missions, resulting in a complex interplay of a broad range of factors in the mission development and planning of new missions. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems, as well as new experiments utilizing human and robotic approaches will be described. Finally, there will be a discussion of how best to utilize these various approaches for meeting space exploration goals.

  14. Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2010-01-01

    This paper presents a trajectory planning algorithm for a robot operating in dynamic human environments, such as pedestrian streets, hospital corridors and train stations. We formulate the problem as planning a minimal cost trajectory through a potential field, defined from the perceived position and motion of persons in the environment. A Rapidly-exploring Random Tree (RRT) algorithm is proposed as a solution to the planning problem. A new method for selecting the best trajectory in the RRT, according to the cost of traversing a potential field, is presented. The RRT expansion ... vertex to the tree. Instead of executing a whole trajectory once planned, the algorithm uses a Model Predictive Control (MPC) approach, where only a short segment of the trajectory is executed while a new iteration of the RRT is done. The planning algorithm is demonstrated in a simulated pedestrian...
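
    A minimal sketch of the overall idea, growing an RRT whose edge costs include a person-centred potential and then executing only the first segment in MPC fashion, is given below; the field shape, sampling region and step sizes are invented and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        persons = np.array([[2.0, 1.0], [3.5, 2.5]])          # perceived person positions

        def potential(p, strength=2.0, radius=1.0):
            """Cost of being at point p, high near persons (hypothetical field shape)."""
            d = np.linalg.norm(persons - p, axis=1)
            return strength * np.exp(-(d / radius) ** 2).sum()

        def rrt_plan(start, goal, n_iter=400, step=0.3):
            """Grow an RRT; each edge cost is its length plus the potential sampled
            at its midpoint, so low-cost branches steer around people."""
            nodes = [np.array(start)]
            parent, cost = [0], [0.0]
            for _ in range(n_iter):
                sample = goal if rng.random() < 0.1 else rng.uniform(0, 5, size=2)
                i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
                direction = sample - nodes[i]
                if np.linalg.norm(direction) < 1e-9:
                    continue
                new = nodes[i] + step * direction / np.linalg.norm(direction)
                edge_cost = step + step * potential((nodes[i] + new) / 2)
                nodes.append(new); parent.append(i); cost.append(cost[i] + edge_cost)
            near_goal = [j for j, n in enumerate(nodes) if np.linalg.norm(n - goal) < 0.4]
            best = min(near_goal, key=lambda j: cost[j]) if near_goal else len(nodes) - 1
            path = [best]
            while path[-1] != 0:
                path.append(parent[path[-1]])
            return [nodes[j] for j in reversed(path)]

        path = rrt_plan(start=(0.0, 0.0), goal=np.array([4.5, 4.5]))
        # MPC-style execution: drive only the first segment, then replan from there.
        print("first waypoint to execute:", np.round(path[1], 2), "of", len(path), "nodes")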

  15. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    Science.gov (United States)

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a
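
    The assist-as-needed idea reduces to an error-driven assistance update with a forgetting factor and an error-weighting deadband. The sketch below writes that iteration out with placeholder gains; the controller in the study was derived from an optimization and implemented on a robotic gait trainer, so this is only a schematic of the update law.

        import numpy as np

        def assist_as_needed(errors, f=0.9, g=0.5, deadband=0.02):
            """Error-based assistance update with a forgetting factor:
                u[i+1] = f * u[i] + g * w(e[i]) * e[i]
            The forgetting factor f < 1 decays assistance whenever errors are small;
            the weighting w() ignores errors within a natural-variability deadband."""
            u = 0.0
            history = []
            for e in errors:
                w = 0.0 if abs(e) < deadband else 1.0     # simple error weighting
                u = f * u + g * w * e
                history.append(u)
            return np.array(history)

        # Hypothetical step-by-step kinematic errors: large at first (virtual
        # impairment applied), then shrinking as the subject adapts.
        steps = np.arange(60)
        errors = 0.10 * np.exp(-steps / 15) + 0.01 * np.random.default_rng(3).standard_normal(60)

        assist = assist_as_needed(errors)
        print("assistance fades from", round(assist[:5].mean(), 3),
              "to", round(assist[-5:].mean(), 3))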

  16. Robots for Astrobiology!

    Science.gov (United States)

    Boston, Penelope J.

    2016-01-01

    The search for life and its study is known as astrobiology. Conducting that search on other planets in our Solar System is a major goal of NASA and other space agencies, and a driving passion of the community of scientists and engineers around the world. We practice for that search in many ways, from exploring and studying extreme environments on Earth, to developing robots to go to other planets and help us look for any possible life that may be there or may have been there in the past. The unique challenges of space exploration make collaborations between robots and humans essential. The products of those collaborations will be novel and driven by the features of wholly new environments. For space and planetary environments that are intolerable for humans or where humans present an unacceptable risk to possible biologically sensitive sites, autonomous robots or telepresence offer excellent choices. The search for life signs on Mars fits within this category, especially in advance of human landed missions there, but also as assistants and tools once humans reach the Red Planet. For planetary destinations where we do not envision humans ever going in person, like bitterly cold icy moons, or ocean worlds with thick ice roofs that essentially make them planetary-sized ice caves, we will rely on robots alone to visit those environments for us and enable us to explore and understand any life that we may find there. Current generation robots are not quite ready for some of the tasks that we need them to do, so there are many opportunities for roboticists of the future to advance novel types of mobility, autonomy, and bio-inspired robotic designs to help us accomplish our astrobiological goals. We see an exciting partnership between robotics and astrobiology continually strengthening as we jointly pursue the quest to find extraterrestrial life.

  17. Perception and estimation challenges for humanoid robotics: DARPA Robotics Challenge and NASA Valkyrie

    Science.gov (United States)

    Fallon, Maurice

    2016-10-01

    This paper describes ongoing work at the University of Edinburgh's Humanoid Robotics Project. University of Edinburgh have formed a collaboration with the United States' National Aeronautics and Space Administration (NASA) around their R5 humanoid robot commonly known as Valkyrie. Also involved are MIT, Northeastern University and the Florida Institute for Human and Machine Cognition (IHMC) as part of NASA's Space Robotics Challenge. We will outline the development of state estimation and localization algorithms being developed for Valkyrie.

  18. Human-friendly robotic manipulators: safety and performance issues in controller design

    NARCIS (Netherlands)

    Tadele, Tadele Shiferaw

    2014-01-01

    Recent advances in robotics have spurred its adoption into new application areas such as medical, rescue, transportation, logistics, personal care and entertainment. In the personal care domain, robots are expected to operate in human-present environments and provide non-critical assistance. Success

  19. Human-friendly robotic manipulators: safety and performance issues in controller design

    NARCIS (Netherlands)

    Tadele, T.S.

    2014-01-01

    Recent advances in robotics have spurred its adoption into new application areas such as medical, rescue, transportation, logistics, personal care and entertainment. In the personal care domain, robots are expected to operate in human-present environments and provide non-critical assistance. Success

  20. Children Perseverate to a Human's Actions but Not to a Robot's Actions

    Science.gov (United States)

    Moriguchi, Yusuke; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2010-01-01

    Previous research has shown that young children commit perseverative errors from their observation of another person's actions. The present study examined how social observation would lead children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one…

  1. Making planned paths look more human-like in humanoid robot manipulation planning

    DEFF Research Database (Denmark)

    Zacharias, F.; Schlette, C.; Schmidt, F.

    2011-01-01

    It contradicts the human's expectations when humanoid robots move awkwardly during manipulation tasks. The unnatural motion may be caused by awkward start or goal configurations or by probabilistic path planning processes that are often used. This paper shows that the choice of an arm's target...... for the robot arm....

  2. Children Perseverate to a Human's Actions but Not to a Robot's Actions

    Science.gov (United States)

    Moriguchi, Yusuke; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2010-01-01

    Previous research has shown that young children commit perseverative errors from their observation of another person's actions. The present study examined how social observation would lead children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one…

  3. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanovic, Selma; Francisco, Matthew

    2016-01-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the…

  4. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanovic, Selma; Francisco, Matthew

    2016-01-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the…

  5. Adaptation of human routines to support a robot's tasks planning and scheduling

    Science.gov (United States)

    Tikanmäki, Antti; Feliu, Sandra T.; Röning, Juha

    2013-12-01

    Service robots usually share their workspace with people. Typically, a robot's tasks require knowing when and where people are in order to schedule requested tasks. Robots therefore need to take the presence of humans into account when planning their actions, and it is indispensable to have knowledge of the robot's environment. In practice, this means knowing when (time and event duration) and where (in the workspace) a robot's tasks can be performed. This paper takes steps towards obtaining the spatial information required by software that plans the tasks to be performed by a robot. With this aim, a program is created that defines meaningful areas or zones in the robot workspace through clustering, tied to statistically reasoned time slots for when to perform each task. The software is tested using real data obtained from different cameras located along the corridors of the CSE Department of the University of Oulu.
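
    A small sketch of one way to derive workspace zones and candidate task time slots from logged person detections, roughly along the lines described above; k-means, the zone count and the toy observations are illustrative choices rather than the paper's actual method.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # each logged detection: x [m], y [m], hour of day (invented example data)
    observations = np.array([
        [1.2, 0.8, 9], [1.0, 1.1, 9], [4.8, 3.9, 12],
        [5.1, 4.2, 13], [1.3, 0.9, 17], [4.9, 4.0, 12],
    ])

    zones = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations[:, :2])
    for z in range(2):
        hours = observations[zones.labels_ == z][:, 2]
        busy = np.bincount(hours.astype(int), minlength=24)      # detections per hour in this zone
        quiet = [h for h in range(8, 18) if busy[h] == 0]        # working hours with no people seen
        print(f"zone {z}: centre {zones.cluster_centers_[z].round(1)}, candidate task slots: {quiet}")
    ```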

  6. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    Science.gov (United States)

    2015-08-14

    The project concerns interactions between independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices, operational constraints, and theoretical research on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems.

  7. Collaboration by Design: Using Robotics to Foster Social Interaction in Kindergarten

    Science.gov (United States)

    Lee, Kenneth T. H.; Sullivan, Amanda; Bers, Marina U.

    2013-01-01

    Research shows the importance of social interaction between peers in child development. Although technology can foster peer interactions, teachers often struggle with teaching with technology. This study examined a sample of (n = 19) children participating in a kindergarten robotics summer workshop to determine the effect of teaching using a…

  8. Collaboration by Design: Using Robotics to Foster Social Interaction in Kindergarten

    Science.gov (United States)

    Lee, Kenneth T. H.; Sullivan, Amanda; Bers, Marina U.

    2013-01-01

    Research shows the importance of social interaction between peers in child development. Although technology can foster peer interactions, teachers often struggle with teaching with technology. This study examined a sample of (n = 19) children participating in a kindergarten robotics summer workshop to determine the effect of teaching using a…

  9. How to Build a Robot: Collaborating to Strengthen STEM Programming in a Citywide System

    Science.gov (United States)

    Groome, Meghan; Rodríguez, Linda M.

    2014-01-01

    You have to stick with it. It takes time, patience, trial and error, failure, and persistence. It is almost never perfect or finished, but, with a good team, you can build something that works. These are the lessons youth learn when building a robot, as many do in the out-of-school time (OST) programs supported by the initiative described in this…

  10. Improving Collaborative Play between Children with Autism Spectrum Disorders and Their Siblings: The Effectiveness of a Robot-Mediated Intervention Based on Lego® Therapy

    Science.gov (United States)

    Huskens, Bibi; Palmen, Annemiek; Van der Werff, Marije; Lourens, Tino; Barakova, Emilia

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego® therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a therapeutic setting. A concurrent multiple baseline design…

  11. Improving collaborative play between children with autism spectrum disorders and their siblings: The effectiveness of a robot-mediated intervention based on Lego® therapy

    NARCIS (Netherlands)

    Huskens, B.E.B.M.; Palmen, A.M.J.W.; Werff, M. van der; Lourens, T.; Barakova, E.I.

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego® therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a therapeut

  12. Vocal Interactivity in-and-between Humans, Animals and Robots

    Directory of Open Access Journals (Sweden)

    Roger K Moore

    2016-10-01

    Full Text Available Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: detecting predators and prey, marking territory, expressing emotions, establishing social relations and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalisation provides a valuable communication channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction. Opportunities for cross-fertilisation between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyse different habitats, or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an inter-disciplinary analysis.

  13. Human-robot interaction tests on a novel robot for gait assistance.

    Science.gov (United States)

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot assisting hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests on a young healthy subject were conducted. With the robot completely unpowered, maximum backdriving torques were found to be in the order of 10 Nm, due to the robot's design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic status.

  14. Human-Robot Interface Controller Usability for Mission Planning on the Move

    Science.gov (United States)

    2012-11-01

    This research was sponsored by the Robotics Collaboration Army Technology Objective (RCATO) and the High-Definition Cognition (HD-COG) in Operational Environments Army Technology Objective.

  15. Learning collaborative teamwork: an argument for incorporating the humanities.

    Science.gov (United States)

    Hall, Pippa; Brajtman, Susan; Weaver, Lynda; Grassau, Pamela Anne; Varpio, Lara

    2014-11-01

    A holistic, collaborative interprofessional team approach, which includes patients and families as significant decision-making members, has been proposed to address the increasing burden being placed on the health-care system. This project hypothesized that learning activities related to the humanities during clinical placements could enhance interprofessional teamwork. Through an interprofessional team of faculty, clinical staff, students, and patient representatives, we developed and piloted the self-learning module, "interprofessional education for collaborative person-centred practice through the humanities". The module was designed to provide learners from different professions and educational levels with a clinical placement/residency experience that would enable them, through a lens of the humanities, to better understand interprofessional collaborative person-centred care without structured interprofessional placement activities. Learners reported the self-paced and self-directed module to be a satisfactory learning experience in all four areas of care at our institution, and certain attitudes and knowledge were significantly and positively affected. The module's evaluation resulted in a revised edition providing improved structure and instruction for students with no experience in self-directed learning. The module was recently adapted into an interactive bilingual (French and English) online e-learning module to facilitate its integration into the pre-licensure curriculum at colleges and universities.

  16. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  17. Ghost-in-the-Machine Reveals Human Social Signals for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Sebastian eLoth

    2015-11-01

    Full Text Available We used a new method called Ghost-in-the-Machine (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognise the intentions of customers on the basis of the output of the robotic recognisers. Specifically, we measured which recogniser modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behaviour necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognisers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognisers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  18. When in Rome: the role of culture & context in adherence to robot recommendations

    NARCIS (Netherlands)

    Wang, L.; Rau, P.-L.P.; Evers, V.; Robinson, B.K.; Hinds, P.

    2010-01-01

    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of

  19. When in Rome: the role of culture & context in adherence to robot recommendations

    NARCIS (Netherlands)

    Wang, L.; Rau, P.-L.P.; Evers, V.; Robinson, B.K.; Hinds, P.

    2010-01-01

    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of

  18. Human-robot-interaction control for orthoses with pneumatic soft-actuators--concept and initial trials.

    Science.gov (United States)

    Baiden, David; Ivlev, Oleg

    2013-06-01

    The purpose of this paper is to present a concept of human-robot-interaction control for robots with compliant pneumatic soft-actuators that are directly attached to the human body. The backdrivability of this type of actuator is beneficial for comfort and safety, making it well suited to rehabilitation robots for training activities of daily living (ADL). The concept is verified with an application example of sit-to-stand tasks, taking conventional treatment in neurology as a reference. The focus is on stroke patients, with a target group suffering from hemiplegia and paralysis in one half of the body. A 2-DOF exoskeleton robot was used as a testbed to implement the control concept for supporting rising, based on master-slave position control such that movements of the fit leg are transferred to the affected leg. Furthermore, the wearer of the robot can manually adjust the support for stabilizing the knee joint. Preliminary results are presented.
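
    A minimal sketch of the master-slave idea just described: the measured joint angle of the fit leg becomes the position setpoint for the affected-leg actuator, with an operator-adjustable support gain for knee stabilisation. The proportional law, gains and torque limit are illustrative assumptions, not the paper's controller.

    ```python
    def slave_torque(fit_leg_angle, affected_leg_angle, support_gain=1.0, kp=40.0, tau_max=30.0):
        """Torque command [Nm] for the affected-leg joint tracking the fit leg (assumed P-law)."""
        error = fit_leg_angle - affected_leg_angle
        tau = support_gain * kp * error
        return max(-tau_max, min(tau_max, tau))   # saturate for safety

    # example: fit knee at 0.6 rad during rising, affected knee lagging at 0.4 rad
    print(slave_torque(0.6, 0.4, support_gain=0.8))   # -> 6.4 Nm of assistance
    ```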

  1. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for: 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  2. Digging into data using new collaborative infrastructures supporting humanities-based computer science research

    OpenAIRE

    2011-01-01

    This paper explores infrastructure supporting humanities–computer science research in large-scale image data by asking: Why is collaboration a requirement for work within digital humanities projects? What is required for fruitful interdisciplinary collaboration? What are the technical and intellectual approaches to constructing such an infrastructure? What are the challenges associated with digital humanities collaborative work? We reveal that digital humanities collaboration requ...

  3. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  4. Integrated Robot-Human Control in Mining Operations

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2007-09-30

    This report contains a detailed description of the work conducted for the project on Integrated Robot-Human Control in Mining Operations at the University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The key feature of the project is the kinematics-reconfiguration-type differential control of the excavator, implemented with a variety of 'software machine kinematics'. This software-reconfigured excavator is better suited to executing a given digging task. The human operator retains master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations. The operator can make adjustments to this pattern in real time, as needed, to accommodate rapidly-changing environmental conditions. A working prototype has been developed using a Bobcat 435 excavator. The machine is operational with or without the computer control system, depending on whether the computer interface is on or off. In preparation for emulated mining task tests, typical repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's 'Lone Tree' mine in Nevada. Analysis of these working trajectories has been completed. The motion patterns, when transformed into a family of curves, may serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system. A Cartesian control example has been developed and tested both in simulation and on the experimental excavator. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the

  5. Collaborative human-machine nuclear non-proliferation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greitzer, F.L.; Badalamente, R.V.; Stewart, T.S.

    1993-10-01

    The purpose of this paper is to report on the results of a project investigating support concepts for the information treatment needs of the International Atomic Energy Agency (IAEA, also referred to as the Agency) and its attempts to strengthen international safeguards. The aim of the research was to define user/computer interface concepts and intelligent support features that will enhance the analyst's access to voluminous and diverse information, the ability to recognize and evaluate uncertain data, and the capability to make decisions and recommendations. The objective was to explore techniques for enhancing safeguards analysis through application of (1) more effective user-computer interface designs and (2) advanced concepts involving human/system collaboration. The approach was to identify opportunities for human/system collaboration that would capitalize on human strengths and still accommodate human limitations. This paper documents the findings and describes a concept prototype, Proliferation Analysis Support System (PASS), developed for demonstration purposes. The research complements current and future efforts to enhance the information systems used by the IAEA, but has application elsewhere, as well.

  6. Collaborative human-machine nuclear non-proliferation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greitzer, F.L.; Badalamente, R.V.; Stewart, T.S.

    1993-10-01

    The purpose of this paper is to report on the results of a project investigating support concepts for the information treatment needs of the International Atomic Energy Agency (IAEA, also referred to as the Agency) and its attempts to strengthen international safeguards. The aim of the research was to define user/computer interface concepts and intelligent support features that will enhance the analyst's access to voluminous and diverse information, the ability to recognize and evaluate uncertain data, and the capability to make decisions and recommendations. The objective was to explore techniques for enhancing safeguards analysis through application of (1) more effective user-computer interface designs and (2) advanced concepts involving human/system collaboration. The approach was to identify opportunities for human/system collaboration that would capitalize on human strengths and still accommodate human limitations. This paper documents the findings and describes a concept prototype, Proliferation Analysis Support System (PASS), developed for demonstration purposes. The research complements current and future efforts to enhance the information systems used by the IAEA, but has application elsewhere, as well.

  7. Model of Competencies for Decomposition of Human Behavior: Application to Control System of Robots

    Directory of Open Access Journals (Sweden)

    Jose Vicente Berna-Martinez

    2013-02-01

    Full Text Available Humans and machines have shared the same physical space for many years. To share that space, we want robots to behave like human beings, which will facilitate their social integration, their interaction with humans and the creation of intelligent behavior. To achieve this goal, we need to understand how human behavior is generated, analyzing the tasks carried out by our nervous system and how they relate to one another. Only then can we implement these mechanisms in robotic beings. In this study, we propose a model of competencies based on the human neuroregulator system for the analysis and decomposition of behavior into functional modules. Using this model allows us to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement behavior in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and physical devices of different natures. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while considering the specific characteristics of robots.

  8. Integrated Human-Robotic Missions to the Moon and Mars: Mission Operations Design Implications

    Science.gov (United States)

    Mishkin, Andrew; Lee, Young; Korth, David; LeBlanc, Troy

    2007-01-01

    For most of the history of space exploration, human and robotic programs have been independent, and have responded to distinct requirements. The NASA Vision for Space Exploration calls for the return of humans to the Moon, and the eventual human exploration of Mars; the complexity of this range of missions will require an unprecedented use of automation and robotics in support of human crews. The challenges of human Mars missions, including roundtrip communications time delays of 6 to 40 minutes, interplanetary transit times of many months, and the need to manage lifecycle costs, will require the evolution of a new mission operations paradigm far less dependent on real-time monitoring and response by an Earthbound operations team. Robotic systems and automation will augment human capability, increase human safety by providing means to perform many tasks without requiring immediate human presence, and enable the transfer of traditional mission control tasks from the ground to crews. Developing and validating the new paradigm and its associated infrastructure may place requirements on operations design for nearer-term lunar missions. The authors, representing both the human and robotic mission operations communities, assess human lunar and Mars mission challenges, and consider how human-robot operations may be integrated to enable efficient joint operations, with the eventual emergence of a unified exploration operations culture.

  9. The efficacy of using human myoelectric signals to control the limbs of robots in space

    Science.gov (United States)

    Clark, Jane E.; Phillips, Sally J.

    1988-01-01

    This project was designed to investigate the usefulness of the myoelectric signal as a control in robotics applications. More specifically, the neural patterns associated with human arm and hand actions were studied to determine the efficacy of using these myoelectric signals to control the manipulator arm of a robot. The advantage of this approach to robotic control was the use of well-defined and well-practiced neural patterns already available to the system, as opposed to requiring the human operator to learn new tasks and establish new neural patterns in learning to control a joystick or mechanical coupling device.

  10. Cooperation between humans and robots in fine assembly

    Science.gov (United States)

    Jalba, C. K.; Konold, P.; Rapp, I.; Mann, C.; Muminovic, A.

    2017-01-01

    The development of ever smaller components in manufacturing processes requires the handling, assembly and testing of similarly miniature components. The human eye meets its optical limits with the ongoing miniaturization of parts, since it cannot detect particles smaller than 0.11 mm or register distances below 0.07 mm, such as separating gaps. After several hours of labour, workers cannot accurately differentiate colour nuances, and constant work quality cannot be guaranteed. Assembly is usually done with tools such as microscopes, magnifiers or digital measuring devices. Due to the enormous mental concentration required, a fatigue process quickly sets in; this requires breaks or changes of task and reduces productivity. Dealing with handling devices such as grippers, guide units and actuators for component assembly requires a time-consuming training process, and often a productivity increase is only achieved after years of daily training. Miniaturization is needed everywhere, for instance in surgery, where very small add-on instruments must be provided. In measurement, for example, it is a technological must and a competitive advantage to determine the required data with the smallest possible, highest-resolution sensor. Solution: the realization of a flexible universal workstation, using standard robotic systems and image processing devices in cooperation with humans, where workers are largely freed from highly strenuous physical and fine motor work so that they can do productive work monitoring and adjusting the machine-assisted production process.

  11. Lightening the Load of a USMC Rifle Platoon Through Robotics Integration

    Science.gov (United States)

    2014-06-01

    This study investigates robots that can literally walk, run, and crawl next to the Marine, as well as respond to verbal commands or visual signals. The aim is not only to provide visual aids to the human operators, but to provide audio cues as well, making the robots interact on a more human-like level (Haas and van Erp 2010). Currently none of these systems work side-by-side with their human operators, and the collaboration or teamwork of the human-robot interaction is critical.

  12. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  13. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

    Science.gov (United States)

    2011-10-01

    ...validating metrics for the evaluation of a wide spectrum of human-robot interaction (HRI) issues (Steinfeld et al., 2006); designing human-robot... Trust has been measured almost exclusively via subjective response, measured one time after a specific interaction. However, physiological indicators, such as oxytocin-related...

  14. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and calibration of position and orientation is applied to obtain the neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can easily be used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, using the XABSL programming platform, the experiments showed that the cube-stacking task was easily programmed by human demonstration.
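
    A small sketch of the skin-colour filtering step mentioned above: pixels are kept when their likelihood under a Gaussian-mixture skin model exceeds a threshold, before a (here hypothetical) CNN classifies the gesture. The mixture parameters, chromaticity features and threshold are invented illustrative values, not the trained model from the paper.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # two-component GMM over normalised (r, g) chromaticity (illustrative parameters only)
    weights = [0.6, 0.4]
    means = [np.array([0.45, 0.31]), np.array([0.42, 0.33])]
    covs = [np.diag([0.002, 0.001]), np.diag([0.004, 0.002])]

    def skin_mask(rgb_image, threshold=2.0):
        """Boolean mask of pixels whose skin-GMM likelihood exceeds the (assumed) threshold."""
        rgb = rgb_image.astype(float) + 1e-6
        chroma = rgb[..., :2] / rgb.sum(axis=-1, keepdims=True)      # (r, g) chromaticity per pixel
        like = sum(w * multivariate_normal(m, c).pdf(chroma)
                   for w, m, c in zip(weights, means, covs))
        return like > threshold

    img = np.random.randint(0, 256, size=(4, 4, 3))
    print(skin_mask(img))
    ```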

  15. Dynamic Spatial Hearing by Human and Robot Listeners

    Science.gov (United States)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds when the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
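
    A toy sketch of recursively estimating a sound source's world-frame azimuth from head-relative bearing measurements while the listener (or manikin) rotates, in the spirit of the Kalman-filter approach mentioned above. In this simplified form the measurement model is linear, so a plain Kalman update suffices; the noise level and rotation profile are assumptions.

    ```python
    import random

    def estimate_azimuth(true_az=1.0, steps=50, meas_std=0.2):
        est, var = 0.0, 10.0                    # initial azimuth estimate [rad] and variance
        for k in range(steps):
            head_yaw = 0.05 * k                 # known chair rotation at step k
            z = (true_az - head_yaw) + random.gauss(0.0, meas_std)   # noisy head-relative bearing
            innovation = z - (est - head_yaw)   # predicted measurement uses current estimate
            gain = var / (var + meas_std ** 2)
            est += gain * innovation
            var *= (1 - gain)
        return est

    print(f"estimated azimuth: {estimate_azimuth():.2f} rad (true 1.00)")
    ```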

  16. A meta-analysis of factors affecting trust in human-robot interaction.

    Science.gov (United States)

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

    We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined with an especial evaluation of the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.

  17. Human-inspired lighting control in robot systems with the intention of glare avoidance

    Institute of Scientific and Technical Information of China (English)

    Chen Shengyong; Guan Qiu; Liu Sheng; Bi Dexue

    2011-01-01

    This paper presents some human-inspired strategies for lighting control in a robot system for best scene interpretation, where the main intention is to avoid possible glares or highlights occurring in images. It firstly compares the characteristics of human eyes and robot eyes. Then some evaluation criteria are addressed to assess the lighting conditions. A bio-inspired method is adopted to avoid the visual glare which is caused by either direct illumination from large light sources or indirect illumination reflected by smooth surfaces. Appropriate methods are proposed to optimize the pose and optical parameters of the light source and the vision camera.

  18. Collaboration.

    Science.gov (United States)

    McDonald, Meme; Pryor, Boori Monty

    2000-01-01

    Describes, in the words of two Australian authors (one Aboriginal and one European-Australian), how they work together when they write books together, and how their collaboration goes beyond the two of them. (SR)

  19. NASA Human Research Wiki - An Online Collaboration Tool

    Science.gov (United States)

    Barr, Y. R.; Rasbury, J.; Johnson, J.; Barsten, K.; Saile, L.; Watkins, S. D.

    2011-01-01

    In preparation for exploration-class missions, the Exploration Medical Capability (ExMC) element of NASA's Human Research Program (HRP) has compiled a large evidence base, which previously was available only to persons within the NASA community. The evidence base is comprised of several types of data, for example: information on more than 80 medical conditions which could occur during space flight, derived from several sources (including data on incidence and potential outcomes of these medical conditions, as captured in the Integrated Medical Model's Clinical Finding Forms). In addition, approximately 35 gap reports are included in the evidence base, identifying current understanding of the medical challenges for exploration, as well as any gaps in knowledge and/or technology that would need to be addressed in order to provide adequate medical support for these novel missions. In an effort to make the ExMC information available to the general public and increase collaboration with subject matter experts within and outside of NASA, ExMC has developed an online collaboration tool, very similar to a wiki, titled the NASA Human Research Wiki. The platform chosen for this data sharing, and the potential collaboration it could generate, is a MediaWiki-based application that would house the evidence, allow "read only" access to all visitors to the website, and editorial access to credentialed subject matter experts who have been approved by the Wiki's editorial board. Although traditional wikis allow users to edit information in real time, the NASA Human Research Wiki includes a peer review process to ensure quality and validity of information. The wiki is also intended to be a pathfinder project for other HRP elements that may want to use this type of web-based tool. The wiki website will be released with a subset of the data described and will continue to be populated throughout the year.

  20. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    Science.gov (United States)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

    This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them, or more likely to be intended for some other hearer. We will describe data collected during a series of experiments involving teams of multiple human and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator and links to a remote science team. The two subjects are performing a task (geological exploration) which requires them to speak with each other while also speaking with their assistants. The technique used here is to use a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
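
    A toy sketch of the idea described above: score an utterance under a language model trained on prior robot-directed speech and under a background model, and label it robot-directed when the first scores higher. A smoothed bigram model stands in here for the probabilistic context-free grammar mentioned in the record, and the training sentences are invented examples.

    ```python
    import math
    from collections import Counter

    def train_bigram(sentences):
        bigrams, unigrams = Counter(), Counter()
        for s in sentences:
            toks = ["<s>"] + s.lower().split()
            unigrams.update(toks)
            bigrams.update(zip(toks, toks[1:]))
        return bigrams, unigrams

    def logprob(sentence, model, vocab_size=1000):
        """Add-one smoothed bigram log-probability of the sentence under the model."""
        bigrams, unigrams = model
        toks = ["<s>"] + sentence.lower().split()
        return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))
                   for a, b in zip(toks, toks[1:]))

    robot_lm = train_bigram(["rover move forward", "rover take a sample", "rover stop"])
    human_lm = train_bigram(["look at that ridge", "let's head back to the habitat"])

    utterance = "rover take a panorama"
    directed_to_robot = logprob(utterance, robot_lm) > logprob(utterance, human_lm)
    print("robot-directed" if directed_to_robot else "human-directed")
    ```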

  1. Multisensor-based human detection and tracking for mobile service robots.

    Science.gov (United States)

    Bellotto, Nicola; Hu, Huosheng

    2009-02-01

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of an unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.

  2. Automatic Navigation for Rat-Robots with Modeling of the Human Guidance

    Institute of Scientific and Technical Information of China (English)

    Chao Sun; Nenggan Zheng; Xinlu Zhang; Weidong Chen; Xiaoxiang Zheng

    2013-01-01

    A bio-robot system refers to an animal equipped with a Brain-Computer Interface (BCI), through which outer stimulation is delivered directly into the animal's brain to control its behaviors. The development of bio-robots suffers from the dependency on real-time guidance by human operators. Because of its inherent difficulties, there is no feasible method for automatic control of bio-robots yet. In this paper, we propose a new method to realize automatic navigation for bio-robots. A General Regression Neural Network (GRNN) is adopted to analyze and model the controlling procedure of human operations. Compared to traditional approaches with explicit control rules, our algorithm learns the control process and imitates the decision-making of human beings to steer the rat-robot automatically. In real-time navigation experiments, our method successfully controls bio-robots to follow given paths automatically and precisely. This work would be significant for future applications of bio-robots and provides a new way to realize hybrid intelligent systems that combine artificial intelligence with natural biological intelligence.
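
    A minimal sketch of a General Regression Neural Network (essentially Nadaraya-Watson kernel regression) imitating an operator's steering commands from logged state-command pairs, as the record describes. The toy state encoding, bandwidth and data are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    class GRNN:
        def __init__(self, sigma=0.5):
            self.sigma = sigma                       # kernel bandwidth (smoothing parameter)

        def fit(self, states, commands):
            self.X = np.asarray(states, float)       # logged rat-robot states
            self.y = np.asarray(commands, float)     # operator stimulation commands
            return self

        def predict(self, state):
            d2 = np.sum((self.X - np.asarray(state, float)) ** 2, axis=1)
            w = np.exp(-d2 / (2 * self.sigma ** 2))  # pattern-layer activations
            return float(w @ self.y / w.sum())       # weighted average of stored commands

    # toy data: state = (heading error, distance to path), command in [-1, 1]
    grnn = GRNN().fit([[0.5, 0.2], [-0.4, 0.1], [0.0, 0.0]], [0.8, -0.7, 0.0])
    print(grnn.predict([0.3, 0.15]))                 # imitated steering command
    ```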

  3. Passivity-based control of robotic manipulators for safe cooperation with humans

    Science.gov (United States)

    Zanchettin, Andrea Maria; Lacevic, Bakir; Rocco, Paolo

    2015-02-01

    This paper presents a novel approach to the control of articulated robots in unstructured environments. The proposed control ensures several properties. First, the controller guarantees the achievement of a goal position without getting stuck in local minima. Then, the controller makes the closed-loop system passive, which renders the approach attractive for applications where the robot needs to safely interact with humans. Finally, the control law is explicitly shaped by the safety measure - the danger field. The proposed control law has been implemented and validated in a realistic experimental scenario, demonstrating the effectiveness in driving the robot to a given configuration in a cluttered environment, without any offline planning phase. Furthermore, the passivity of the system enables the robot to easily accommodate external forces on the tool, when a physical contact between the robot and the environment is established.
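
    A toy sketch of shaping a goal-attracting velocity command by a safety measure, loosely inspired by the danger-field idea above. The distance-based danger function and the gains are simplified stand-ins for the paper's danger field and passivity-based formulation, not its actual control law.

    ```python
    def danger(dist_to_human, d_safe=1.0):
        """Grows as the human gets closer; zero beyond the (assumed) safety distance."""
        return max(0.0, (d_safe - dist_to_human) / d_safe)

    def velocity_command(pos, goal, dist_to_human, k=0.8):
        scale = 1.0 - danger(dist_to_human)            # slow down near the human
        return tuple(k * scale * (g - p) for p, g in zip(pos, goal))

    print(velocity_command((0.0, 0.0), (1.0, 0.5), dist_to_human=0.4))  # reduced speed
    print(velocity_command((0.0, 0.0), (1.0, 0.5), dist_to_human=2.0))  # full speed
    ```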

  4. Research on Collaboration of Arc-welding Robot Workstation

    Institute of Scientific and Technical Information of China (English)

    陈美宏; 焦恩璋

    2011-01-01

    Using a coupling planning method, the collaborative paths of an arc-welding robot/positioner workstation and a multi-robot welding workstation were planned. Using self-developed collaborative path planners, the drive functions for the simulation models of the arc-welding robot workstations were built. Collaborative simulations of a saddle-shaped weld seam and a twist weld seam were accomplished. The simulation results show that the collaborative path planning method is correct and feasible.

  5. Metaphors to Drive By: Exploring New Ways to Guide Human-Robot Interaction

    Energy Technology Data Exchange (ETDEWEB)

    David J. Bruemmer; David I. Gertman; Curtis W. Nielsen

    2007-08-01

    Autonomous behaviors created by the research and development community are not being extensively utilized within energy, defense, security, or industrial contexts. This paper provides evidence that the interaction methods used alongside these behaviors may not provide a mental model that can be easily adopted or used by operators. Although autonomy has the potential to reduce overall workload, the use of robot behaviors often increased the complexity of the underlying interaction metaphor. This paper reports our development of new metaphors that support increased robot complexity without passing the complexity of the interaction onto the operator. Furthermore, we illustrate how recognition of problems in human-robot interactions can drive the creation of new metaphors for design and how human factors lessons in usability, human performance, and our social contract with technology have the potential for enormous payoff in terms of establishing effective, user-friendly robot systems when appropriate metaphors are used.

  6. Codevelopmental learning between human and humanoid robot using a dynamic neural-network model.

    Science.gov (United States)

    Tani, Jun; Nishimoto, Ryu; Namikawa, Jun; Ito, Masato

    2008-02-01

    This paper examines characteristics of interactive learning between human tutors and a robot having a dynamic neural-network model, which is inspired by human parietal cortex functions. A humanoid robot, with a recurrent neural network that has a hierarchical structure, learns to manipulate objects. Robots learn tasks in repeated self-trials with the assistance of human interaction, which provides physical guidance until the tasks are mastered and learning is consolidated within the neural networks. Experimental results and the analyses showed the following: 1) codevelopmental shaping of task behaviors stems from interactions between the robot and a tutor; 2) dynamic structures for articulating and sequencing of behavior primitives are self-organized in the hierarchically organized network; and 3) such structures can afford both generalization and context dependency in generating skilled behaviors.

  7. Affective and behavioral responses to robot-initiated social touch : Towards understanding the opportunities and limitations of physical contact in human-robot interaction

    NARCIS (Netherlands)

    Willemse, C.J.A.M.; Toet, A.; Erp, J.B.F. van

    2017-01-01

    Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human–robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for

  8. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Michael Jae-Yoon Chung

    Full Text Available A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
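
    To make the goal-inference step concrete, here is a minimal, hypothetical sketch (not the authors' model) of Bayesian goal inference over discrete actions: the posterior over goals is the prior multiplied by the learned action likelihoods, and a low maximum posterior could trigger the request-for-assistance behaviour mentioned above.

```python
import numpy as np

def infer_goal(prior, likelihoods, observed_actions):
    """Toy Bayesian goal inference: P(goal | actions) is proportional to
    P(goal) times the product of P(action | goal) over observed actions."""
    log_post = np.log(prior)
    for a in observed_actions:
        log_post += np.log(likelihoods[:, a])
    log_post -= log_post.max()            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Two hypothetical goals, three discrete action symbols.
prior = np.array([0.5, 0.5])
# likelihoods[g, a] = P(action a | goal g), assumed learned from self-experience.
likelihoods = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.3, 0.6]])
posterior = infer_goal(prior, likelihoods, observed_actions=[0, 0, 1])
print(posterior)   # goal 0 becomes the most probable explanation
# If posterior.max() falls below some threshold, the robot could ask for help.
```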

  9. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis has been carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours between the involved subjects, thus providing some constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method has been developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. It is characterized by a reduced computational cost. The grasp stability has been tested by introducing a quality index that satisfies the form-closure property. The grasping strategy has been validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators to reduce the non-linear optimization problem complexity and lead to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to the mechanical structure) to achieve the optimal grasp configuration.
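
    For readers unfamiliar with the optimizer named above, the following toy sketch shows how a Nelder-Mead search over hand joint angles might be set up with SciPy; the cost function, joint limits and reference posture are invented placeholders, not the paper's indicators or kinematic model.

```python
import numpy as np
from scipy.optimize import minimize

def grasp_cost(q, q_ref, w=10.0):
    """Toy grasp-quality cost: distance of the joint vector q from a reference
    posture q_ref, plus a penalty for leaving simple box limits [0, pi/2]."""
    q = np.asarray(q)
    limit_penalty = np.sum(np.clip(q - np.pi / 2, 0.0, None) ** 2 +
                           np.clip(-q, 0.0, None) ** 2)
    return np.sum((q - q_ref) ** 2) + w * limit_penalty

q_ref = np.deg2rad([40, 45, 30, 50, 35])   # hypothetical 5-joint reference posture
q0 = np.zeros(5)                            # start from the fully open hand
res = minimize(grasp_cost, q0, args=(q_ref,), method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
print(np.rad2deg(res.x))                    # optimized joint angles in degrees
```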

  10. Teaching Human Poses Interactively to a Social Robot

    Directory of Open Access Journals (Sweden)

    Miguel A. Salichs

    2013-09-01

    Full Text Available The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher’s explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red, Green, Blue-Depth)-based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics.

  11. Teaching Human Poses Interactively to a Social Robot

    Science.gov (United States)

    Gonzalez-Pacheco, Victor; Malfaz, Maria; Fernandez, Fernando; Salichs, Miguel A.

    2013-01-01

    The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue- Depth) -based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics. PMID:24048336

  12. Adaptive training algorithm for robot-assisted upper-arm rehabilitation, applicable to individualised and therapeutic human-robot interaction.

    Science.gov (United States)

    Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin

    2013-09-28

    Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable of not only offering more frequent and more accessible therapies but also providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for having more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment comprising a training phase followed by the actual-performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with the help from a real object when compared to just a virtual object. The 'reaching away' movements were longer to execute when compared to the 'returning towards' movements irrespective of the
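
    As a rough illustration of the lead-lag idea (a hypothetical sketch, not the GENTLE/A controller), the snippet below compares the user's progress along a reference minimum-jerk profile with the expected progress and nudges the duration of the next point-to-point movement accordingly; the gains and limits are placeholders.

```python
def minimum_jerk(t, T):
    """Normalized minimum-jerk position profile (0..1) for a movement of duration T."""
    s = min(max(t / T, 0.0), 1.0)
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def adapt_duration(T, lead_lag, gain=0.5, T_min=1.0, T_max=10.0):
    """Shorten the next trial if the user led the reference (lead_lag > 0),
    lengthen it if the user lagged (lead_lag < 0)."""
    return min(max(T - gain * lead_lag, T_min), T_max)

# Example: halfway through a 5 s movement the user has covered 60% of the path,
# while the reference profile predicts 50%, so the user is leading.
T = 5.0
lead_lag = 0.60 - minimum_jerk(2.5, T)      # +0.10 -> user is ahead
print(adapt_duration(T, lead_lag))          # next trial becomes slightly shorter
```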

  13. Collaborative human-machine analysis using a controlled natural language

    Science.gov (United States)

    Mott, David H.; Shemanski, Donald R.; Giammanco, Cheryl; Braines, Dave

    2015-05-01

    A key aspect of an analyst's task in providing relevant information from data is the reasoning about the implications of that data, in order to build a picture of the real world situation. This requires human cognition, based upon domain knowledge about individuals, events and environmental conditions. For a computer system to collaborate with an analyst, it must be capable of following a similar reasoning process to that of the analyst. We describe ITA Controlled English (CE), a subset of English to represent analyst's domain knowledge and reasoning, in a form that it is understandable by both analyst and machine. CE can be used to express domain rules, background data, assumptions and inferred conclusions, thus supporting human-machine interaction. A CE reasoning and modeling system can perform inferences from the data and provide the user with conclusions together with their rationale. We present a logical problem called the "Analysis Game", used for training analysts, which presents "analytic pitfalls" inherent in many problems. We explore an iterative approach to its representation in CE, where a person can develop an understanding of the problem solution by incremental construction of relevant concepts and rules. We discuss how such interactions might occur, and propose that such techniques could lead to better collaborative tools to assist the analyst and avoid the "pitfalls".

  14. Can robots patch-clamp as well as humans? Characterization of a novel sodium channel mutation.

    Science.gov (United States)

    Estacion, M; Choi, J S; Eastman, E M; Lin, Z; Li, Y; Tyrrell, L; Yang, Y; Dib-Hajj, S D; Waxman, S G

    2010-06-01

    Ion channel missense mutations cause disorders of excitability by changing channel biophysical properties. As an increasing number of new naturally occurring mutations have been identified, and the number of other mutations produced by molecular approaches such as in situ mutagenesis has increased, the need for functional analysis by patch-clamp has become rate limiting. Here we compare a patch-clamp robot using planar-chip technology with human patch-clamp in a functional assessment of a previously undescribed Nav1.7 sodium channel mutation, S211P, which causes erythromelalgia. This robotic patch-clamp device can increase throughput (the number of cells analysed per day) by 3- to 10-fold. Both modes of analysis show that the mutation hyperpolarizes activation voltage dependence (8 mV by manual profiling, 11 mV by robotic profiling), alters steady-state fast inactivation so that it requires an additional Boltzmann function for a second fraction of total current (approximately 20% manual, approximately 40% robotic), and enhances slow inactivation (hyperpolarizing shift -15 mV by human, -13 mV robotic). Manual patch-clamping demonstrated slower deactivation and enhanced (approximately 2-fold) ramp response for the mutant channel while robotic recording did not, possibly due to increased temperature and reduced signal-to-noise ratio on the robotic platform. If robotic profiling is used to screen ion channel mutations, we recommend that each measurement or protocol be validated by initial comparison to manual recording. With this caveat, we suggest that, if results are interpreted cautiously, robotic patch-clamp can be used with supervision and subsequent confirmation from human physiologists to facilitate the initial profiling of a variety of electrophysiological parameters of ion channel mutations.
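
    For context on the Boltzmann fits referred to above, here is a generic curve-fitting sketch (synthetic data, not the paper's recordings) for extracting the half-activation voltage V1/2 and slope factor k from normalized conductance measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Standard Boltzmann function used for activation/inactivation curves."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Hypothetical normalized conductance data (voltage in mV).
v = np.arange(-80.0, 41.0, 10.0)
rng = np.random.default_rng(1)
g = boltzmann(v, -25.0, 7.0) + rng.normal(0.0, 0.02, v.size)

params, _ = curve_fit(boltzmann, v, g, p0=[-20.0, 5.0])
print("V1/2 = %.1f mV, slope k = %.1f mV" % tuple(params))
# Comparing V1/2 between wild-type and mutant fits quantifies a hyperpolarizing
# shift of the kind reported for S211P.
```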

  15. CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.

    2017-07-14

    We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
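
    A minimal sketch of the correct/confirm interaction style described above, assuming instances are feature vectors and each group is summarized by a centroid; this is an illustrative stand-in, not the CHISSL implementation.

```python
import numpy as np

class CentroidGrouper:
    """Labelled instances (corrections or confirmations) update per-group
    centroids; unlabelled instances are provisionally assigned to the
    nearest centroid, mimicking the correct/confirm loop."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def correct(self, x, group):              # user moves an instance into a group
        x = np.asarray(x, dtype=float)
        self.sums[group] = self.sums.get(group, np.zeros_like(x)) + x
        self.counts[group] = self.counts.get(group, 0) + 1

    confirm = correct                          # a confirmation adds the same evidence

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        centroids = {g: s / self.counts[g] for g, s in self.sums.items()}
        return min(centroids, key=lambda g: np.linalg.norm(x - centroids[g]))

grouper = CentroidGrouper()
grouper.correct([0.0, 0.1], "round digits")
grouper.correct([1.0, 0.9], "angular digits")
print(grouper.predict([0.9, 1.0]))             # -> "angular digits"
```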

  16. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Directory of Open Access Journals (Sweden)

    Thierry Chaminade

    Full Text Available BACKGROUND: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. METHODOLOGY: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. PRINCIPAL FINDINGS: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. CONCLUSIONS: Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. SIGNIFICANCE: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  17. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  18. Robotics

    Science.gov (United States)

    Rothschild, Lynn J.

    2012-01-01

    Earth's upper atmosphere is an extreme environment: dry, cold, and irradiated. It is unknown whether our aerobiosphere is limited to the transport of life, or there exist organisms that grow and reproduce while airborne (aerophiles); the microenvironments of suspended particles may harbor life at otherwise uninhabited altitudes[2]. The existence of aerophiles would significantly expand the range of planets considered candidates for life by, for example, including the cooler clouds of a hot Venus-like planet. The X project is an effort to engineer a robotic exploration and biosampling payload for a comprehensive survey of Earth's aerobiology. While many one-shot samples have been retrieved from above 15 km, their results are primarily qualitative; variations in method confound comparisons, leaving such major gaps in our knowledge of aerobiology as quantification of populations at different strata and relative species counts[1]. These challenges and X's preliminary solutions are explicated below. X's primary balloon payload is undergoing a series of calibrations before beginning flights in Spring 2012. A suborbital launch is currently planned for Summer 2012. A series of ground samples taken in Winter 2011 is being used to establish baseline counts and identify likely background contaminants.

  19. Educational Robotics as Mindtools

    Science.gov (United States)

    Mikropoulos, Tassos A.; Bellou, Ioanna

    2013-01-01

    Although there are many studies on the constructionist use of educational robotics, they have certain limitations. Some of them refer to robotics education, rather than educational robotics. Others follow a constructionist approach, but give emphasis only to design skills, creativity and collaboration. Some studies use robotics as an educational…

  20. A Meta-Analysis of Factors Influencing the Development of Human-Robot Trust

    Science.gov (United States)

    2011-12-01

    Li, D. Effects of Communication Style and Culture on Ability to Accept Recommendations From Robots. Computers in Human Behavior 2009, 25 (2), 587... 71-81. Caporael, L. R. (1986). Anthropomorphism and mechanomorphism: Two faces of the human machine. Computers in Human Behavior, 2, 215-234.

  1. A Software Framework for Coordinating Human-Robot Teams Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots are expected to fulfill an important role in manned exploration operations. They can reduce the risk of crew EVA and improve crew productivity on routine...

  2. A Software Framework for Coordinating Human-Robot Teams Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots are expected to fulfill an important role in manned exploration operations. They will perform precursor missions to pre-position resources for manned...

  3. Using mixed-initiative human-robot interaction to bound performance in a search task

    Energy Technology Data Exchange (ETDEWEB)

    Curtis W. Nielsen; Douglas A. Few; Devin S. Athey

    2008-12-01

    Mobile robots are increasingly used in dangerous domains, because they can keep humans out of harm’s way. Despite their advantages in hazardous environments, their general acceptance in other less dangerous domains has not been apparent and, even in dangerous environments, robots are often viewed as a “last-possible choice.” In order to increase the utility and acceptance of robots in hazardous domains researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support the human-robot interactions. In a recent “dirty-bomb” experiment, participants exhibited different search strategies making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants independent of prior training and of individual strategies used to accomplish the task.

  4. Constructing a robot-based mechatronic engineering practice platform to cultivate college students' ability for collaborative innovation research

    Institute of Scientific and Technical Information of China (English)

    骆德渊; 秦东兴; 黄洪钟

    2013-01-01

    Robotic technology involves multiple disciplines such as mechanics, electronics, software, and control, which makes it an ideal platform for cultivating the collaborative innovation ability of college students. By integrating high-quality mechatronic experimental teaching resources on and off campus, a robot-based platform for cultivating students' collaborative innovation ability is built, a corresponding training system is constructed, and a new mode of cultivating collaborative innovation talent is explored, with the goal of providing collaborative innovation human resources for the construction of an innovative country.

  5. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators to reduce the non-linear optimization problem complexity and lead to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to the mechanical structure) to achieve the optimal grasp configuration.

  6. Development of Human-Tracking Robot by Using QR Code Recognition

    Science.gov (United States)

    Eimon, Koki; Anezaki, Takashi; Tansuriyavong, Suriyon; Yagi, Yasushi

    In this paper, we propose a human-tracking robot that can be used in commercial establishments such as airports and factories. The human-tracking process involves four main steps. The first step is robust personal identification using QR code recognition, which is the most important step in human tracking. The second step is location detection by shape-based pattern matching, in order to determine the position of the QR code when the tracked person moves far from the robot. The third step is auxiliary re-detection using IR cameras and retroreflectors in case location detection is difficult in the second step. The fourth step is robot control to maintain the correct distance for human tracking. In a measurement experiment, the QR code recognition rate was 99.9% and location detection was shown to be robust. In a robot-control experiment, the tracking was shown to be accurate: during tracking, the robot maintains an appropriate distance from the human.
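
    A toy sketch of the identification and distance-keeping steps, assuming an OpenCV camera pipeline; the gains, target size and control interface are placeholders rather than the authors' system.

```python
import cv2

detector = cv2.QRCodeDetector()

def follow_step(frame, target_width=120.0, k_v=0.004, k_w=0.002):
    """One control step of a toy QR-based follower: decode the badge, keep its
    apparent width near target_width (distance) and its centre near the image
    centre (heading). Returns (found, forward_speed, turn_rate, payload)."""
    payload, points, _ = detector.detectAndDecode(frame)
    if points is None or not payload:
        return False, 0.0, 0.0, None
    corners = points.reshape(-1, 2)
    width = corners[:, 0].max() - corners[:, 0].min()
    cx = corners[:, 0].mean()
    v = k_v * (target_width - width)           # code looks small -> move closer
    w = k_w * (frame.shape[1] / 2.0 - cx)      # re-centre the code in the image
    return True, v, w, payload

# Usage sketch (camera index and velocity limits are placeholders):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# found, v, w, who = follow_step(frame)
```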

  7. Three-dimensional computer-aided human factors engineering analysis of a grafting robot.

    Science.gov (United States)

    Chiu, Y C; Chen, S; Wu, G J; Lin, Y H

    2012-07-01

    The objective of this research was to conduct a human factors engineering analysis of a grafting robot design using computer-aided 3D simulation technology. A prototype tubing-type grafting robot for fruits and vegetables was the subject of a series of case studies. To facilitate the incorporation of human models into the operating environment of the grafting robot, I-DEAS graphic software was applied to establish individual models of the grafting robot in line with Jack ergonomic analysis. Six human models (95th percentile, 50th percentile, and 5th percentile by height for both males and females) were employed to simulate the operating conditions and working postures in a real operating environment. The lower back and upper limb stresses of the operators were analyzed using the lower back analysis (LBA) and rapid upper limb assessment (RULA) functions in Jack. The experimental results showed that if a leg space is introduced under the robot, the operator can sit closer to the robot, which reduces the operator's level of lower back and upper limb stress. The proper environmental layout for Taiwanese operators, for minimum levels of lower back and upper limb stress, is to set the grafting operation at 23.2 cm away from the operator at a height of 85 cm and with 45 cm between the rootstock and scion units.

  8. A New Dynamic Edge Detection toward Better Human-Robot Interaction

    Science.gov (United States)

    Hafiz, Abdul Rahman; Alnajjar, Fady; Murase, Kazuyuki

    A robot's vision plays a significant role in human-robot interaction, e.g., face recognition, expression understanding, motion tracking, etc. Building a strong vision system for the robot is therefore one of the fundamental issues behind the success of such an interaction. Edge detection, known as a basic unit for measuring the strength of any vision system, has recently attracted attention from many robotics research groups. Most of the reported works on this issue have been based on designing a static mask that sequentially moves through the pixels in the image to extract edges. Despite the success of these works, such a static design could restrict the model's performance in some domains. Designing a dynamic mask inspired by the basic principle of the retina, supported by a unique distribution of photoreceptors, could therefore overcome this problem. A human-like robot (RobovieR-2) has been used to examine the validity of the proposed model. The experimental results show the validity of the model and its ability to offer a number of advantages to the robot, such as accurate edge detection and better attention to the user in front of it, which is a step towards better human-robot interaction.
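
    For contrast, the snippet below is the conventional static-mask approach that the abstract argues against: a fixed 3x3 Sobel kernel applied uniformly at every pixel of a grayscale image. The proposed dynamic, retina-inspired mask would instead vary kernel density with distance from the current fixation point; that part is not shown here.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def static_mask_edges(img, thresh=80.0):
    """Conventional static-mask edge detection: slide fixed 3x3 Sobel kernels
    over every pixel of a grayscale image and threshold the gradient magnitude."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    padded = np.pad(img.astype(float), 1, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            gx = np.sum(patch * SOBEL_X)
            gy = np.sum(patch * SOBEL_Y)
            if np.hypot(gx, gy) > thresh:
                out[y, x] = 255
    return out

# Example on a synthetic image with a vertical step edge.
img = np.zeros((32, 32), dtype=np.uint8)
img[:, 16:] = 200
print(static_mask_edges(img).sum() // 255)   # number of edge pixels found
```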

  9. EEG Theta and Mu Oscillations during Perception of Human and Robot Actions

    Directory of Open Access Journals (Sweden)

    Burcu A. Urgen

    2013-11-01

    Full Text Available Perception of others’ actions supports important social skills, such as communication, intention understanding, and empathy. Are mechanisms of action processing in human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans so they can be used as stimuli to address such questions. Here, we recorded EEG during the observation of human and robot actions. Sensorimotor mu (8-13 Hz) rhythm has been linked to the motor simulation aspect of action processing (and to the human mirror neuron system, MNS) and frontal theta (4-8 Hz) rhythm to semantic and memory-related aspects. We explored whether these measures exhibit selectivity for biological entities: for whether the motion and/or the visual appearance of the observed agent is biological. Participants watched videos of three agents performing the same actions. The first was a Human, and had biological motion and appearance. The other two were a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical motion and appearance. Observation of all agents induced significant attenuation in the power of mu oscillations that was equivalent for all agents. Thus, mu suppression, considered an index of the activity of the MNS, did not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta activity thus appears to be sensitive to visual appearance, suggesting artificial agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience thus can allow us to explore functional properties of action processing on the one hand, and help inform the design of social robots on
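
    As background on the two EEG measures, here is a generic band-power sketch (synthetic data, not the study's recordings) that computes mu- and theta-band power from a single channel with Welch's method; mu suppression is then typically expressed as a log ratio against a baseline epoch.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Approximate power of a single-channel EEG trace in the [lo, hi] Hz band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

fs = 256                                    # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(4 * fs)           # placeholder for a real 4 s epoch

mu_power = band_power(eeg, fs, 8, 13)       # sensorimotor mu band
theta_power = band_power(eeg, fs, 4, 8)     # frontal theta band
# Mu suppression during action observation: np.log(mu_obs / mu_baseline) < 0.
print(mu_power, theta_power)
```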

  10. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanović, Selma; Francisco, Matthew

    2016-12-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the social aspects of science and technology, may be especially important for bringing girls into the STEM pipeline. Using a problem-based approach, we designed two robotics challenges. We focus here on the more extended second challenge, in which participants were asked to imagine and build a telepresence robot that would allow others to explore their space from a distance. This research follows four girls as they engage with human-centered telepresence robotics design. We constructed case studies of these target participants to explore their different forms of engagement and phases of interest development—considering facets of behavioral, social, cognitive, and conceptual-to-consequential engagement as well as stages of interest ranging from triggered interest to well-developed individual interest. The results demonstrated that opportunities to personalize their robots and feedback from peers and facilitators were important motivators. We found both explicit and vicarious engagement and varied interest phases in our group of four focus participants. This first iteration of our project demonstrated that human-centered robotics is a promising approach to getting girls interested and engaged in STEM practices. As we design future iterations of our robotics club environment, we must consider how to harness multiple forms of leadership and engagement without marginalizing students with different working preferences.

  11. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanović, Selma; Francisco, Matthew

    2016-08-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the social aspects of science and technology, may be especially important for bringing girls into the STEM pipeline. Using a problem-based approach, we designed two robotics challenges. We focus here on the more extended second challenge, in which participants were asked to imagine and build a telepresence robot that would allow others to explore their space from a distance. This research follows four girls as they engage with human-centered telepresence robotics design. We constructed case studies of these target participants to explore their different forms of engagement and phases of interest development—considering facets of behavioral, social, cognitive, and conceptual-to-consequential engagement as well as stages of interest ranging from triggered interest to well-developed individual interest. The results demonstrated that opportunities to personalize their robots and feedback from peers and facilitators were important motivators. We found both explicit and vicarious engagement and varied interest phases in our group of four focus participants. This first iteration of our project demonstrated that human-centered robotics is a promising approach to getting girls interested and engaged in STEM practices. As we design future iterations of our robotics club environment, we must consider how to harness multiple forms of leadership and engagement without marginalizing students with different working preferences.

  12. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine, improvised explosive devices (IEDs), and chemical, biological, radiological nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high technology computer-supported applications, current robotic design practices rely highly upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  13. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    Science.gov (United States)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

    Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA s Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  14. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann;

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance i...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  15. Fuzzy variable impedance control based on stiffness identification for human-robot cooperation

    Science.gov (United States)

    Mao, Dachao; Yang, Wenlong; Du, Zhijiang

    2017-06-01

    This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the intention of human for co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming at regulating the output fuzzy universe based on the human arm’s stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate that the proposed algorithm is superior.
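
    To illustrate the kind of control law involved (a generic sketch with invented parameters, not the authors' fuzzy system or stiffness identifier), the snippet below integrates a 1-DOF admittance model and adjusts its damping from the measured interaction force, which is the role the fuzzy inference plays above.

```python
def admittance_step(x, xd, f_ext, m, b, k, dt=0.001):
    """One integration step of the 1-DOF model m*xdd + b*xd + k*x = f_ext,
    a standard impedance/admittance formulation for co-manipulation."""
    xdd = (f_ext - b * xd - k * x) / m
    xd_new = xd + xdd * dt
    return x + xd_new * dt, xd_new

def variable_damping(f_ext, df_ext, b_min=5.0, b_max=60.0):
    """Crude stand-in for the fuzzy inference: when the operator pushes hard and
    the force is still rising, lower the damping so the robot complies; otherwise
    keep it high for stability."""
    intent = min(abs(f_ext) / 30.0 + max(df_ext, 0.0) / 10.0, 1.0)
    return b_max - intent * (b_max - b_min)

x, xd, f_prev = 0.0, 0.0, 0.0
for f in [0.0, 5.0, 12.0, 20.0, 18.0, 10.0]:   # sampled hand force (N)
    b = variable_damping(f, f - f_prev)
    x, xd = admittance_step(x, xd, f, m=2.0, b=b, k=0.0)
    f_prev = f
print(x, xd)
```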

  16. Physiological and subjective evaluation of a human-robot object hand-over task.

    Science.gov (United States)

    Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël

    2011-11-01

    In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer by explicitly taking into account the legibility, the safety and the physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable to compare our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we have defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate them on a three dimensional subjective scale. It has appeared that the subjective data were in favor of our reference motion. Eventually the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.

  17. Toward Integrated Soccer Robots

    OpenAIRE

    Shen, Wei-Min; Adibi, Jafar; Adobbati, Rogelio; Cho, Bonghan; Erdem, Ali; Moradi, Hadi; Salemi, Behnam; Tejada, Sheila

    1998-01-01

    Robot soccer competition provides an excellent opportunity for integrated robotics research. In particular, robot players in a soccer game must recognize and track objects in real time, navigate in a dynamic field, collaborate with teammates, and strike the ball in the correct direction. All these tasks demand robots that are autonomous (sensing, thinking, and acting as independent creatures), efficient (functioning under time and resource constraints), cooperative (collaborating with each ot...

  18. Social Intelligence in a Human-Machine Collaboration System

    Science.gov (United States)

    Nakajima, Hiroshi; Morishima, Yasunori; Yamada, Ryota; Brave, Scott; Maldonado, Heidy; Nass, Clifford; Kawaji, Shigeyasu

    In today's information society, it is often argued that a new way of human-machine interaction is needed. In this paper, an agent with social response capabilities has been developed to achieve this goal. There are two kinds of information that are exchanged between two entities: objective and functional information (e.g., facts, requests, states of affairs, etc.) and subjective information (e.g., feelings, sense of relationship, etc.). Traditional interactive systems have been designed to handle the former kind of information. In contrast, in this study social agents handling the latter type of information are presented. The current study focuses on the sociality of the agent from the viewpoint of Media Equation theory. This article discusses the definition, importance, and benefits of social intelligence as agent technology and argues that social intelligence has the potential to enhance the user's perception of the system, which in turn can lead to improvements in the system's performance. In order to implement social intelligence in the agent, a mind model has been developed to render affective expressions and the personality of the agent. The mind model has been implemented in a human-machine collaborative learning system. One differentiating feature of the collaborative learning system is that it has an agent that performs as a co-learner with which the user interacts during the learning session. The mind model controls the social behaviors of the agent, thus making it possible for the user to have more social interactions with the agent. The experiment with the system suggested that a greater degree of learning was achieved when the students worked with the co-learner agent and that the co-learner agent with the mind model that expressed emotions resulted in a more positive attitude toward the system.

  19. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    Science.gov (United States)

    2009-03-01

    robot delivering snacks must have a wide range of mobility. The buildings are large, ranging between 4 and 8 floors. About 1000 people work or visit... service in our office environment, by reducing the distracting noise, and ensuring operation over long periods of time and a variety of floor types... descriptions of service jobs such as a waiter or waitress, or general Sci-Fi characters such as those from the Jetsons. Based on these responses, we felt

  20. Using Human Gestures and Generic Skills to Instruct a Mobile Robot Arm in a Feeder Filling Scenario

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Rath; Høilund, Carsten; Krüger, Volker

    2012-01-01

    Mobile robots that have the ability to cooperate with humans are able to provide new possibilities to manufacturing industries. In this paper, we discuss our mobile robot arm that a) can provide assistance at different locations in a factory and b) that can be programmed using complex human...... actions such as pointing in Take this object. In this paper, we discuss the use of the mobile robot for a feeding scenario where a human operator specifies the parts and the feeders through pointing gestures. The system is partially built using generic robotic skills. Through extensive experiments, we...

  2. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    OpenAIRE

    2013-01-01

    In this paper, we have investigated the concept of "Cultural Robotics" with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. Ac...

  3. I Show You How I Like You: Emotional Human-Robot Interaction through Facial Expression and Tactile Stimulation

    DEFF Research Database (Denmark)

    Canamero, Dolores; Fredslund, Jacob

    2001-01-01

    We report work on a LEGO robot that displays different emotional expressions in response to physical stimulation, for the purpose of social interaction with humans. This is a first step toward our longer-term goal of exploring believable emotional exchanges to achieve plausible interaction...... with a simple robot. Drawing inspiration from theories of human basic emotions, we implemented several prototypical expressions in the robot's caricatured face and conducted experiments to assess the recognizability of these expressions...

  4. Research the Gait Characteristics of Human Walking Based on a Robot Model and Experiment

    Science.gov (United States)

    He, H. J.; Zhang, D. N.; Yin, Z. W.; Shi, J. H.

    2017-02-01

    In order to research the gait characteristics of human walking in different walking modes, a robot model with a single degree of freedom is built in this paper. The system control models of the robot are established with the Matlab/Simulink toolbox, and the gait characteristics of walking straight, walking uphill, turning, and going up and down stairs are analyzed with these models. To verify the correctness of the theoretical analysis, an experiment was carried out. The comparison shows that the theoretical results are in good agreement with the experimental ones. The reasons for the amplitude and phase errors are analyzed and improved methods are given. The robot model and experimental approach provide a foundation for further research on the various gait characteristics of exoskeleton robots.

  5. Robot Arm Control and Having Meal Aid System with Eye Based Human-Computer Interaction (HCI)

    Science.gov (United States)

    Arai, Kohei; Mardiyanto, Ronny

    A robot arm control and meal-assistance system with eye-based HCI is proposed. The proposed system allows a disabled person to select the desired food from the meal tray using only their eyes. The robot arm used for retrieving the desired food is controlled by the human eye, and a tiny camera is mounted at its tip. The disabled person wears glasses on which a single Head Mounted Display (HMD) and a tiny camera are mounted, so that the person can look at the desired food displayed on the HMD and retrieve it by gaze. Experimental results show that disabled persons can retrieve the desired food successfully. It is also confirmed that robot arm control by eye-based HCI is much faster than control by hand.

  6. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    Science.gov (United States)

    2006-07-01

    conduction and throat microphones, and tactile systems. 15. SUBJECT TERMS: auditory control and display, haptic display, human-robot interface, human... for Tactile Display Design... Haptic Display Conclusions and Recommendations... "phantom robot" reacts to the teleoperator’s commands in real time (Kheddar, Chellali, & Coiffet, 2002). Various techniques such as augmented reality

  7. Robotic Platform for Automated Search and Rescue Missions of Humans

    Directory of Open Access Journals (Sweden)

    Eli Kolberg

    2013-02-01

    Full Text Available We present a novel type of model incorporating a special remote life-signals sensing optical system on top of a controllable robotic platform. The remote sensing system consists of a laser and a camera. By properly adapting our optics and by applying a proper image processing algorithm we can sense, within the field of view illuminated by the laser and imaged by the camera, the heartbeats and the blood pulse pressure of subjects (even several simultaneously). The task is to use the developed robotic system for search and rescue missions such as saving survivors from a fire.

  8. In our own image? Emotional and neural processing differences when observing human-human vs human-robot interactions.

    Science.gov (United States)

    Wang, Yin; Quadflieg, Susanne

    2015-11-01

    Notwithstanding the significant role that human-robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human-human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal-parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. © The Author (2015). Published by Oxford University Press.

  9. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  10. Human--machine load sharing in rehabilitation robotics.

    Science.gov (United States)

    Rahman, T; McClenathan, K

    1999-01-01

    A force-assist mechanism has been developed to mount on the Chameleon - a wheelchair mounted rehabilitation robot. The device will amplify the forces applied by the user, making it possible to lift a large weight with a small force. This paper describes the test-bed development; instrumentation of the Chameleon with power assistance; and preliminary results on system efficacy.

  11. Robot Enhancement of Cognitive and Ethical Capabilities of Humans

    NARCIS (Netherlands)

    Fosch Villaronga, Eduard; Kalipalya-Mruthyunjaya, Vishwas; Seibt, Johanna; Norskov, Marco; Andersen, Soren Schack

    2016-01-01

    The aim of this paper is to mold and materialize the future of learning. The paper introduces a Modular Cognitive Educator System (MCES), which aims to help people learn cognitive and ethical capabilities to face one of the indirect impacts of the robot revolution, namely, its impact on the educatio

  12. Long-term human-robot interaction with young users

    NARCIS (Netherlands)

    Baxter, P.; Belpaeme, T.; Canamero, L.; Cosi, P.; Demiris, Y.; Enescu, V.; Et al.

    2011-01-01

    Artificial companion agents have the potential to combine novel means for effective health communication with young patients support and entertainment. However, the theory and practice of long-term child-robot interaction is currently an underdeveloped area of research. This paper introduces an appr

  14. Mission Activity Planning for Humans and Robots on the Moon

    Science.gov (United States)

    Weisbin, C.; Shelton, K.; Lincoln, W.; Elfes, A.; Smith, J.H.; Mrozinski, J.; Hua, H.; Adumitroaie, V.; Silberg, R.

    2008-01-01

    A series of studies is conducted to develop a systematic approach to optimizing, both in terms of the distribution and scheduling of tasks, scenarios in which astronauts and robots accomplish a group of activities on the Moon, given an objective function (OF) and specific resources and constraints. An automated planning tool is developed as a key element of this optimization system.

  15. Playte, a tangible interface for engaging human-robot interaction

    DEFF Research Database (Denmark)

    Christensen, David Johan; Fogh, Rune; Lund, Henrik Hautop

    2014-01-01

    This paper describes a tangible interface, Playte, designed for children animating interactive robots. The system supports physical manipulation of behaviors represented by LEGO bricks and allows the user to record and train their own new behaviors. Our objective is to explore several modes of in...

  16. The NASA Human Research Wiki - An Online Collaboration Tool

    Science.gov (United States)

    Barr, Yael; Rasbury, Jack; Johnson, Jordan; Barstend, Kristina; Saile, Lynn; Watkins, Sharmi

    2012-01-01

    The Exploration Medical Capability (ExMC) element is one of six elements of the Human Research Program (HRP). ExMC is charged with decreasing the risk of "inability to adequately recognize or treat an ill or injured crew member" for exploration-class missions. In preparation for such missions, ExMC has compiled a large evidence base, previously available only to persons within the NASA community. ExMC has developed the "NASA Human Research Wiki" in an effort to make the ExMC information available to the general public and increase collaboration within and outside of NASA. The ExMC evidence base comprises several types of data, including: (1) information on more than 80 medical conditions that could occur during space flight, derived from several sources and including data on incidence and potential outcomes, as captured in the Integrated Medical Model's (IMM) Clinical Finding Forms (CliFFs); and (2) approximately 25 gap reports, which identify any "gaps" in knowledge and/or technology that would need to be addressed in order to provide adequate medical support for these novel missions.

  17. Design specifications of the Human Robotic interface for the biomimetic underwater robot "yellow submarine project"

    CERN Document Server

    Bheemaiah, Anil

    2010-01-01

    This paper describes a web-based multi-agent design for a collision-avoidance, auto-navigation biomimetic submarine for submarine hydroelectricity. The paper describes the nature of the map-topology interface for river bodies and the design of interactive agents for the control of the robotic submarine. The agents are migratory on the web and are designed with an XML/HTML interface with both interactive capabilities and visibility on a map. The paper describes mathematically the user interface and the map definition languages used for the multi-agent description.

  18. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    Science.gov (United States)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  19. Integration of the Multi-DOA Estimation Functionality to Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Caleb Rascon

    2015-02-01

    Full Text Available Sound source localization is important in human interaction, such as in locating the origin of long-distance calls or facing other humans while in a conversation. It is of interest to apply such functionality to the core of human-robot interaction (HRI) and investigate its benefits, if any. In this paper, we propose three strategies for how to integrate the functionality of multiple directions-of-arrival (multi-DOA) estimation with a common scenario, in which the robot acts as a waiter while applying audio source localization. The proposed strategies are: (a) the robot locates calls from users at a relatively long distance; (b) the robot faces the user when taking the order; and (c) the robot announces when the acoustic environment is not conducive to understanding a speech command (mainly when more than one user speaks at once). It was seen that users react favourably to the functionality, and that it even has a noticeable influence on the success of the interaction.
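
    For readers unfamiliar with how a robot can estimate where a call comes from, the sketch below shows a minimal single-source direction-of-arrival estimate for one microphone pair using GCC-PHAT. It is a simplified stand-in for the multi-DOA functionality described above; the microphone spacing, sample rate, and speed of sound are assumptions.

```python
# Minimal GCC-PHAT sketch: direction of arrival of a single source from one
# two-microphone pair. Spacing, sample rate, and speed of sound are assumed.
import numpy as np

def gcc_phat_doa(sig_left, sig_right, fs=16000, mic_distance=0.2, c=343.0):
    n = sig_left.size + sig_right.size
    # Cross-power spectrum with PHAT weighting (unit magnitude per bin).
    X = np.fft.rfft(sig_left, n) * np.conj(np.fft.rfft(sig_right, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)
    max_shift = int(fs * mic_distance / c)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs   # time difference of arrival
    # Clamp to the physically possible range before taking the arcsine.
    return np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0)))
```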

  20. Collaboration between Supported Employment and Human Resource Services: Strategies for Success

    Science.gov (United States)

    Post, Michal; Campbell, Camille; Heinz, Tom; Kotsonas, Lori; Montgomery, Joyce; Storey, Keith

    2010-01-01

    The article presents the benefits of successful collaboration between supported employment agencies and human resource managers when working together to secure employment for individuals with disabilities. Two case studies are presented: one involving a successful collaboration with county human resource managers in negotiating a change in the…

  2. Development of an excretion care support robot with human cooperative characteristics.

    Science.gov (United States)

    Yina Wang; Shuoyu Wang

    2015-01-01

    To support caregiving in an aging society with a shrinking population, various life support robots are being developed. In the authors' laboratory, an excretion care support robot (ECSR) with human cooperative characteristics has been developed to relieve the burden of caregivers and improve the quality of life of bedridden persons. This robot consists of a portable toilet with a storage tank and a mobile robot that can run autonomously to conduct cooperative work with users. Our research focuses on how to improve the motion accuracy and how the robot can cooperate with users. In this paper, to enable the ECSR to move precisely in an indoor environment, a controller is proposed that accounts for center-of-gravity shift and load changes. Then, to perform the cooperative task, two acceleration sensors are used to recognize the user's intended posture and position when moving from bed to toilet. The robot's target angle and position are determined by the user's posture. The effectiveness of the proposed method is verified by a pseudo excretion support experiment.

  3. Learning compliant manipulation through kinesthetic and tactile human-robot interaction.

    Science.gov (United States)

    Kronander, Klas; Billard, Aude

    2014-01-01

    Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the need of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7 DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticists perspective.
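
    Work of this kind typically modulates the stiffness of an impedance controller from the demonstrated corrections. The sketch below shows a generic Cartesian impedance law with an axis-dependent stiffness vector; the gains, damping choice, and setpoints are illustrative assumptions rather than the authors' parameters.

```python
# Sketch of a Cartesian impedance law with axis-dependent stiffness, the kind
# of controller that compliance-learning approaches typically modulate; the
# gain schedule and desired trajectory are assumed, not the authors' values.
import numpy as np

def impedance_force(x, x_dot, x_des, x_des_dot, stiffness, damping_ratio=1.0):
    """F = K (x_d - x) + D (xd_d - x_dot), with D chosen for critical damping."""
    K = np.diag(stiffness)
    D = np.diag(2.0 * damping_ratio * np.sqrt(stiffness))
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

# During a contact phase, demonstrated corrections would lower stiffness along
# the constrained axis (here z) so the arm yields to the environment.
soft_in_z = np.array([400.0, 400.0, 50.0])   # N/m per Cartesian axis (assumed)
f = impedance_force(np.zeros(3), np.zeros(3),
                    np.array([0.0, 0.0, 0.05]), np.zeros(3), soft_in_z)
print(f)
```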

  4. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, non-upright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating the depth information together with the use of modern techniques in new ways, we are able to create an...
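
    The ground- and ceiling-plane removal step mentioned above can be sketched as a small RANSAC plane fit over an (N, 3) point cloud; the thresholds and iteration count below are assumptions, not the parameters of the cited system.

```python
# Illustrative sketch of dominant-plane removal from an (N, 3) point cloud via
# RANSAC in NumPy. Thresholds and iteration counts are assumed values, not the
# parameters used in the cited system.
import numpy as np

def remove_dominant_plane(points, threshold=0.03, iterations=200, rng=None):
    rng = rng or np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        mask = dist < threshold
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[~best_mask]                # keep everything off the plane
```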

  5. EEG theta and Mu oscillations during perception of human and robot actions.

    Science.gov (United States)

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other.
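
    Mu suppression of the kind analyzed above is commonly quantified as the log ratio of 8-13 Hz band power during action observation to band power during a baseline period. The sketch below uses Welch's method for the power estimate; the sampling rate and log-ratio formulation are illustrative assumptions, not the authors' exact analysis.

```python
# Sketch of a mu-suppression measure: 8-13 Hz band power during observation
# relative to a baseline period, via Welch's method. The sampling rate and
# the log-ratio formulation are assumptions for illustration.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() * (freqs[1] - freqs[0])   # rectangle integration

def mu_suppression(observation, baseline, fs=256):
    """Negative values indicate mu suppression during observation."""
    return np.log(band_power(observation, fs, 8, 13) /
                  band_power(baseline, fs, 8, 13))

# Placeholder signals only; real use would pass epoched EEG from a motor channel.
rng = np.random.default_rng(0)
print(mu_suppression(rng.standard_normal(2048), rng.standard_normal(2048)))
```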

  6. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  7. Arousal Regulation and Affective Adaptation to Human Responsiveness by a Robot that Explores and Learns a Novel Environment

    Directory of Open Access Journals (Sweden)

    Antoine eHiolle

    2014-05-01

    Full Text Available In the context of our work in developmental robotics regarding robot-human caregiver interactions, in this paper we investigate how a "baby" robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a "caregiver" to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two "idealized" robot profiles, a "needy" and an "independent" robot, in terms of their use of a caregiver as a means to regulate the "stress" (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness, "responsive" and "non-responsive", to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the "needy" and "independent" axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot.

  8. [On evaluating the robot-based experimental system for biomechanical experiment of human knee].

    Science.gov (United States)

    Deng, Guoyong; Tian, Lianfang; Bai, Bo; Sun, Hui

    2010-02-01

    This report describes how we use a hybrid force-displacement control method to load the human knee and how we analyze the effect and value of our robot-based experimental system through biomechanical experiments on total meniscal resection of the human knee. The robot control system can load the specimens continuously, overcoming the shortcoming of traditional loading methods that can only load discretely. At the same time, the robot-based testing system measures the force (torque) on the specimens and their spatial position under load in real time, which avoids the problems caused by separating force (torque) measurement from displacement measurement and so greatly improves measurement accuracy.
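
    A hybrid force-displacement scheme like the one described above can be sketched as a control step in which a selection matrix routes each Cartesian axis either to a position loop or to a force loop. All gains, setpoints, and the single-step structure below are assumptions for illustration.

```python
# Conceptual sketch of one hybrid force/position control step: a selection
# matrix routes each Cartesian axis either to a position loop or to a force
# loop. All gains and setpoints are assumed, not the cited system's values.
import numpy as np

def hybrid_step(x, f_meas, x_des, f_des, force_axes, kp=2.0, kf=0.001):
    """Return a small Cartesian displacement command for the next cycle."""
    S = np.diag(force_axes.astype(float))     # 1 = force-controlled axis
    position_correction = (np.eye(3) - S) @ (kp * (x_des - x))
    force_correction = S @ (kf * (f_des - f_meas))
    return position_correction + force_correction

# Example: hold position in x/y while regulating a 100 N compressive load in z.
cmd = hybrid_step(x=np.zeros(3), f_meas=np.array([0.0, 0.0, 60.0]),
                  x_des=np.zeros(3), f_des=np.array([0.0, 0.0, 100.0]),
                  force_axes=np.array([0, 0, 1]))
print(cmd)
```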

  9. Gestalt Processing in Human-Robot Interaction: A Novel Account for Autism Research

    Directory of Open Access Journals (Sweden)

    Maya Dimitrova

    2015-12-01

    Full Text Available The paper presents a novel analysis focused on showing that education is possible through robotic enhancement of the Gestalt processing in children with autism, which is not comparable to alternative educational methods such as demonstration and instruction provided solely by human tutors. The paper underlines the conceptualization of cognitive processing of holistic representations traditionally named in psychology as Gestalt structures, emerging in the process of human-robot interaction in educational settings. Two cognitive processes are proposed in the present study - bounding and unfolding - and their role in Gestalt emergence is outlined. The proposed theoretical approach explains novel findings of autistic perception and gives guidelines for design of robot-assistants to the rehabilitation process.

  10. Performance Comparison of Gender and Age Group Recognition for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Myung-Won Lee

    2013-01-01

    Full Text Available In this paper, we focus on a performance comparison of gender and age group recognition used to provide robot application services for Human-Robot Interaction (HRI). HRI is a core technology that enables natural interaction between human and robot. Among the various HRI components, we concentrate on audio-based techniques such as gender and age group recognition from the multichannel microphones and sound board equipped on robots. For comparative purposes, we compare the performance of Mel-Frequency Cepstral Coefficients (MFCC) and Linear Prediction Coding Coefficients (LPCC) in the feature extraction step, and of the Support Vector Machine (SVM) and C4.5 Decision Tree (DT) in the classification step. Finally, we deal with the usefulness of gender and age group recognition for human-robot interaction in home service robot environments.
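
    The compared pipeline, cepstral features feeding either an SVM or a decision tree, can be sketched with off-the-shelf libraries. The example below uses librosa for MFCC extraction and scikit-learn classifiers (with CART standing in for C4.5); the synthetic signals and labels are placeholders, not the authors' data or exact configuration.

```python
# Sketch of an MFCC-plus-classifier comparison using librosa and scikit-learn
# as stand-ins for the original implementation; scikit-learn's CART tree is
# used in place of C4.5, and the signals and labels are placeholders.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def mfcc_features(signal, sr=16000, n_mfcc=13):
    """Mean MFCC vector over the utterance, a common fixed-length summary."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
signals = [rng.standard_normal(16000).astype(np.float32) for _ in range(20)]
labels = [i % 2 for i in range(20)]           # placeholder gender labels
X = np.stack([mfcc_features(s) for s in signals])

for clf in (SVC(kernel="rbf"), DecisionTreeClassifier()):
    clf.fit(X, labels)
    print(type(clf).__name__, "training accuracy:", clf.score(X, labels))
```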

  12. Using Social Robots in Health Settings: Implications of Personalization on Human-Machine Communication

    Directory of Open Access Journals (Sweden)

    Lisa Tam and Rajiv Khosla

    2016-09-01

    Full Text Available In view of the shortage of healthcare workers and a growing aging population, it is worthwhile to explore the applicability of new technologies in improving the quality of healthcare and reducing its cost. However, it remains a challenge to deploy such technologies in environments where individuals have limited knowledge about how to use them. Thus, this paper explores how the social robots designed for use in health settings in Australia have sought to overcome some of the limitations through personalization. Deployed in aged care and home-based care facilities, the social robots are person-centered, emphasizing the personalization of care with human-like attributes (e.g., human appearance) to engage in reciprocal communication with users. While there have been debates over the advantages and disadvantages of personalization, this paper discusses the implications of personalization on the design of the robots for enhancing engagement, empowerment and enablement in health settings.

  13. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by a RGBD sensor. It works in real time and it does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices, remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
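
    A first step in such a pipeline can be sketched as segmenting the hand as the region nearest the depth sensor and reporting its centroid for downstream tracking. The threshold, frame size, and synthetic test frame below are assumptions for illustration, not the method's actual parameters.

```python
# Illustrative first step of a depth-based hand pipeline: segment the points
# closest to the sensor and report the centroid used for downstream tracking.
# Thresholds and the synthetic frame are assumptions, not the cited method.
import numpy as np

def segment_nearest_blob(depth_frame, margin_m=0.10):
    """Return a mask of pixels within `margin_m` of the closest valid depth."""
    valid = depth_frame > 0                       # zero means no return
    nearest = depth_frame[valid].min()
    return valid & (depth_frame < nearest + margin_m)

def hand_centroid(depth_frame):
    mask = segment_nearest_blob(depth_frame)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean(), depth_frame[mask].mean()

# Synthetic 480x640 depth frame with a "hand" 0.6 m away in front of a wall.
frame = np.full((480, 640), 2.0)
frame[200:280, 300:360] = 0.6
print(hand_centroid(frame))
```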

  15. Human capital gains associated with robotic assisted laparoscopic pyeloplasty in children compared to open pyeloplasty.

    Science.gov (United States)

    Behan, James W; Kim, Steve S; Dorey, Frederick; De Filippo, Roger E; Chang, Andy Y; Hardy, Brian E; Koh, Chester J

    2011-10-01

    Robotic assisted laparoscopic pyeloplasty is an emerging, minimally invasive alternative to open pyeloplasty in children for ureteropelvic junction obstruction. The procedure is associated with smaller incisions and shorter hospital stays. To our knowledge previous outcome analyses have not included human capital calculations, especially regarding loss of parental workdays. We compared perioperative factors in patients who underwent robotic assisted laparoscopic and open pyeloplasty at a single institution, especially in regard to human capital changes, in an institutional cost analysis. A total of 44 patients 2 years old or older from a single institution underwent robotic assisted (37) or open (7) pyeloplasty from 2008 to 2010. We retrospectively reviewed the charts to collect demographic and perioperative data. The human capital approach was used to calculate parental productivity losses. Patients who underwent robotic assisted laparoscopic pyeloplasty had a significantly shorter average hospital length of stay (1.6 vs 2.8 days), which translated into human capital gains, e.g. decreased lost parental wages, and lower hospitalization expenses. Future comparative outcome analyses in children should include financial factors such as human capital loss, which can be especially important for families with young children. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
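
    The human capital approach mentioned above can be illustrated with simple arithmetic: lost parental workdays multiplied by a daily wage. The wage figure and the one-parent assumption below are hypothetical inputs, not values from the study; only the 1.6- and 2.8-day stays come from the abstract.

```python
# Worked example of a human-capital calculation: parental productivity loss
# approximated as lost workdays times a daily wage. The wage figure and the
# one-parent assumption are illustrative, not the study's inputs.
def parental_wage_loss(hospital_days, daily_wage=240.0, parents_off_work=1):
    return hospital_days * daily_wage * parents_off_work

open_loss = parental_wage_loss(2.8)       # mean stay after open pyeloplasty
robotic_loss = parental_wage_loss(1.6)    # mean stay after robotic pyeloplasty
print(f"estimated wages preserved per family: ${open_loss - robotic_loss:.2f}")
```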

  16. Sociable Machines: Expressive Social Exchange between Humans and Robots

    Science.gov (United States)

    2000-05-01

    ... as opposed to traditional robot tasks such as manipulating physical objects or navigating through a cluttered space. Typical indoor objects that may also be consistent with skin tone include wooden doors, cream walls, etc.

  17. Humanoid robotics and human-centered initiatives at IRI

    OpenAIRE

    Alenyà, Guillem; Hernàndez, Sergi; Andrade-Cetto, J.; Sanfeliu, Alberto; Torras, Carme

    2009-01-01

    This work was supported by projects: 'Perception, action & cognition through learning of object-action complexes.' (4915), 'Ubiquitous networking robotics in urban settings' (E-00938), 'CONSOLIDER-INGENIO 2010 Multimodal interaction in pattern recognition and computer vision' (V-00069), 'Robotica ubicua para entornos urbanos' (J-01225), 'Grup de recerca consolidat - VIS' (2005SGR-00937), 'Percepción y acción ante incertidumbre' (4803), 'Grup de recerca consolidat - ROBÒTICA' (8007), 'The huma...

  18. Robotic Platform for Automated Search and Rescue Missions of Humans

    OpenAIRE

    Eli Kolberg; Yevgeny Beiderman; Roy Talyosef; Raphi Amsalem; Javier Garcia; Zeev Zalevsky

    2013-01-01

    We present a novel type of model incorporating a special remote life signals sensing optical system on top of a controllable robotic platform. The remote sensing system consists of a laser and a camera. By properly adapting our optics and by applying a proper image processing algorithm we can sense within the field of view, illuminated by the laser and imaged by the camera, the heartbeats and the blood pulse pressure of subjects (even several simultaneously). The task is to use the developed ...

  19. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    Science.gov (United States)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse an unknown terrain in the shortest time. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends both on the robots' levels of autonomy and on the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchy of autonomy characteristic of Groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to the followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
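
    The mapping from a gesture-based interface to robot-level commands at different G-LOA can be sketched as a small dispatch table. The gesture names and command strings below are hypothetical, since the actual UGVL vocabulary is not reproduced in this record.

```python
# Hypothetical sketch of expanding one operator gesture into per-robot commands
# at different group levels of autonomy; the gesture names and command strings
# are placeholders, since the actual UGVL vocabulary is not reproduced here.
GESTURE_TO_COMMAND = {
    "point_forward": "MOVE_FORWARD",
    "raise_left":    "TURN_LEFT",
    "raise_right":   "TURN_RIGHT",
    "fist":          "STOP",
}

def dispatch(gesture, g_loa, robots=("r1", "r2", "r3")):
    """Expand one operator gesture into commands for a given G-LOA."""
    command = GESTURE_TO_COMMAND[gesture]
    if g_loa == "individual":        # (a) address one robot at a time (here the first)
        return {robots[0]: command}
    if g_loa == "leader_follower":   # (b) clone the leader's command to followers
        return {r: command for r in robots}
    return {"group": command}        # (c) a single group-level command

print(dispatch("point_forward", "leader_follower"))
```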

  20. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    Science.gov (United States)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.