WorldWideScience

Sample records for human-robot team interaction

  1. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
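
    The graceful-degradation idea above can be illustrated with a minimal sketch: each modality reports a candidate command with a confidence score, and the fused command comes from the most reliable available mode. The modality names, commands, and threshold below are illustrative assumptions, not details of the described system.

```python
def fuse(observations, threshold=0.5):
    """Pick the highest-confidence command among usable modalities."""
    usable = [o for o in observations if o["conf"] >= threshold]
    if not usable:
        return None                      # degrade gracefully: no command issued
    return max(usable, key=lambda o: o["conf"])["command"]

# Example: voice is drowned out, but gesture and tablet still agree.
obs = [
    {"mode": "gesture", "command": "halt",    "conf": 0.92},
    {"mode": "voice",   "command": "advance", "conf": 0.40},  # noisy environment
    {"mode": "tablet",  "command": "halt",    "conf": 0.75},
]
cmd = fuse(obs)   # the gesture channel wins -> "halt"
```

    In this toy model, losing any single modality only removes one entry from the list; the operator's command still gets through as long as one mode stays above threshold.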

  2. Modeling Leadership Styles in Human-Robot Team Dynamics

    Science.gov (United States)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  3. Modeling and Simulation for Exploring Human-Robot Team Interaction Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dudenhoeffer, Donald Dean; Bruemmer, David Jonathon; Davis, Midge Lee

    2001-12-01

    Small-sized and micro-robots will soon be available for deployment in large-scale forces. Consequently, the ability of a human operator to coordinate and interact with large-scale robotic forces is of great interest. This paper describes the ways in which modeling and simulation have been used to explore new possibilities for human-robot interaction. The paper also discusses how these explorations have fed implementation of a unified set of command and control concepts for robotic force deployment. Modeling and simulation can play a major role in fielding robot teams in actual missions. While live testing is preferred, limitations in terms of technology, cost, and time often prohibit extensive experimentation with physical multi-robot systems. Simulation provides insight, focuses efforts, eliminates large areas of the possible solution space, and increases the quality of actual testing.

  4. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  5. Forming Human-Robot Teams Across Time and Space

    Science.gov (United States)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, communication time of flight is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only operations to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication link, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system also runs when humans are in direct contact with the robot.
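
    The delay-tolerant control mode described above can be illustrated with a toy discrete-time model: the supervisor's local simulation responds instantly, while commands and telemetry each spend a fixed number of control steps in flight. The step counts and the unit-move command stream are illustrative assumptions, not mission parameters.

```python
from collections import deque

ONE_WAY = 2                              # one-way link delay, in control steps

cmd_link = deque([None] * ONE_WAY)       # commands in flight to the robot
tlm_link = deque([None] * ONE_WAY)       # telemetry in flight back to Earth

sim_pos = 0.0                            # operator's no-delay simulation
robot_pos = 0.0                          # actual remote robot
latest_telemetry = None                  # what the supervisor currently sees

for step in range(8):
    cmd = 1.0                            # supervisor commands a unit move
    sim_pos += cmd                       # the simulation responds immediately
    cmd_link.append(cmd)
    arrived = cmd_link.popleft()         # command sent ONE_WAY steps ago
    if arrived is not None:
        robot_pos += arrived             # robot executes with one-way delay
    tlm_link.append(robot_pos)
    latest_telemetry = tlm_link.popleft()

# The simulation leads the robot, which leads the telemetry the supervisor
# sees: sim_pos == 8.0, robot_pos == 6.0, latest_telemetry == 4.0.
```

    The gap between `sim_pos` and `latest_telemetry` is exactly the round trip, which is why the abstract notes the robot itself must guarantee safety over at least that interval.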

  6. Towards the Verification of Human-Robot Teams

    Science.gov (United States)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices falls into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  7. A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    Science.gov (United States)

    Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert

    2006-01-01

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

  8. Human-Robot Teaming: Communication, Coordination, and Collaboration

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I will describe how NASA Ames has been studying how human-robot teams can increase the performance, reduce the cost, and increase the success of a variety of endeavors. The central premise of our work is that humans and robots should support one another in order to compensate for limitations of automation and manual control. This principle has broad applicability to a wide range of domains, environments, and situations. At the same time, however, effective human-robot teaming requires communication, coordination, and collaboration -- all of which present significant research challenges. I will discuss some of the ways that NASA Ames is addressing these challenges and present examples of our work involving planetary rovers, free-flying robots, and self-driving cars.

  9. From Human-Computer Interaction to Human-Robot Social Interaction

    OpenAIRE

    Toumi, Tarek; Zidani, Abdelmadjid

    2014-01-01

    Human-Robot Social Interaction has become one of the most active research fields, in which researchers from different areas propose solutions and directives leading robots to improve their interactions with humans. In this paper we propose to introduce works in both human-robot interaction and human-computer interaction and to make a bridge between them, i.e. to integrate the robot's emotions and capabilities concepts into a human-computer model so that it becomes adequate for human-robot interaction, and discuss chall...

  10. Human-Robot Teams for Unknown and Uncertain Environments

    Science.gov (United States)

    Fong, Terry

    2015-01-01

    Human-robot interaction, often referred to as HRI, is the study of interactions between humans and robots. It is a multidisciplinary field with contributions from human-computer interaction and artificial intelligence.

  11. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  12. Human-Robot Teaming: From Space Robotics to Self-Driving Cars

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I describe how NASA Ames has been developing and testing robots for space exploration. In our research, we have focused on studying how human-robot teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of manual control and autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars, enabling humans to support autonomous vehicles operating in unpredictable and difficult situations.

  13. Human-Robot Planetary Exploration Teams

    Science.gov (United States)

    Tyree, Kimberly

    2004-01-01

    The EVA Robotic Assistant (ERA) project at NASA Johnson Space Center studies human-robot interaction and robotic assistance for future human planetary exploration. Over the past four years, the ERA project has been performing field tests with one or more four-wheeled robotic platforms and one or more space-suited humans. These tests have provided experience in how robots can assist humans, how robots and humans can communicate in remote environments, and what combination of humans and robots works best for different scenarios. The most efficient way to understand what tasks human explorers will actually perform, and how robots can best assist them, is to have human explorers and scientists go and explore in an outdoor, planetary-relevant environment, with robots to demonstrate what they are capable of, and roboticists to observe the results. It can be difficult to have a human expert itemize all the needed tasks required for exploration while sitting in a lab: humans do not always remember all the details, and experts in one arena may not even recognize that the lower level tasks they take for granted may be essential for a roboticist to know about. Field tests thus create conditions that more accurately reveal missing components and invalid assumptions, as well as allow tests and comparisons of new approaches and demonstrations of working systems. We have performed field tests in our local rock yard, in several locations in the Arizona desert, and in the Utah desert. We have tested multiple exploration scenarios, such as geological traverses, cable or solar panel deployments, and science instrument deployments. The configuration of our robot can be changed, based on what equipment is needed for a given scenario, and the sensor mast can even be placed on one of two robot bases, each with different motion capabilities. The software architecture of our robot is also designed to be as modular as possible, to allow for hardware and configuration changes. Two focus

  14. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  15. Emotion based human-robot interaction

    Directory of Open Access Journals (Sweden)

    Berns Karsten

    2018-01-01

    Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm and body gestures or facial expressions can improve understanding of the robot's intention. Conversely, by perceiving such mechanisms of a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception system of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, is presented in the context of human-robot interaction.

  16. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  17. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  18. Human-Robot Interaction and Human Self-Realization

    DEFF Research Database (Denmark)

    Nørskov, Marco

    2014-01-01

    is to test the basis for this type of discrimination when it comes to human-robot interaction. Furthermore, the paper will take Heidegger's warning concerning technology as a vantage point and explore the possibility of human-robot interaction forming a praxis that might help humans to be with robots beyond...

  19. Movement coordination in applied human-human and human-robot interaction

    DEFF Research Database (Denmark)

    Schubö, Anna; Vesper, Cordula; Wiesbeck, Mathey

    2007-01-01

    and describing human-human interaction in terms of goal-oriented movement coordination is considered an important and necessary step for designing and describing human-robot interaction. In the present scenario, trajectories of hand and finger movements were recorded while two human participants performed......The present paper describes a scenario for examining mechanisms of movement coordination in humans and robots. It is assumed that coordination can best be achieved when behavioral rules that shape movement execution in humans are also considered for human-robot interaction. Investigating...... coordination were affected. Implications for human-robot interaction are discussed....

  20. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    Science.gov (United States)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in the shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends on both the levels of the robots' autonomy and the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchy of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
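
    The operator-workload difference between the three control strategies can be sketched with a simple command-counting model. The counting rules below are an assumption for illustration only; the paper itself measures traverse efficiency, not command counts.

```python
def operator_commands(n_robots, n_waypoints, strategy):
    """Count operator commands needed to move a team through a route."""
    if strategy == "individual":      # (a) command each robot separately
        return n_robots * n_waypoints
    if strategy == "leader_clone":    # (b) command the leader; clones follow
        return n_waypoints
    if strategy == "group":           # (c) one command steers the whole team
        return n_waypoints
    raise ValueError(f"unknown strategy: {strategy}")

# For the paper's team of 3 robots, over a hypothetical 10-waypoint route:
counts = {s: operator_commands(3, 10, s)
          for s in ("individual", "leader_clone", "group")}
# -> {'individual': 30, 'leader_clone': 10, 'group': 10}
```

    Even this crude model shows why higher G-LOA yields faster traverses: individual control scales with team size, while leader-cloned and group commands do not.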

  1. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real-time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  2. Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales

    Science.gov (United States)

    Peschel, J.; Young, S. N.

    2017-12-01

    The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence to this growth is a broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing the appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners, in general, as well as those working in fields related to emerging technologies.

  3. Collaborative Tools for Mixed Teams of Humans and Robots

    National Research Council Canada - National Science Library

    Bruemmer, David J; Walton, Miles C

    2003-01-01

    Our approach has been to consider the air vehicles, ground robots and humans as team members with different levels of authority, different communication, processing, power and mobility capabilities...

  4. Human-Robot Interaction: Status and Challenges.

    Science.gov (United States)

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the challenges they pose for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  5. User localization during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, F; Gorostiza, Javi F; Malfaz, María; Salichs, Miguel A

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requirements for natural human-human and human-robot interaction is an adequate spatial arrangement of the interlocutors, that is, being oriented and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable placement of the robot in relation to the user, a proxemic study of human-robot interaction is required, which is described in this paper. The study has been made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented.

  6. Compliance control based on PSO algorithm to improve the feeling during physical human-robot interaction.

    Science.gov (United States)

    Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei

    2016-01-01

    Robots play increasingly important roles in daily life and bring us a lot of convenience, but when people work with robots, significant differences remain between human-human and human-robot interaction. It is our goal to make robots behave in a more human-like way. We design a controller that can sense the force acting on any point of a robot and ensure the robot moves according to that force. First, a spring-mass-dashpot system was used to describe the physical model, and this second-order system is the kernel of the controller. Then, we establish the state-space equations of the system. In addition, the particle swarm optimization algorithm was used to obtain the system parameters. In order to test the stability of the system, the root-locus diagram is shown in the paper. Ultimately, experiments were carried out on the robotic spinal surgery system developed by our team, and the results show that the new controller performs better during human-robot interaction.
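
    The spring-mass-dashpot kernel described above is, in essence, an admittance law: a sensed contact force drives a virtual second-order system whose motion the robot then follows. A minimal sketch, with illustrative parameter values standing in for the PSO-tuned ones:

```python
def admittance_step(x, v, force, m=2.0, b=8.0, k=20.0, dt=0.01):
    """One semi-implicit Euler step of m*a + b*v + k*x = F_ext."""
    a = (force - b * v - k * x) / m     # acceleration from the virtual dynamics
    v += a * dt
    x += v * dt
    return x, v

# A constant 5 N push applied for 5 s of simulated time:
x, v = 0.0, 0.0
for _ in range(500):
    x, v = admittance_step(x, v, force=5.0)
# The compliant displacement settles near the static deflection F/k = 0.25.
```

    Tuning (m, b, k) changes how springy or sluggish the robot feels under contact, which is the "feeling" the paper optimizes with particle swarm optimization.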

  7. Interaction debugging : an integral approach to analyze human-robot interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2006-01-01

    Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction for evaluation purposes fosters the development of improved systems and the

  8. Robotic situational awareness of actions in human teaming

    Science.gov (United States)

    Tahmoush, Dave

    2015-06-01

When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors such as short-range ladar imagers. These produce three-dimensional point-cloud movies that can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. Twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
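A minimal sketch of the nearest-neighbors step described above. The joint-motion feature vectors and action labels are invented for illustration, and the skeletonization and velocity-correlation stages are assumed to have already produced the features.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, examples):
    """examples: list of (feature_vector, label) pairs.
    Returns the label of the training vector closest to the query."""
    return min(examples, key=lambda e: euclidean(query, e[0]))[1]

# Hypothetical joint-velocity features for three actions.
train = [
    ([0.9, 0.1, 0.0], "wave"),
    ([0.1, 0.8, 0.1], "walk"),
    ([0.0, 0.1, 0.9], "crouch"),
]
print(nearest_neighbor([0.8, 0.2, 0.1], train))  # wave
```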

  9. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

This paper introduces a new method to determine a person's pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. … The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments...
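The person-centred potential idea above can be sketched as a cost function that is high inside the person's intimate zone and lowest at a comfortable interaction distance. The zone radii below are illustrative Hall-style values, not the paper's adaptive parameters.

```python
import math

def social_potential(robot_xy, person_xy, intimate=0.45, comfortable=1.2):
    """Cost of a robot position relative to a person: strongly repulsive
    inside the intimate zone, lowest at the comfortable distance."""
    d = math.dist(robot_xy, person_xy)
    if d < intimate:
        return 1.0 / max(d, 1e-3)   # penalize intruding on personal space
    # attractive well centred on the comfortable interaction distance
    return (d - comfortable) ** 2

# The cost at 1.2 m is lower than at 0.3 m (too close) or 3.0 m (too far).
```

A navigation planner would descend this potential to choose an approach point; the paper additionally adapts the functions to whether the person wants interaction.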

  10. Ontological Reasoning for Human-Robot Teaming in Search and Rescue Missions

    NARCIS (Netherlands)

    Bagosi, T.; Hindriks, k.V.; Neerincx, M.A.

    2016-01-01

    In search and rescue missions robots are used to help rescue workers in exploring the disaster site. Our research focuses on how multiple robots and rescuers act as a team, and build up situation awareness. We propose a multi-agent system where each agent supports one member, either human or robot.

  11. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

In this paper, a novel approach to collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for robots to carry out their tasks successfully, especially when they navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods treat the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this causes people and the robot to become confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, the robot and the human collaborate in generating the collision avoidance via interaction: the person notifies the robot of the avoidance direction, and the robot searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects the navigation path autonomously, choosing the path closest to the goal location. Humans interact with the robot through gesture recognition using a Kinect sensor. Two models were used to classify the gestures: a Back-Propagation Neural Network (BPNN) and a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding obstacles is implemented and integrated with the HRI system. 
The system is tested on an H20 robot from DrRobot Company (Canada), and a set of experiments were carried out to evaluate its performance in interacting with humans and avoiding obstacles.
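As a compact stand-in for the BPNN and SVM classifiers used in the paper, the sketch below labels an avoidance-direction gesture with a nearest-centroid rule over hypothetical skeleton features (e.g. normalized hand position relative to the shoulder).

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train_centroids(samples):
    """samples: dict label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(x, centroids):
    """Return the label whose centroid is closest to x."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lab: dist2(x, centroids[lab]))

# Invented training gestures: hand swept left vs. right of the shoulder.
gestures = {
    "avoid_left":  [[-0.9, 0.1], [-0.8, 0.2]],
    "avoid_right": [[0.8, 0.1], [0.9, 0.3]],
}
model = train_centroids(gestures)
print(classify([-0.7, 0.15], model))  # avoid_left
```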

  12. Human-Robot Interaction in High Vulnerability Domains

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. These complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission-critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots, ranging from direct manual control, through shared human-robot control, to no active human control (i.e., human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of these technologies on system performance and on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.

  13. Adaptive heterogeneous multi-robot teams

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1998-11-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail the experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
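ALLIANCE's adaptive action selection is driven by motivational behaviors, chiefly impatience (motivation grows while a task goes unattended) and acquiescence (a robot yields when a teammate is making progress). The toy sketch below, with invented rates and thresholds, shows how a robot's motivation crosses its activation threshold once a teammate stalls.

```python
def motivation_trace(impatience_rate, threshold, others_progressing, steps):
    """Return the step at which this robot activates for the task,
    or None if it never does. others_progressing(t) -> bool says
    whether some teammate is visibly making progress at step t."""
    m = 0.0
    for t in range(steps):
        if others_progressing(t):
            m = 0.0                    # acquiesce: leave the task to them
        else:
            m += impatience_rate       # impatience builds while idle
        if m >= threshold:
            return t                   # take over the task
    return None

# A teammate that fails at t = 10 lets this robot's motivation build
# and cross the threshold a few steps later.
takeover_step = motivation_trace(1.0, 5.0, lambda t: t < 10, steps=30)
```

This mirrors how ALLIANCE achieves fault tolerance: no explicit negotiation is needed, because a stalled teammate simply stops suppressing the others' impatience.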

  14. Enhancing the effectiveness of human-robot teaming with a closed-loop system.

    Science.gov (United States)

    Teo, Grace; Reinerman-Jones, Lauren; Matthews, Gerald; Szalma, James; Jentsch, Florian; Hancock, Peter

    2018-02-01

With technological developments in robotics and their increasing deployment, human-robot teams are set to be a mainstay in the future. To develop robots that possess teaming capabilities, such as being able to communicate implicitly, the present study implemented a closed-loop system. This system enabled the robot to provide adaptive aid without the need for explicit commands from the human teammate, through the use of multiple physiological workload measures. Such measures of workload vary in sensitivity, and there is large inter-individual variability in physiological responses to imposed taskload. Workload models enacted via a closed-loop system should accommodate such individual variability. The present research investigated the effects of adaptive robot aid vs. imposed aid on performance and workload. Results showed that adaptive robot aid driven by an individualized workload model of physiological response resulted in greater improvements in performance compared to aid that was simply imposed by the system. Copyright © 2017 Elsevier Ltd. All rights reserved.
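One way to realize such an individualized workload model, sketched here with invented channel names, weights, and threshold, is to z-score each physiological channel against the operator's own baseline and trigger adaptive aid when a weighted index exceeds the threshold.

```python
import statistics

def workload_index(sample, baseline, weights):
    """sample: dict channel -> current value.
    baseline: dict channel -> list of that operator's resting values.
    Each channel is z-scored against the individual baseline."""
    z = {}
    for ch, value in sample.items():
        mu = statistics.mean(baseline[ch])
        sd = statistics.stdev(baseline[ch]) or 1.0  # guard flat baselines
        z[ch] = (value - mu) / sd
    return sum(weights[ch] * z[ch] for ch in z)

def should_aid(sample, baseline, weights, threshold=1.5):
    """Trigger robot aid when the individualized index is elevated."""
    return workload_index(sample, baseline, weights) > threshold

# Invented operator baseline: heart rate (bpm) and pupil diameter (mm).
baseline = {"hr": [60, 62, 61, 59, 58], "pupil": [3.0, 3.1, 2.9, 3.0, 3.0]}
weights = {"hr": 0.5, "pupil": 0.5}
```

Because the baseline is per-operator, the same raw heart rate can mean high workload for one teammate and rest for another, which is the individual-variability point the abstract makes.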

  15. Human-Robot Teams Informed by Human Performance Moderator Functions

    Science.gov (United States)

    2012-08-29

(Full-text excerpts) …performance factors that affect the ability of a human to drive at night, including the eyesight of the driver and the fatigue level of the driver… human factors are factors that affect the performance of an individual… factors affecting trust in human-robot interaction… Development of NASA-TLX (Task Load Index)…

  16. Accelerating Robot Development through Integral Analysis of Human-Robot Interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2007-01-01

    The development of interactive robots is a complicated process, involving a plethora of psychological, technical, and contextual influences. To design a robot capable of operating "intelligently" in everyday situations, one needs a profound understanding of human-robot interaction (HRI). We propose

  17. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in three years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work, which included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with its relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. 
In the proposed

  18. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

Pantomimic Gestures for Human-Robot Interaction, Michael Burke and Joan Lasenby, IEEE Transactions on Robotics. This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  19. Safe physical human robot interaction- past, present and future

    International Nuclear Information System (INIS)

    Pervez, Aslam; Ryu, Jeha

    2008-01-01

When a robot physically interacts with a human user, the design requirements change drastically. The most important requirement is the safety of the human user, in the sense that the robot should not harm the human in any situation. During the last few years, research has focused on various aspects of safe physical human-robot interaction. This paper provides a review of the work on safe physical interaction of robotic systems sharing their workspace with human users (especially elderly people). Three distinct areas of research are identified: interaction safety assessment, interaction safety through design, and interaction safety through planning and control. The paper then highlights the current challenges and available technologies, and points out future research directions for the realization of a safe and dependable robotic system for human users.

  20. Effect of cognitive biases on human-robot interaction: a case study of robot's misattribution

    OpenAIRE

    Biswas, Mriganka; Murray, John

    2014-01-01

This paper presents a model for developing long-term human-robot interactions and social relationships based on the principle of 'human' cognitive biases applied to a robot. The aim of this work is to study how a robot endowed with human-like 'misattribution' helps to build better human-robot interactions than unbiased robots. The results presented in this paper suggest that it is important to know the effect of cognitive biases on human characteristics and interactions in order to better u...

  1. Designing, developing, and deploying systems to support human-robot teams in disaster response

    NARCIS (Netherlands)

    Kruijff, G.J.M.; Kruijff-Korbayová, I.; Keshavdas, S.; Larochelle, B.; Janíček, M.; Colas, F.; Liu, M.; Pomerleau, F.; Siegwart, R.; Neerincx, M.A.; Looije, R.; Smets, N.J.J.M.; Mioch, T.; Diggelen, J. van; Pirri, F.; Gianni, M.; Ferri, F.; Menna, M.; Worst, R.; Linder, T.; Tretyakov, V.; Surmann, H.; Svoboda, T.; Reinštein, M.; Zimmermann, K.; Petříček, T.; Hlaváč, V.

    2014-01-01

    This paper describes our experience in designing, developing and deploying systems for supporting human-robot teams during disaster response. It is based on R&D performed in the EU-funded project NIFTi. NIFTi aimed at building intelligent, collaborative robots that could work together with humans in

  2. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  3. Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.

    Science.gov (United States)

    Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J

    2017-07-01

Human-robot interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. In this paper, we present metrics that can be used to identify whether a physical interaction occurred between two people, using kinematics. We present a simple Activity of Daily Living (ADL) task that involves a basic interaction, and we show that these metrics can be used to successfully identify interactions.

  4. Toward a framework for levels of robot autonomy in human-robot interaction.

    Science.gov (United States)

    Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A

    2014-07-01

    A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence - and are influenced by - robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.

  5. Velocity-curvature patterns limit human-robot physical interaction.

    Science.gov (United States)

    Maurice, Pauline; Huber, Meghan E; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration.
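The two-thirds power law mentioned above relates tangential speed to path curvature, v(t) = gamma * kappa(t)**(-1/3) (equivalently, angular speed scales with curvature to the 2/3 power). This sketch generates the biological speed profile along an ellipse; the gain gamma and the ellipse axes are arbitrary illustrative values.

```python
import math

def ellipse_curvature(theta, a=0.3, b=0.2):
    """Curvature of an ellipse x = a*cos(theta), y = b*sin(theta)."""
    num = a * b
    den = (a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2) ** 1.5
    return num / den

def two_thirds_speed(theta, gamma=0.1, a=0.3, b=0.2):
    """Tangential speed prescribed by the two-thirds power law."""
    return gamma * ellipse_curvature(theta, a, b) ** (-1.0 / 3.0)

# Speed is lowest where curvature is highest, i.e. at the ends of the
# major axis (theta = 0), reproducing the natural slow-down in curves.
```

A robot tracing the ellipse with this profile moves "biologically"; replacing it with, say, constant speed gives the non-biological condition the study compares against.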

  6. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  7. An Integrated Human System Interaction (HSI) Framework for Human-Agent Team Collaboration, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As space missions become more complex and as mission demands increase, robots, human-robot mixed initiative teams and software autonomy applications are needed to...

  8. Context-Augmented Robotic Interaction Layer (CARIL), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — CHI Systems and the Institute for Human Machine Cognition have teamed to create a human-robot interaction system that leverages cognitive representations of shared...

  9. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provides operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick, and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects that the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. 
The second study expanded on the first study by evaluating the effects of the type of

  10. Extending NGOMSL Model for Human-Humanoid Robot Interaction in the Soccer Robotics Domain

    Directory of Open Access Journals (Sweden)

    Rajesh Elara Mohan

    2008-01-01

In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have been exhibiting a stronger presence in commercial markets, yet very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s. The interaction design was also able to detect a fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior humanoid robots in the RoboCup 2007 humanoid soccer league.

  11. Mapping planetary caves with an autonomous, heterogeneous robot team

    Science.gov (United States)

    Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.

    Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploration and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors, (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors), and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly-unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
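The distributed market-based allocation described above can be sketched as a single-item auction in which only capable robots bid and the lowest-cost bid wins. The robots, sensor capabilities, and distance-based cost function below are invented for illustration.

```python
import math

def bid(robot, task):
    """A robot bids its travel cost, or declines if it lacks the
    sensing capability the task requires."""
    if task["needs"] not in robot["sensors"]:
        return None
    return math.dist(robot["pos"], task["pos"])

def auction(robots, task):
    """Award the task to the lowest-cost capable bidder."""
    bids = [(bid(r, task), r["name"]) for r in robots]
    valid = [b for b in bids if b[0] is not None]
    return min(valid)[1] if valid else None

# A heterogeneous team: a 2D-mapping scout and a 3D-modeling robot.
robots = [
    {"name": "scout", "sensors": {"map2d"}, "pos": (0.0, 0.0)},
    {"name": "modeler", "sensors": {"map2d", "model3d"}, "pos": (5.0, 0.0)},
]
print(auction(robots, {"needs": "model3d", "pos": (1.0, 0.0)}))  # modeler
```

In the paper's framework, a frontier- and capability-based generator would produce the tasks and the auction would run over the team's communication layer; this sketch only shows the award rule.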

  12. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    Science.gov (United States)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them or is more likely to be intended for some other hearer. We describe data collected during a series of experiments involving teams of multiple humans and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator, and links to a remote science team. The two subjects perform a task (geological exploration) that requires them to speak with each other while also speaking with their assistants. The technique used here is to use a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
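The addressee-detection idea can be sketched by scoring an utterance under a model trained on robot-directed speech versus one trained on human-directed speech. The paper trains a probabilistic context-free grammar inside the recognizer; a unigram model with add-one smoothing keeps this sketch short, and all training sentences are invented.

```python
import math
from collections import Counter

def unigram_model(corpus):
    """Return a word-probability function with add-one smoothing,
    so unseen words still get non-zero probability."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)

def log_score(sentence, model):
    """Log-likelihood of a sentence under a unigram model."""
    return sum(math.log(model(w)) for w in sentence.split())

# Invented training data for the two addressee classes.
robot_lm = unigram_model(["rover move forward", "rover take image", "stop rover"])
human_lm = unigram_model(["look at this rock", "i think we should sample here"])

def robot_directed(utterance):
    """Classify an utterance as robot-directed if the robot-speech
    model assigns it the higher likelihood."""
    return log_score(utterance, robot_lm) > log_score(utterance, human_lm)
```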

  13. Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.

    Science.gov (United States)

    Lasota, Przemyslaw A; Shah, Julie A

    2015-02-01

    The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
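The quantitative fluency metrics reported above (concurrent motion, human idle time, robot idle time) can be computed from each agent's activity intervals, as in this sketch with invented timelines.

```python
def active_time(intervals):
    """Total duration covered by a list of (start, end) intervals."""
    return sum(b - a for a, b in intervals)

def overlap(ints_a, ints_b):
    """Total time during which both agents are active simultaneously."""
    out = 0.0
    for a0, a1 in ints_a:
        for b0, b1 in ints_b:
            out += max(0.0, min(a1, b1) - max(a0, b0))
    return out

def fluency(human, robot, task_len):
    """Fractional fluency metrics over a task of length task_len."""
    return {
        "concurrent_motion": overlap(human, robot) / task_len,
        "human_idle": (task_len - active_time(human)) / task_len,
        "robot_idle": (task_len - active_time(robot)) / task_len,
    }

# Invented 10 s task: human active 0-4 s and 6-10 s, robot active 2-8 s.
m = fluency(human=[(0, 4), (6, 10)], robot=[(2, 8)], task_len=10)
```

Higher concurrent motion and lower idle fractions correspond to the improvements the study reports for the human-aware planner.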

  14. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.

  15. Dynamic perceptions of human-likeness while interacting with a social robot

    NARCIS (Netherlands)

    Ruijten, P.A.M.; Cuijpers, R.H.

    2017-01-01

    In human-robot interaction research, much attention is given to the development of socially assistive robots that can have natural interactions with their users. One crucial aspect of such natural interactions is that the robot is perceived as human-like. Much research already exists that

  16. Human-robot interaction assessment using dynamic engagement profiles

    DEFF Research Database (Denmark)

    Drimus, Alin; Poltorak, Nicole

    2017-01-01

This paper addresses the use of convolutional neural networks for image analysis resulting in an engagement metric that can be used to assess the quality of human-robot interactions. We propose a method based on a pretrained convolutional network able to map emotions onto a continuous [0-1] interval, where 0 represents disengaged and 1 fully engaged. The network shows a good accuracy at recognizing the engagement state of humans given positive emotions. A time-based analysis of interaction experiments between small humanoid robots and humans provides time series of engagement estimates, which … and is applicable to humanoid robotics as well as other related contexts.
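An engagement profile of this kind can be sketched as a two-step pipeline: collapse per-frame emotion scores into a scalar in [0, 1], then smooth over time. The emotion weights and window size below are illustrative assumptions, not the paper's trained network:

```python
# Sketch: turning per-frame emotion scores into an engagement time series
# on a [0, 1] scale (0 = disengaged, 1 = fully engaged). The emotion
# weights and the moving-average window are illustrative assumptions.

EMOTIONS = ["happy", "surprised", "neutral", "sad", "angry"]
# Assumed contribution of each emotion to engagement.
WEIGHTS = {"happy": 1.0, "surprised": 0.9, "neutral": 0.5, "sad": 0.2, "angry": 0.1}

def engagement(frame_probs):
    """Collapse a dict of emotion probabilities into one engagement score."""
    total = sum(frame_probs.values())
    return sum(WEIGHTS[e] * p for e, p in frame_probs.items()) / total

def smooth(series, window=3):
    """Moving average to obtain a stable engagement profile over time."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

frames = [{"happy": 0.8, "neutral": 0.2, "surprised": 0.0, "sad": 0.0, "angry": 0.0},
          {"happy": 0.1, "neutral": 0.8, "surprised": 0.0, "sad": 0.1, "angry": 0.0}]
profile = smooth([engagement(f) for f in frames])
```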

  17. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  19. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-looking …

  20. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    Science.gov (United States)

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554

  1. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Science.gov (United States)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  2. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Aimi S. Ghazali

    2018-06-01

The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest robots that are intended to influence human behavior should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.

  3. Peer-to-Peer Human-Robot Interaction for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2004-01-01

NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.

  4. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    Science.gov (United States)

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
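The outer loop described above ultimately solves an LQR problem. When the model is known, the discrete-time LQR gain can be obtained by iterating the Riccati equation; the paper's contribution is solving the same problem model-free via integral reinforcement learning. A model-based sketch with toy system matrices:

```python
# Sketch: solving a discrete-time LQR problem by Riccati iteration.
# The system matrices are a toy double-integrator stand-in for the
# prescribed impedance dynamics, not the paper's actual model.
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati equation to find the LQR gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# The closed loop x+ = (A - B K) x should be stable: spectral radius < 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

Integral reinforcement learning arrives at the same gain from measured trajectories, without access to A and B.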

  5. Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.

    Science.gov (United States)

Wang, Hongbo; Kosuge, K.

    2012-01-01

Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed in the experiments, and the reasons for this failure are explained; the latter controller was validated by the experimental results.
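The state estimation step can be illustrated with a simplified, linear stand-in for the paper's extended Kalman filter: a constant-velocity model with noisy position measurements. All matrices and noise levels below are illustrative:

```python
# Sketch: estimating a partner's state with a Kalman filter. The paper
# uses an extended KF on an inverted-pendulum body model; this linear
# constant-velocity version just illustrates the predict/update cycle.
import numpy as np

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise (illustrative)
R = np.array([[1e-2]])                  # measurement noise (illustrative)

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.0, 0.05, 0.11, 0.16, 0.21]:  # leader drifting forward
    x, P = kf_step(x, P, np.array([z]))
```

After a few steps the filter's velocity estimate turns positive and the covariance shrinks, which is what lets the follower anticipate rather than lag the leader.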

  6. Cooperative Robot Teams Applied to the Site Preparation Task

    International Nuclear Information System (INIS)

    Parker, LE

    2001-01-01

Prior to human missions to Mars, infrastructures on Mars that support human survival must be prepared. Robotic teams can assist in these advance preparations in a number of ways. This paper addresses one of these advance robotic team tasks -- the site preparation task -- by proposing a control structure that allows robot teams to cooperatively solve this aspect of infrastructure preparation. A key question in this context is determining how robots should make decisions on which aspect of the site preparation task to address throughout the mission, especially while operating in rough terrain. This paper describes a control approach to solving this problem that is based upon the ALLIANCE architecture, combined with performance-based rough terrain navigation that addresses path planning and control of mobile robots in rough terrain environments. We present the site preparation task and the proposed cooperative control approach, followed by some of the results of the initial testing of various aspects of the system.

  7. Robot Tracking of Human Subjects in Field Environments

    Science.gov (United States)

    Graham, Jeffrey; Shillcutt, Kimberly

    2003-01-01

    Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.

  8. Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

    OpenAIRE

    Renner, Patrick; Pfeiffer, Thies

    2014-01-01

For robots to solve complex tasks cooperatively in close interaction with humans, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work, the focus lies on shared-space interactions in a path planning task, and thus 3D gaze directions and hand movements are of particular interest …

  9. Motor contagion during human-human and human-robot interaction.

    Directory of Open Access Journals (Sweden)

    Ambra Bisio

Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  10. Motor contagion during human-human and human-robot interaction.

    Science.gov (United States)

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  11. The Creation of a Multi-Human, Multi-Robot Interactive Jam Session

    OpenAIRE

    Weinberg, Gil; Blosser, Brian; Mallikarjuna, Trishul; Raman, Aparna

    2009-01-01

This paper presents an interactive and improvisational jam session, including human players and two robotic musicians. The project was developed in an effort to create novel and inspiring music through human-robot collaboration. The jam session incorporates Shimon, a newly-developed socially-interactive robotic marimba player, and Haile, a perceptual robotic percussionist developed in previous work. The paper gives an overview of the musical perception modules, adaptive improvisation modes and …

  12. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  13. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

14. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    OpenAIRE

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship …

  15. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The directed research project (DRP) addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  16. Exploring cultural factors in human-robot interaction : A matter of personality?

    NARCIS (Netherlands)

    Weiss, Astrid; Evers, Vanessa

    2011-01-01

    This paper proposes an experimental study to investigate task-dependence and cultural-background dependence of the personality trait attribution on humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits towards intelligent

  17. Human motion behavior while interacting with an industrial robot.

    Science.gov (United States)

    Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus

    2012-01-01

Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which leads to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions. Avoiding such collisions is a central safety requirement. It can be realized with sundry sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying an emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times. Optimized path planning algorithms have to be developed to avoid that. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m to the robot when the encounter occurs. Approach to the workbenches is influenced by the robot in ten of 15 cases. Furthermore, an increase of participants' walking velocity with higher robot velocities is observed.

  18. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.

    Science.gov (United States)

    Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi

    2018-03-26

For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instructions from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent obtains intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement.
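The core idea, an intrinsic reward derived from an action-conditional predictive model, can be sketched with a simple count-based predictor standing in for the paper's learned model; surprising outcomes yield high reward, and the reward decays as the model improves:

```python
# Sketch: intrinsic reward from an action-conditional predictive model.
# The count-based predictor and the state/action names are illustrative
# stand-ins for the paper's learned model, not its actual framework.
from collections import defaultdict

class PredictiveModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, state, action, next_state):
        """Predicted probability of next_state given (state, action)."""
        c = self.counts[(state, action)]
        total = sum(c.values())
        return c[next_state] / total if total else 0.0

    def intrinsic_reward(self, state, action, next_state):
        # Reward = 1 - predicted probability of what actually happened,
        # so poorly predicted (surprising) outcomes are rewarded.
        return 1.0 - self.prob(state, action, next_state)

    def update(self, state, action, next_state):
        self.counts[(state, action)][next_state] += 1

m = PredictiveModel()
r_first = m.intrinsic_reward("greet", "wave", "smile")  # unmodeled: high
m.update("greet", "wave", "smile")
m.update("greet", "wave", "smile")
r_later = m.intrinsic_reward("greet", "wave", "smile")  # familiar: low
```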

  19. Simplified Human-Robot Interaction: Modeling and Evaluation

    Directory of Open Access Journals (Sweden)

    Balazs Daniel

    2013-10-01

In this paper, a novel concept of human-robot interaction (HRI) modeling is proposed. Including factors like trust in automation, situational awareness, expertise, and expectations, a new user experience framework is formed for industrial robots. Service Oriented Robot Operation, proposed in a previous paper, creates an abstraction level in HRI and is also included in the framework. This concept is evaluated with exhaustive tests. Results prove that significant improvement in task execution may be achieved and that the new system is more usable for operators with less robotics experience: personnel typical of small and medium enterprises (SMEs).

  20. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    Science.gov (United States)

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

Reinforcement learning (RL) enables a robot to learn an optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (rewards) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
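The learning setting can be sketched as a bandit-style update driven by binary feedback that is wrong about 10% of the time, mimicking the ~90% bACC ErrP classifier. The gesture set, the true mapping, and the learning rule below are all illustrative:

```python
# Sketch: learning a gesture-to-action mapping from noisy binary feedback,
# mimicking ErrP-based rewards detected with ~90% accuracy. The gestures,
# actions, true mapping, and learning rule are illustrative.
import random

random.seed(0)
GESTURES = ["point", "wave", "fist"]
ACTIONS = ["pick", "stop", "hand_over"]
TRUE_MAP = {"point": "pick", "wave": "stop", "fist": "hand_over"}
ACCURACY = 0.9  # probability the simulated ErrP classifier is correct

Q = {(g, a): 0.0 for g in GESTURES for a in ACTIONS}

for _ in range(2000):
    g = random.choice(GESTURES)
    # Epsilon-greedy action selection.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(g, act)])
    correct = (a == TRUE_MAP[g])
    # The classifier flips the feedback with probability 1 - ACCURACY.
    observed = correct if random.random() < ACCURACY else not correct
    reward = 1.0 if observed else -1.0
    Q[(g, a)] += 0.1 * (reward - Q[(g, a)])

learned = {g: max(ACTIONS, key=lambda act: Q[(g, act)]) for g in GESTURES}
```

Despite the 10% label noise, the averaging in the Q-update recovers the correct mapping, which is the same robustness property that makes single-trial ErrP detection usable as a reward signal.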

  1. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    Directory of Open Access Journals (Sweden)

    Felipe Cid

    2014-04-01

This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.
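Control via FACS amounts to translating action-unit intensities into joint targets. A hypothetical sketch (the AU-to-joint mapping below is invented for illustration and is not Muecas' actual kinematics):

```python
# Sketch: driving a robotic head from FACS action units. The AU list and
# joint mapping are hypothetical; a real controller would use the head's
# actual kinematics. Each expression is a dict of {AU: intensity}.

# Hypothetical mapping: action unit -> (joint, gain).
AU_TO_JOINT = {
    1: ("brow_inner", 1.0),    # inner brow raiser
    2: ("brow_outer", 1.0),    # outer brow raiser
    12: ("lip_corner", 0.8),   # lip corner puller (smile)
    26: ("jaw", 0.6),          # jaw drop
}

def expression_to_targets(aus):
    """Convert {AU: intensity in [0, 1]} into joint targets in [0, 1]."""
    targets = {}
    for au, intensity in aus.items():
        if au in AU_TO_JOINT:
            joint, gain = AU_TO_JOINT[au]
            targets[joint] = min(1.0, gain * intensity)
    return targets

# AU 6 (cheek raiser) has no mapped joint in this toy head, so only the
# lip-corner target is produced for a smile.
smile = expression_to_targets({6: 0.7, 12: 1.0})
```

Because the interface is expressed in AUs rather than joints, any system that emits FACS codes can drive the head without knowing its mechanics, which is the portability benefit the abstract describes.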

  2. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    Science.gov (United States)

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  3. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  4. Exploiting Child-Robot Aesthetic Interaction for a Social Robot

    OpenAIRE

    Lee, Jae-Joon; Kim, Dae-Won; Kang, Bo-Yeong

    2012-01-01

    A social robot interacts and communicates with humans by using the embodied knowledge gained from interactions with its social environment. In recent years, emotion has emerged as a popular concept for designing social robots. Several studies on social robots reported an increase in robot sociability through emotional imitative interactions between the robot and humans. In this paper, conventional emotional interactions are extended by exploiting the aesthetic theories that the sociability of ...

  5. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  6. Distributed control of multi-robot teams: Cooperative baton passing task

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1998-11-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes the implementation of this architecture on a team of physical mobile robots performing a cooperative baton passing task. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes during the task.
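
The adaptive action selection described above can be caricatured with a motivation variable per behavior set. This is a hypothetical abstraction for illustration, not the published ALLIANCE architecture: motivation grows with impatience while a task is undone and is suppressed when another robot is seen performing it.

```python
# Hypothetical sketch of ALLIANCE-style motivational action selection (an
# abstraction for illustration, not the published architecture): a behavior's
# motivation grows with impatience while its task is undone, is suppressed
# when another robot is seen doing that task, and activates past a threshold.
class Behavior:
    def __init__(self, task, impatience_rate, threshold=1.0):
        self.task = task
        self.impatience_rate = impatience_rate
        self.threshold = threshold
        self.motivation = 0.0
        self.active = False

    def update(self, others_doing_task):
        if others_doing_task and not self.active:
            self.motivation = 0.0  # another robot covers the task: stay quiet
        else:
            self.motivation += self.impatience_rate  # grow impatient
        self.active = self.motivation >= self.threshold
        return self.active
```

If the robot covering the task fails (so `others_doing_task` becomes false), impatience accumulates until this robot takes the task over, which is the kind of fault tolerance the abstract describes.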

  7. Semiotics and Human-Robot Interaction

    OpenAIRE

    Sequeira, Joao; Ribeiro, M.Isabel

    2007-01-01

    The social barriers that still constrain the use of robots in modern societies will tend to vanish with the sophistication increase of interaction strategies. Communication and interaction between people and robots occurring in a friendly manner and being accessible to everyone, independent of their skills in robotics issues, will certainly foster the breaking of barriers. Socializing behaviors, such as following people, are relatively easy to obtain with current state of the art robotics. Ho...

  8. Interaction with Soft Robotic Tentacles

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2018-01-01

    Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. In this tabletop demonstration it is possible to interact with two soft robotic platforms that have been used in human-robot interaction experiments (also accepted to HRI'18 as a Late...

  9. Abstract robots with an attitude : applying interpersonal relation models to human-robot interaction

    NARCIS (Netherlands)

    Hiah, J.L.; Beursgens, L.; Haex, R.; Perez Romero, L.M.; Teh, Y.; Bhomer, ten M.; Berkel, van R.E.A.; Barakova, E.I.

    2013-01-01

    This paper explores new possibilities for social interaction between a human user and a robot with an abstract shape. The social interaction takes place by simulating behaviors such as submissiveness and dominance and analyzing the corresponding human reactions. We used an object that has no

  10. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with the human, is one of the main keys toward realizing such an autonomous robot. In this work, we are suggesting a bioinspired vision system that helps to develop an advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the rules that the horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.
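
The horizontal-cell mechanism can be caricatured as one-dimensional lateral inhibition, an assumption-level simplification of the paper's dynamic edge detection: each receptor output is reduced by the average of its neighbors, so uniform regions cancel while intensity edges survive.

```python
# Lateral inhibition as performed by retinal horizontal cells (a simplified
# 1-D caricature, not the paper's exact algorithm): each photoreceptor value
# is reduced by the mean of its neighbors, so uniform regions go to ~0 and
# intensity edges stand out. Boundary pixels use their own value as neighbor.
def lateral_inhibition(row, k=1.0):
    out = []
    for i, v in enumerate(row):
        left = row[i - 1] if i > 0 else v
        right = row[i + 1] if i < len(row) - 1 else v
        out.append(v - k * (left + right) / 2.0)
    return out
```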

  11. A Social Cognitive Neuroscience Stance on Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Chaminade Thierry

    2011-12-01

    Full Text Available Robotic devices, thanks to the controlled variations in their appearance and behaviors, provide useful tools to test hypotheses pertaining to social interactions. These agents were used to investigate one theoretical framework, resonance, which is defined, at the behavioral and neural levels, as an overlap between first- and third-person representations of mental states such as motor intentions or emotions. Behaviorally, we found a reduced, but significant, resonance towards a humanoid robot displaying biological motion, compared to a human. Using neuroimaging, we reported that while perceptual processes in the human occipital and temporal lobe are more strongly engaged when perceiving a humanoid robot than a human action, activity in areas involved in motor resonance depends on attentional modulation for artificial agents more strongly than for human agents. Altogether, these studies using artificial agents offer valuable insights into the interaction of bottom-up and top-down processes in the perception of artificial agents.

  12. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    Science.gov (United States)

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool not with a person, and deflect users from personally and socially destructive behavior the soft bodies and surfaces could normally entice.

  13. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  14. Mobile app for human-interaction with sitter robots

    Science.gov (United States)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits to the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use, they should maintain awareness of surroundings, and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills, and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user health over time, and provides conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that

  15. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and the

  16. Human-Robot Site Survey and Sampling for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Bualat, Maria; Edwards, Laurence; Flueckiger, Lorenzo; Kunz, Clayton; Lee, Susan Y.; Park, Eric; To, Vinh; Utz, Hans; Ackner, Nir

    2006-01-01

    NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc.) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.

  17. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  18. Intelligent Interaction for Human-Friendly Service Robot in Smart House Environment

    Directory of Open Access Journals (Sweden)

    Z. Zenn Bien

    2008-01-01

    Full Text Available The smart house under consideration is a service-integrated complex system to assist older persons and/or people with disabilities. The primary goal of the system is to achieve independent living by various robotic devices and systems. Such a system is treated as a human-in-the-loop system in which human-robot interaction takes place intensely and frequently. Based on our experiences of having designed and implemented a smart house environment, called Intelligent Sweet Home (ISH), we present a framework of realizing human-friendly HRI (human-robot interaction) module with various effective techniques of computational intelligence. More specifically, we partition the robotic tasks of the HRI module into three groups in consideration of the level of specificity, fuzziness or uncertainty of the context of the system, and present an effective interaction method for each case. We first show a task planning algorithm and its architecture to deal with well-structured tasks autonomously by a simplified set of commands of the user instead of inconvenient manual operations. To provide the capability of interacting in a human-friendly way in a fuzzy context, it is proposed that the robot should make use of human bio-signals as input of the HRI module as shown in a hand gesture recognition system, called a soft remote control system. Finally we discuss a probabilistic fuzzy rule-based life-long learning system, equipped with intention reading capability by learning human behavioral patterns, which is introduced as a solution in uncertain and time-varying situations.

  19. An Experimental Study of Embodied Interaction and Human Perception of Social Presence for Interactive Robots in Public Settings

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith; Vlachos, Evgenios

    2018-01-01

    The human perception of cognitive robots as social depends on many factors, including those that do not necessarily pertain to a robot’s cognitive functioning. Experience Design offers a useful framework for evaluating when participants interact with robots as products or tools and when they regard them as social actors. This study describes a between-participants experiment conducted at a science museum, where visitors were invited to play a game of noughts and crosses with a Baxter robot. The goal is to foster meaningful interactions that promote engagement between the human and robot in a museum context. Using an Experience Design framework, we tested the robot in three different conditions to better understand which factors contribute to the perception of robots as social. The experiment also outlines best practices for conducting human-robot interaction research in museum exhibitions...

  20. Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration

    Science.gov (United States)

    Shah, Julie A.

    2015-01-01

    Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration. PMID:25790568
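
Team-fluency metrics like those reported above can be derived from simple per-timestep activity logs. The helper below is hypothetical (the log format and names are our own, not the study's instrumentation) and computes concurrent motion and idle-time fractions:

```python
# Hypothetical computation of two team-fluency metrics (concurrent motion and
# idle time) from per-timestep activity logs, where each entry is True when
# that agent is moving during the corresponding timestep.
def fluency_metrics(human_moving, robot_moving):
    n = len(human_moving)
    concurrent = sum(h and r for h, r in zip(human_moving, robot_moving))
    human_idle = sum(not h for h in human_moving)
    robot_idle = sum(not r for r in robot_moving)
    return {
        "concurrent_motion": concurrent / n,
        "human_idle": human_idle / n,
        "robot_idle": robot_idle / n,
    }
```

Comparing these fractions between the adaptive and shortest-path conditions is one way to quantify effects such as the 19.9% increase in concurrent motion reported in the abstract.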

  1. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    Science.gov (United States)

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept, an experiment was designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
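
The phase-synchronization idea can be sketched with a single Kuramoto-style oscillator pair. The gains, time step, and initial phases below are illustrative assumptions, not values from the paper: the robot's phase velocity is nudged toward the human's phase by a sinusoidal coupling term.

```python
import math

# Minimal coupled-oscillator sketch of phase synchronization (illustrative
# parameters, not the paper's controller): the robot's movement phase is
# pulled toward the human's by a Kuramoto-style coupling term.
def step(phi_robot, phi_human, omega, coupling, dt=0.01):
    dphi = omega + coupling * math.sin(phi_human - phi_robot)
    return phi_robot + dphi * dt

def simulate(steps=5000, omega=2 * math.pi, coupling=2.0, dt=0.01):
    phi_r, phi_h = 0.0, 1.5  # start out of phase
    for _ in range(steps):
        phi_h += omega * dt  # human keeps a steady rhythm
        phi_r = step(phi_r, phi_h, omega, coupling, dt)
    # wrapped phase error after convergence
    return abs(math.atan2(math.sin(phi_h - phi_r), math.cos(phi_h - phi_r)))
```

Because the phase-error dynamics reduce to ė = −K·sin(e), the difference contracts toward zero (up to a small discretization offset), which is the synchronization behavior the abstract evaluates with objective phase measures.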

  2. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    Directory of Open Access Journals (Sweden)

    Shanee Honig

    2018-06-01

    Full Text Available While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or on the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a

  3. Autonomous mobile robot teams

    Science.gov (United States)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
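
One plausible reading of the tropism idea (our illustration; the published architecture is richer) is a table of attraction/repulsion strengths that transforms what a robot currently senses directly into an action:

```python
# Illustrative (hypothetical) reading of a tropism-based architecture: each
# tropism is an (entity, strength, action) triple, and the robot performs the
# action of the strongest tropism triggered by its current sensory input.
TROPISMS = [
    ("object",   0.8, "gather"),
    ("predator", 1.0, "flee"),
    ("obstacle", 0.5, "avoid"),
]

def select_action(sensed_entities):
    triggered = [(s, a) for e, s, a in TROPISMS if e in sensed_entities]
    if not triggered:
        return "wander"  # default exploratory behavior
    return max(triggered)[1]  # strongest tropism wins
```

Because every robot carries its own tropism table, group behavior emerges without a central command base or leader, in line with the distributed design described above.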

  4. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in specific. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.
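
The "learning preference" condition can be caricatured in a few lines (the word list and confidence scores are hypothetical): instead of a random topic, the robot's social bid targets the word it is least certain of.

```python
import random

# Schematic language-game turn (our interpretation, not the study's code):
# the robot tracks a confidence score per word and, in the social-cues
# condition, directs the teacher toward the word it knows least well,
# thereby expressing a learning preference.
def next_query(confidence, social_cues=True):
    if social_cues:
        return min(confidence, key=confidence.get)  # ask about the weakest word
    return random.choice(list(confidence))          # no preference: random topic
```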

  5. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Joachim de Greeff

Full Text Available Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance rather than simply teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  6. Advancing the Strategic Messages Affecting Robot Trust Effect: The Dynamic of User- and Robot-Generated Content on Human-Robot Trust and Interaction Outcomes.

    Science.gov (United States)

    Liang, Yuhua Jake; Lee, Seungcheol Austin

    2016-09-01

Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, that trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people receive before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by harnessing the persuasive influence of user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment on HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.
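The moderation reported as Δr is simply the difference between the trust-outcome correlation in the UGC condition and in the control condition. A minimal sketch with made-up data (only the Δr formula reflects the study's measure; the numbers are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Standard Pearson product-moment correlation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def moderation_delta_r(trust_ugc, outcome_ugc, trust_ctl, outcome_ctl):
    """Moderation expressed as the condition-wise difference in the
    trust-outcome correlation (the paper's average Delta-r was +0.39)."""
    return pearson_r(trust_ugc, outcome_ugc) - pearson_r(trust_ctl, outcome_ctl)

# Hypothetical per-participant trust scores and outcome ratings:
delta = moderation_delta_r([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8],
                           [1, 2, 3, 4], [2, 1, 2, 1])
```

A positive `delta` indicates that trust tracks outcomes more tightly when participants saw UGC beforehand.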

  7. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects the sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, […] output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots…

  8. Physical human-robot interaction of an active pelvis orthosis: toward ergonomic assessment of wearable robots.

    Science.gov (United States)

    d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola

    2017-04-14

In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the experimented conditions; (ii) small (i.e., […] ergonomics assessment of wearable robots.
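The kinematic deviation in (i) is typically summarized as an RMS difference between joint-angle trajectories recorded with and without the device. A minimal sketch of such a metric (the function and the sample trajectories are illustrative, not the paper's exact analysis):

```python
from math import sqrt

def rms_deviation(angles_free, angles_worn):
    """RMS difference (in degrees) between a joint-angle trajectory recorded
    without the exoskeleton and the same gait phase recorded while wearing it.
    Both inputs are sampled at the same normalized gait-cycle instants."""
    assert len(angles_free) == len(angles_worn), "trajectories must be time-aligned"
    n = len(angles_free)
    return sqrt(sum((a - b) ** 2 for a, b in zip(angles_free, angles_worn)) / n)

# Hypothetical hip flexion-extension samples over one gait cycle (deg):
baseline = [10.0, 20.0, 30.0, 20.0]
with_apo = [11.0, 21.0, 29.0, 20.0]
deviation = rms_deviation(baseline, with_apo)  # small value -> negligible interference
```

A deviation of a degree or less, as in this toy example, would support the "negligible interference" conclusion.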

  9. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in specific. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children’s social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a “mental model” of the robot, tailoring the tutoring to the robot’s performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot’s bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance. PMID:26422143

  10. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created, including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for generating legitimate module combinations based on mission specifications and selecting the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need were factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or whether mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particular scrutiny was given to teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the […]
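One way to sketch the kind of cost-reliability comparison described above: treat a robot as a series system of modules (it works only if every module works) and a redundant team as a k-out-of-n system. The module reliabilities and team sizes below are hypothetical, not values from the study:

```python
from math import comb, prod

def robot_reliability(module_reliabilities):
    # Series system: the robot survives only if every module survives.
    return prod(module_reliabilities)

def team_reliability(p_robot, n_robots, k_required):
    # k-out-of-n redundancy: the mission succeeds if at least k_required
    # of n_robots independent robots remain operational (binomial sum).
    return sum(comb(n_robots, m) * p_robot**m * (1 - p_robot)**(n_robots - m)
               for m in range(k_required, n_robots + 1))

cheap = robot_reliability([0.95, 0.95, 0.90])    # lower-reliability modules
premium = robot_reliability([0.99, 0.99, 0.98])  # higher-reliability modules
# Three cheap robots, two needed, vs. two premium robots, both needed:
cheap_team = team_reliability(cheap, 3, 2)
premium_team = team_reliability(premium, 2, 2)
```

Comparing `cheap_team` against `premium_team` (and against their costs) is the essence of the redundancy-versus-quality tradeoff the abstract examines: adding a spare cheap robot lifts team reliability far above that of a single cheap robot.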

  11. L-ALLIANCE: a mechanism for adaptive action selection in heterogeneous multi-robot teams

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1995-11-01

In practical applications of robotics, it is usually quite difficult, if not impossible, for the system designer to fully predict the environmental states in which the robots will operate. The complexity of the problem is further increased when dealing with teams of robots which themselves may be incompletely known and characterized in advance. It is thus highly desirable for robot teams to be able to adapt their performance during the mission due to changes in the environment, or to changes in other robot team members. In previous work, we introduced a behavior-based mechanism -- the ALLIANCE architecture -- that facilitates the fault-tolerant cooperative control of multi-robot teams. However, this previous work did not address the issue of how to dynamically update the control parameters during a mission to adapt to ongoing changes in the environment or in the robot team, and to ensure the efficiency of the collective team actions. In this paper, we address this issue by proposing the L-ALLIANCE mechanism, which defines an automated method whereby robots can use knowledge learned from previous experience to continually improve their collective action selection when working on missions composed of loosely coupled, discrete subtasks. This ability to dynamically update robotic control parameters provides a number of distinct advantages: it alleviates the need for human tuning of control parameters, it facilitates the use of custom-designed multi-robot teams for any given application, it improves the efficiency of mission performance, and it allows robots to continually adapt their performance over time due to changes in the robot team and/or the environment. We describe the L-ALLIANCE mechanism, present the results of various alternative update strategies we investigated, present the formal model of the L-ALLIANCE mechanism, and present the results of a simple proof-of-concept implementation on a small team of heterogeneous mobile robots.
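The core of experience-based action selection can be sketched very simply: each robot keeps a running estimate of how long each subtask takes it and prefers the tasks it has proven fast at. This is only a minimal illustration in the spirit of L-ALLIANCE; the update rule, parameter names, and the omission of impatience/acquiescence dynamics are all simplifications, not Parker's formal model:

```python
class AdaptiveSelector:
    """Experience-based task selection sketch: maintain an exponential
    moving average of observed completion times per task and select the
    available task with the lowest expected time."""

    def __init__(self, tasks, alpha=0.3, initial_estimate=10.0):
        self.alpha = alpha  # learning rate for the moving average
        self.estimates = {t: initial_estimate for t in tasks}

    def record(self, task, observed_time):
        # Blend the new observation into the running estimate.
        old = self.estimates[task]
        self.estimates[task] = (1 - self.alpha) * old + self.alpha * observed_time

    def select(self, available):
        # Prefer the task this robot currently expects to finish fastest.
        return min(available, key=lambda t: self.estimates[t])

robot = AdaptiveSelector(["sweep", "mop"])
for _ in range(5):
    robot.record("sweep", 2.0)   # repeated fast completions of "sweep"
robot.record("mop", 12.0)        # one slow completion of "mop"
choice = robot.select(["sweep", "mop"])
```

Because the estimates keep updating during the mission, a robot that degrades (or a teammate that fails) shifts the selection automatically, without human re-tuning of control parameters.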

  12. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand.

    Science.gov (United States)

    Kent, Benjamin A; Engeberg, Erik D

    2014-11-07

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques.

  13. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand

    International Nuclear Information System (INIS)

    Kent, Benjamin A; Engeberg, Erik D

    2014-01-01

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques. (paper)

  14. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  15. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  16. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human-Robot Interaction.

    Science.gov (United States)

    Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J

    2018-02-26

The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, by distinguishing human limbs from other objects with similar shapes. Here, the integration of a tactile sensor in adaptive grippers is evaluated, measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A classifier was trained on 15 classes with 50 tactile images each, including human body parts and common environment objects, using semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network is proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers have a higher number of non-null tactels (up to 37% more), with a lower mean of pressure values (up to 72% less), than a rigid sensor, giving a softer grip, which is needed in physical human-robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though the human/non-human classification had better results (98.78%) with a rigid sensor.
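The "two-level output" idea can be illustrated without any deep-learning machinery: given per-class scores from the object-type classifier, the human/non-human decision can be obtained by pooling probability mass over the human-part classes. The class list here is a hypothetical subset of the paper's 15 classes, and the pooling rule is one plausible reading of a two-level head, not the paper's exact network:

```python
import math

CLASSES = ["hand", "arm", "leg", "bottle", "pen", "phone"]  # illustrative subset
HUMAN_CLASSES = {"hand", "arm", "leg"}

def softmax(logits):
    # Numerically stable softmax over raw class scores.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def two_level_output(logits):
    """Level 1: object-type prediction from the class distribution.
    Level 2: human/non-human score by summing probability over
    human-part classes."""
    probs = softmax(logits)
    object_type = CLASSES[probs.index(max(probs))]
    p_human = sum(p for c, p in zip(CLASSES, probs) if c in HUMAN_CLASSES)
    return object_type, p_human

# Logits strongly favoring "hand" (e.g. from a tactile image of a palm):
obj, p_human = two_level_output([4.0, 1.0, 0.5, 0.2, 0.1, 0.0])
```

Deriving the binary decision from the same distribution keeps the two outputs consistent: an image classified as "hand" can never simultaneously be scored as confidently non-human.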

  17. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.

  18. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-11-01

Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.

  19. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team

    Directory of Open Access Journals (Sweden)

    Pedro Deusdado

    2016-09-01

Full Text Available This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus’ estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling.

  20. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team

    Science.gov (United States)

    Deusdado, Pedro; Guedes, Magno; Silva, André; Marques, Francisco; Pinto, Eduardo; Rodrigues, Paulo; Lourenço, André; Mendonça, Ricardo; Santana, Pedro; Corisco, José; Almeida, Susana Marta; Portugal, Luís; Caldeira, Raquel; Barata, José; Flores, Luis

    2016-01-01

    This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats, there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus’ estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling. PMID:27618060

  1. Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.

    Science.gov (United States)

    Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos

    2017-06-05

    Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required for both safety and increased control performance of the device with a more comfortable experience for the user. In order to provide an efficient interaction feedback between the user and rehabilitation device, high performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated for measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest having also movable parts in order to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.

  2. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

    2015-01-01

Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that…

  3. The ethics of human-robot relationships

    NARCIS (Netherlands)

    de Graaf, M.M.A.

    2015-01-01

Currently, human-robot interactions are constructed according to the rules of human-human interactions, inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this […]

  4. Estimation of Physical Human-Robot Interaction Using Cost-Effective Pneumatic Padding

    Directory of Open Access Journals (Sweden)

    André Wilkening

    2016-08-01

Full Text Available The idea of using a cost-effective pneumatic padding for sensing physical interaction between a user and wearable rehabilitation robots is not new, but until now there has not been any practically relevant realization. In this paper, we present a novel method to estimate physical human-robot interaction using a pneumatic padding based on artificial neural networks (ANNs). This estimation can serve as a rough indicator of the forces/torques applied by the user and can be used for visual feedback about the user's participation or as additional information for interaction controllers. Unlike common, mostly very expensive 6-axis force/torque sensors (FTS), the proposed sensor system can be easily integrated in the design of physical human-robot interfaces of rehabilitation robots and adapts itself to the shape of the individual patient's extremity through pressure changes in the pneumatic chambers, in order to provide safe physical interaction with high user comfort. This paper describes a concept of using ANNs to estimate interaction forces/torques based on pressure variations of eight customized air-pad chambers. The ANNs were trained once offline using signals of a high-precision FTS, which is also used as the reference sensor for experimental validation. Experiments with three different subjects confirm the functionality of the concept and the estimation algorithm.
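The pressure-to-force estimation described above amounts to regressing a force value from eight pressure channels with a small network trained offline against FTS labels. A minimal sketch of that idea follows; the synthetic data, network size, and training hyperparameters are all hypothetical stand-ins for the FTS-labelled recordings and architecture the paper actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 air-pad pressure channels -> 1 interaction force.
# (Hypothetical mapping; in the paper the targets come from a reference FTS.)
X = rng.uniform(0.0, 1.0, size=(500, 8))
y = X @ rng.normal(size=(8, 1)) + 0.1 * np.tanh(X.sum(axis=1, keepdims=True))

# One-hidden-layer MLP trained offline by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # estimated force
    err = pred - y
    # Backpropagate the mean-squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once trained, the forward pass (two matrix products and a `tanh`) is cheap enough to run inside a real-time interaction controller, which is what makes the offline-training / online-inference split attractive here.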

  5. In our own image? Emotional and neural processing differences when observing human-human vs human-robot interactions.

    Science.gov (United States)

    Wang, Yin; Quadflieg, Susanne

    2015-11-01

    Notwithstanding the significant role that human-robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human-human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal-parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. © The Author (2015). Published by Oxford University Press.

  6. A meta-analysis of factors affecting trust in human-robot interaction.

    Science.gov (United States)

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
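Pooling correlational effect sizes like the r = +0.26 above is conventionally done by Fisher z-transforming each study's r, taking a sample-size-weighted mean, and transforming back. A minimal fixed-effect sketch (the study list is hypothetical; only the pooling procedure is standard meta-analytic practice):

```python
from math import atanh, tanh

def combined_correlation(effects):
    """Fixed-effect pooled correlation from (r, n) pairs: Fisher
    z-transform each r, weight by n - 3 (the inverse variance of z),
    average, then back-transform."""
    num = sum((n - 3) * atanh(r) for r, n in effects)
    den = sum(n - 3 for _, n in effects)
    return tanh(num / den)

# Hypothetical trust-HRI study results as (correlation, sample size):
studies = [(0.31, 40), (0.18, 60), (0.29, 25)]
pooled_r = combined_correlation(studies)
```

Because `tanh`/`atanh` are monotone, the pooled value always lies between the smallest and largest study-level correlations, with larger samples pulling it harder.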

  7. Perceptions of a Soft Robotic Tentacle in Interaction

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2018-01-01

    Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. This video documents a platform created to explore human perceptions of soft robots in interaction. The video presents select footage from an interaction experiment conducted...

  8. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Abdulaziz Abubshait

    2017-08-01

Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents: mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human–robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human–robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm, and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human–robot interaction. The results show that both appearance and behavior affect human–robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human–robot interaction are discussed.

  9. Affect in Human-Robot Interaction

    Science.gov (United States)

    2014-01-01

Only fragments of this record's abstract survive, including citations to Werry, I., Rae, J., Dickerson, P., Stribling, P., & Ogden, B. (2002), Robotic Playmates: Analysing Interactive Competencies of Children with Autism; Moravec, H. (1988), Mind Children: The Future of...; and the framing questions: and if so when and where? What approaches, theories, representations, and experimental methods inform affective HRI research?

  10. Turn-Taking Based on Information Flow for Fluent Human-Robot Interaction

    OpenAIRE

    Thomaz, Andrea L.; Chao, Crystal

    2011-01-01

    Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction, generic to context or domain. We propose a model of turn-taking, and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous ...

  11. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  12. Ghost-in-the-Machine reveals human social signals for human–robot interaction

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P.

    2015-01-01

    We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience. PMID:26582998

  13. How do walkers behave when crossing the way of a mobile robot that replicates human interaction rules?

    Science.gov (United States)

    Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien

    2018-02-01

    Previous studies showed the existence of implicit interaction rules shared by human walkers when crossing each other. Especially, each walker contributes to the collision avoidance task and the crossing order, as set at the beginning, is preserved along the interaction. This order determines the adaptation strategy: the first arrived increases his/her advance by slightly accelerating and changing his/her heading, whereas the second one slows down and moves in the opposite direction. In this study, we analyzed the behavior of human walkers crossing the trajectory of a mobile robot that was programmed to reproduce this human avoidance strategy. In contrast with a previous study, which showed that humans mostly prefer to give the way to a non-reactive robot, we observed similar behaviors between human-human avoidance and human-robot avoidance when the robot replicates the human interaction rules. We discuss this result in relation with the importance of controlling robots in a human-like way in order to ease their cohabitation with humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  15. Pragmatic Frames for Teaching and Learning in Human-Robot Interaction: Review and Challenges.

    Science.gov (United States)

    Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J; Oudeyer, Pierre-Yves

    2016-01-01

One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots but are often used to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used in practice, but not always explicitly, basic elements of the pragmatic frames machinery. However, we also show that pragmatic frames have so far been used in a very restricted way as compared to how they are used in human-human interaction and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on the future research direction describing the relevant key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching.

  16. Human-Automation Allocations for Current Robotic Space Operations

    Science.gov (United States)

    Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.

    2018-01-01

Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance due to ineffective user interfaces, system designs and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI is the gap related to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocations determine the human-system performance as they identify the functions and performance levels required by the automation/robotic system, and in turn, what work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account each of the human, automation, and robotic systems capabilities and limitations. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will significantly change in future exploration missions, particularly as crew becomes more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocations, we must first benchmark the allocations and allocation methods that have been used. We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) To

  17. Generating Self-Reliant Teams of Autonomous Cooperating Robots: Desired design Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1999-05-01

    The difficulties in designing a cooperative team are significant. Several of the key questions that must be resolved when designing a cooperative control architecture include: How do we formulate, describe, decompose, and allocate problems among a group of intelligent agents? How do we enable agents to communicate and interact? How do we ensure that agents act coherently in their actions? How do we allow agents to recognize and reconcile conflicts? However, in addition to these key issues, the software architecture must be designed to enable multi-robot teams to be robust, reliable, and flexible. Without these capabilities, the resulting robot team will not be able to successfully deal with the dynamic and uncertain nature of the real world. In this extended abstract, we first describe these desired capabilities. We then briefly describe the ALLIANCE software architecture that we have previously developed for multi-robot cooperation. We then briefly analyze the ALLIANCE architecture in terms of the desired design qualities identified.

  18. Social robots from a human perspective

    CERN Document Server

    Taipale, Sakari; Sapio, Bartolomeo; Lugano, Giuseppe; Fortunati, Leopoldina

    2015-01-01

Addressing several issues that explore the human side of social robots, this book asks from a social and human scientific perspective what a social robot is and how we might come to think about social robots in the different areas of everyday life. Organized around three sections that deal with Perceptions and Attitudes to Social Robots, Human Interaction with Social Robots, and Social Robots in Everyday Life, the book explores the idea that even if technical problems related to robot technologies can be continuously solved from a machine perspective, what kind of machine do we want to have and use in our daily lives? Experiences from previously widely adopted technologies, such as smartphones, hint that robot technologies could potentially be absorbed into the everyday lives of humans in such a way that it is the human that determines the human-machine interaction. In a similar way to how today’s information and communication technologies were first designed for professional/industrial use, but which soon wer...

  19. Interacting With Robots to Investigate the Bases of Social Interaction.

    Science.gov (United States)

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena as mutual adaptation, synchronization, and anticipation, which cut drastically the delays in the interaction and the need of complex verbal instructions and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible, because the same neural system supporting action execution is responsible of the understanding and the anticipation of the observed action of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult-if not impossible-to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to his actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction. We discuss this proposal with examples from our

  20. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  1. Applications of artificial intelligence in safe human-robot interactions.

    Science.gov (United States)

    Najmaei, Nima; Kermani, Mehrdad R

    2011-04-01

The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information about the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interference between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
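The paper's SOM configuration is not given in the abstract, so the following is a generic minimal self-organizing map sketch; the function name, grid size, and decay schedules are illustrative, and the toy 2-D inputs merely stand in for footprint-derived features.

```python
import numpy as np

def train_som(data, grid=(5, 5), dim=2, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: a grid of weight vectors is pulled
    toward inputs, with the best-matching unit (BMU) and its grid
    neighbours updated most strongly."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], dim))          # weight grid
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]          # unit coordinates
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5      # decaying neighbourhood
        for x in data:
            d = np.linalg.norm(w - x, axis=2)        # distance to every unit
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # BMU
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)         # pull units toward input
    return w

# Toy usage: units spread over two clusters of 2-D "footprint" points.
pts = np.array([[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]])
w = train_som(pts)
print(w.shape)  # (5, 5, 2)
```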

  2. Toward a unified method for analysing and teaching Human Robot Interaction

    DEFF Research Database (Denmark)

    Dinesen, Jens Vilhelm

    , drawing on key theories and methods from both communications- and interaction-theory. The aim is to provide a single unified method for analysing interaction, through means of video analysis and then applying theories, with proven mutual compatibility, to reach a desired granularity of study.......This abstract aims to present key aspect of a future paper, which outlines the ongoing development ofa unified method for analysing and teaching Human-Robot-Interaction. The paper will propose a novel method for analysing both HRI, interaction with other forms of technologies and fellow humans...

  3. Cognitive Emotional Regulation Model in Human-Robot Interaction

    OpenAIRE

    Liu, Xin; Xie, Lun; Liu, Anqi; Li, Dan

    2015-01-01

This paper integrated the Gross cognitive process into the HMM (hidden Markov model) emotional regulation method and implemented human-robot emotional interaction with facial expressions and behaviors. Here, energy was the psychological driving force of emotional transition in the cognitive emotional model. The input facial expression was translated into external energy by expression-emotion mapping. The robot’s next emotional state was determined by the cognitive energy (the stimulus after cognition...
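The energy-modulated transition idea described above can be sketched schematically; the states, transition matrix, and exponential biasing below are hypothetical illustrations, not the paper's model.

```python
import numpy as np

# Hypothetical emotional states and base Markov transition matrix
# (rows: current state, columns: next state); numbers are illustrative.
STATES = ["happy", "neutral", "sad"]
BASE_T = np.array([[0.6, 0.3, 0.1],
                   [0.3, 0.4, 0.3],
                   [0.1, 0.3, 0.6]])

def next_emotion(current, stimulus_energy):
    """stimulus_energy: a vector over STATES, e.g. obtained from an
    expression-emotion mapping of the perceived human face. The energy
    biases the base transition probabilities toward matching emotions."""
    i = STATES.index(current)
    scores = BASE_T[i] * np.exp(stimulus_energy)
    scores /= scores.sum()                 # renormalize to a distribution
    return STATES[int(np.argmax(scores))]  # most likely next state

# A strongly "happy" stimulus pulls a sad robot out of its sad state.
print(next_emotion("sad", np.array([2.0, 0.0, 0.0])))  # happy
```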

  4. Multi-robot team design for real-world applications

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1996-10-01

Many real-world applications are in dynamic environments requiring capabilities distributed in functionality, space, or time, and therefore often require teams of robots to work together. While much research has been done in recent years, current robotics technology is still far from achieving many of the real world applications. Two primary reasons for this technology gap are that (1) previous work has not adequately addressed the issues of fault tolerance and adaptivity in multi-robot teams, and (2) existing robotics research is often geared at specific applications and is not easily generalized to different, but related, applications. This paper addresses these issues by first describing the design issues of key importance in these real-world cooperative robotics applications: fault tolerance, reliability, adaptivity, and coherence. We then present a general architecture addressing these design issues (called ALLIANCE) that facilitates multi-robot cooperation of small- to medium-sized teams in dynamic environments, performing missions composed of loosely coupled subtasks. We illustrate an implementation of ALLIANCE in a real-world application, called Bounding Overwatch, and then discuss how this architecture addresses our key design issues.

  5. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    Science.gov (United States)

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
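The recurrent network model referenced above is a reservoir-computing approach; below is a generic minimal echo state network with a ridge-regression readout. The toy task and all parameters are illustrative, not the paper's architecture.

```python
import numpy as np

# Fixed random reservoir: only the linear readout W_out is trained.
rng = np.random.default_rng(0)
N, n_in = 100, 3
W_in = rng.uniform(-0.5, 0.5, (N, n_in))              # input weights
W = rng.uniform(-0.5, 0.5, (N, N))                    # recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()         # spectral radius < 1

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of input vectors."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)                 # reservoir update
        states.append(x.copy())
    return np.array(states)

# Toy target: a linear mix of the current and previous input channels,
# which the reservoir's fading memory can represent.
T = 200
U = rng.uniform(-1, 1, (T, n_in))
y = 0.5 * U[:, 0] + np.roll(U[:, 1], 1)
X = run_reservoir(U)

# Ridge-regression readout: solve (X'X + rI) W_out = X'y.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
print(round(float(np.mean((pred[10:] - y[10:]) ** 2)), 4))  # small train MSE
```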

  6. Anthropomorphism in Human-Robot Co-evolution.

    Science.gov (United States)

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

Social robotics entertains a particular relationship with anthropomorphism, which it sees neither as a cognitive error nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents - social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots "social presence" and "social behaviors" that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of 'applied anthropomorphism' as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a "cheating" technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn "anthropomorphism-based" social robots. To address the relevant ethical issues, we promote a critical experimentally based ethical approach to social robotics, "synthetic ethics," which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

  7. Gestalt Processing in Human-Robot Interaction: A Novel Account for Autism Research

    Directory of Open Access Journals (Sweden)

    Maya Dimitrova

    2015-12-01

The paper presents a novel analysis aimed at showing that education is possible through robotic enhancement of Gestalt processing in children with autism, in a way not achievable by alternative educational methods such as demonstration and instruction provided solely by human tutors. The paper underlines the conceptualization of cognitive processing of holistic representations, traditionally named Gestalt structures in psychology, emerging in the process of human-robot interaction in educational settings. Two cognitive processes are proposed in the present study - bounding and unfolding - and their role in Gestalt emergence is outlined. The proposed theoretical approach explains novel findings on autistic perception and gives guidelines for the design of robot assistants for the rehabilitation process.

  8. I Show You How I Like You: Emotional Human-Robot Interaction through Facial Expression and Tactile Stimulation

    DEFF Research Database (Denmark)

    Canamero, Dolores; Fredslund, Jacob

    2001-01-01

We report work on a LEGO robot that displays different emotional expressions in response to physical stimulation, for the purpose of social interaction with humans. This is a first step toward our longer-term goal of exploring believable emotional exchanges to achieve plausible interaction with a simple robot. Drawing inspiration from theories of human basic emotions, we implemented several prototypical expressions in the robot's caricatured face and conducted experiments to assess the recognizability of these expressions.

  9. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Directory of Open Access Journals (Sweden)

    Philipp Beckerle

    2017-05-01

Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  10. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473

  11. Hands Off: Mentoring a Student-Led Robotics Team

    Science.gov (United States)

    Dolenc, Nathan R.; Mitchell, Claire E.; Tai, Robert H.

    2016-01-01

    Mentors play important roles in determining the working environment of out-of-school-time clubs. On robotics teams, they provide guidance in hopes that their protégés progress through an engineering process. This study examined how mentors on one robotics team who defined their mentoring style as "let the students do the work" navigated…

  12. The relation between people's attitudes and anxiety towards robots in human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2013-01-01

This paper examines the relation between an interaction with a robot and people's attitudes and emotions towards robots. In our study, participants had an acquaintance talk with a social robot, and both their general attitude and anxiety towards social robots were measured before and after the

  13. Teaching Human Poses Interactively to a Social Robot

    Science.gov (United States)

    Gonzalez-Pacheco, Victor; Malfaz, Maria; Fernandez, Fernando; Salichs, Miguel A.

    2013-01-01

    The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue- Depth) -based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics. PMID:24048336

  14. Teaching Human Poses Interactively to a Social Robot

    Directory of Open Access Journals (Sweden)

    Miguel A. Salichs

    2013-09-01

Full Text Available The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher’s explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red, Green, Blue, Depth)-based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics.
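The teaching loop described in the two records above can be illustrated with a minimal sketch: pose features (standing in for skeleton coordinates from the RGB-D sensor) are paired with a spoken label (standing in for ASR output), and new poses are classified by nearest neighbour. The class name, feature values, and labels are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of learning poses from paired vision + speech examples.
import math

class PoseLearner:
    def __init__(self):
        self.examples = []  # (feature_vector, label) pairs from the teacher

    def teach(self, features, spoken_label):
        """Store one example given during the teaching interaction."""
        self.examples.append((features, spoken_label))

    def recognize(self, features):
        """Return the label of the closest stored example."""
        return min(self.examples,
                   key=lambda ex: math.dist(ex[0], features))[1]

learner = PoseLearner()
learner.teach([0.0, 1.0], "arms up")     # toy 2-D feature vectors
learner.teach([0.0, -1.0], "arms down")
print(learner.recognize([0.1, 0.9]))      # arms up
```

A handful of such examples is enough for a nearest-neighbour rule, which is consistent with the paper's observation that few training examples already give high accuracy.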

  15. Adaptive training algorithm for robot-assisted upper-arm rehabilitation, applicable to individualised and therapeutic human-robot interaction.

    Science.gov (United States)

    Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin

    2013-09-28

Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable of not only offering more frequent and more accessible therapies but also providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for having more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment comprising a training phase followed by the actual-performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with help from a real object when compared to just a virtual object. The 'reaching away' movements were longer to execute when compared to the 'returning towards' movements irrespective of the
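A lead-lag adaptation rule of the kind the abstract describes can be sketched as follows: the robot measures whether the user is ahead of or behind the reference trajectory and shortens or lengthens the next movement accordingly. The gain, clamping bounds, and function names are illustrative assumptions, not GENTLE/A's actual parameters.

```python
# Hypothetical lead-lag based duration adaptation, inspired by the GENTLE/A
# description above. All constants are made-up for illustration.

def lead_lag(human_pos, reference_pos):
    """Positive when the human leads the reference trajectory."""
    return human_pos - reference_pos

def adapt_duration(current_duration, human_pos, reference_pos, gain=0.5,
                   min_duration=1.0, max_duration=10.0):
    """Shorten the next movement when the user leads, lengthen it when they lag."""
    error = lead_lag(human_pos, reference_pos)
    new_duration = current_duration - gain * error
    return max(min_duration, min(max_duration, new_duration))

# A leading user (ahead of the reference) gets a slightly faster next movement.
print(adapt_duration(5.0, human_pos=0.8, reference_pos=0.6))
```

The clamping keeps adaptation within safe bounds, mirroring the requirement that therapeutic adaptation never drives the exercise outside a clinically acceptable range.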

  16. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  17. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  18. Natural Tasking of Robots Based on Human Interaction Cues (CD-ROM)

    National Research Council Canada - National Science Library

    Brooks, Rodney A

    2005-01-01

    ...: 1 CD-ROM; 4 3/4 in.; 207 MB. ABSTRACT: We proposed developing the perceptual and intellectual abilities of robots so that in the field, war-fighters can interact with them in the same natural ways as they do with their human cohorts...

  19. Hierarchical Motion Control for a Team of Humanoid Soccer Robots

    Directory of Open Access Journals (Sweden)

    Seung-Joon Yi

    2016-02-01

Full Text Available Robot soccer has become an effective benchmarking problem for robotics research as it requires many aspects of robotics including perception, self-localization, motion planning and distributed coordination to work in uncertain and adversarial environments. Especially with humanoid robots that lack inherent stability, a capable and robust motion controller is crucial for generating walking and kicking motions without losing balance. In this paper, we describe the details of a motion controller to control a team of humanoid soccer robots, which consists of a hierarchy of controllers with different time frames and abstraction levels. A low-level controller governs the real-time control of each joint angle, either using target joint angles or target endpoint transforms. A mid-level controller handles bipedal locomotion and balancing of the robot. A high-level controller decides the long-term behavior of the robot, and finally the team-level controller coordinates the behavior of a group of robots by means of asynchronous communication between the robots. The suggested motion system has been successfully used by many humanoid robot teams at the RoboCup international robot soccer competitions, which has earned us five championships in a row.
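The layered structure the abstract describes can be sketched as three cooperating classes, where each higher layer sets targets for the one below. The class names, the toy "gait" decision, and the joint-target values are invented for illustration; the paper's real controllers run at different rates and handle full humanoid kinematics.

```python
# Hypothetical sketch of a hierarchical motion-control stack, loosely modeled
# on the low/mid/high-level split described above.

class JointController:          # low level: tracks a target joint angle
    def __init__(self):
        self.target = 0.0
    def step(self):
        return f"track joint target {self.target:.2f}"

class LocomotionController:     # mid level: turns a gait into joint targets
    def __init__(self, joints):
        self.joints = joints
    def step(self, gait):
        self.joints.target = 0.5 if gait == "walk" else 0.0
        return self.joints.step()

class BehaviorController:       # high level: picks the long-term behavior
    def decide(self, ball_visible):
        return "walk" if ball_visible else "stand"

joints = JointController()
locomotion = LocomotionController(joints)
behavior = BehaviorController()

gait = behavior.decide(ball_visible=True)
print(locomotion.step(gait))  # track joint target 0.50
```

Keeping each layer's interface narrow (a gait string here, joint targets below it) is what lets the same low-level controller serve walking, kicking, and balancing behaviors.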

  20. Physical Human Robot Interaction for a Wall Mounting Robot - External Force Estimation

    DEFF Research Database (Denmark)

    Alonso García, Alejandro; Villarmarzo Arruñada, Noelia; Pedersen, Rasmus

    2018-01-01

    The use of collaborative robots enhances human capabilities, leading to better working conditions and increased productivity. In building construction, such robots are needed, among other tasks, to install large glass panels, where the robot takes care of the heavy lifting part of the job while...

  1. Interactions between Humans and Robots

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2013-01-01

    ), and explains the relationships and dependencies that exist between them. The four main factors that define the properties of a robot, and therefore the interaction, are distributed in two dimensions: (1) Intelligence (Control - Autonomy), and (2) Perspective (Tool - Medium). Based on these factors, we...

  2. Anthropomorphic Robot Design and User Interaction Associated with Motion

    Science.gov (United States)

    Ellis, Stephen R.

    2016-01-01

Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction. Their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over

  3. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human and robot occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path planning skills.

  4. A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.

    Science.gov (United States)

    Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash

    2017-07-01

This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve enhanced physical human-robot interaction performance and deliver an appropriate level of assistance to the human operator.
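The human-in-the-loop idea above can be caricatured in a few lines: the robot scales its assistance with an online estimate of how well the human is positioned for the task (here a single scalar "manipulability" stand-in between 0 and 1). The gain bounds and the linear mapping are assumptions made for illustration; the paper's controller uses richer sensorimotor information.

```python
# Hypothetical assistance scaling based on a scalar human-capacity estimate.

def assistance_gain(human_manipulability, k_max=1.0, k_min=0.2):
    """More robot assistance when the human is poorly positioned for the task."""
    m = max(0.0, min(1.0, human_manipulability))  # clamp to [0, 1]
    return k_min + (k_max - k_min) * (1.0 - m)

print(assistance_gain(0.9))  # near k_min: human is well positioned
print(assistance_gain(0.1))  # near k_max: robot takes over more of the load
```

Such a complementary split, where the robot contributes most exactly when the human contributes least, is the behavior the experiments in the paper are designed to elicit.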

  5. The Age of Human-Robot Collaboration: Deep Sea Exploration

    KAUST Repository

    Khatib, Oussama

    2018-01-18

The promise of oceanic discovery has intrigued scientists and explorers for centuries, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. Reaching these depths is imperative since factors such as pollution and deep-sea trawling increasingly threaten ecology and archaeological sites. These needs demand a system deploying human-level expertise at the depths, and yet remotely operated vehicles (ROVs) are inadequate for the task. To meet the challenge of dexterous operation at oceanic depths, in collaboration with KAUST's Red Sea Research Center and MEKA Robotics, Oussama Khatib and the team developed Ocean One, a bimanual humanoid robot that brings immediate and intuitive haptic interaction to oceanic environments. In this lecture introducing Ocean One, the haptic robotic avatar, Oussama Khatib will talk about how, teaming with the French Ministry of Culture's Underwater Archaeology Research Department, they deployed Ocean One in an expedition in the Mediterranean to Louis XIV's flagship Lune, lying off the coast of Toulon at ninety-one meters. In the spring of 2016, Ocean One became the first robotic avatar to embody a human's presence at the seabed. Ocean One's journey in the Mediterranean marks a new level of marine exploration: much as past technological innovations have impacted society, Ocean One's ability to distance humans physically from dangerous and unreachable work spaces while connecting their skills, intuition, and experience to the task promises to fundamentally alter remote work. Robotic avatars will search for and acquire materials, support equipment, build infrastructure, and perform disaster prevention and recovery operations - be it deep in oceans and mines, at mountain tops, or in space.

  6. Ambulatory movements, team dynamics and interactions during robot-assisted surgery.

    Science.gov (United States)

    Ahmad, Nabeeha; Hussein, Ahmed A; Cavuoto, Lora; Sharif, Mohamed; Allers, Jenna C; Hinata, Nobuyuki; Ahmad, Basel; Kozlowski, Justen D; Hashmi, Zishan; Bisantz, Ann; Guru, Khurshid A

    2016-07-01

To analyse ambulatory movements and team dynamics during robot-assisted surgery (RAS), and to investigate whether congestion of the physical space associated with robotic technology led to workflow challenges or predisposed to errors and adverse events. With institutional review board approval, we retrospectively reviewed 10 recorded robot-assisted radical prostatectomies in a single operating room (OR). The OR was divided into eight zones, and all movements were tracked and described in terms of start and end zones, duration, personnel and purpose. Movements were further classified into avoidable (can be eliminated/improved) and unavoidable (necessary for completion of the procedure). The mean operating time was 166 min, of which ambulation constituted 27 min (16%). A total of 2 896 ambulatory movements were identified (mean: 290 ambulatory movements/procedure). Most of the movements were procedure-related (31%), and were performed by the circulating nurse. We identified 11 main pathways in the OR; the heaviest traffic was between the circulating nurse zone, transit zone and supply-1 zone. A total of 50% of ambulatory movements were found to be avoidable. More than half of the movements during RAS can be eliminated with an improved OR setting. More studies are needed to design an evidence-based OR layout that enhances access, workflow and patient safety.

  7. Affective and behavioral responses to robot-initiated social touch : Towards understanding the opportunities and limitations of physical contact in human-robot interaction

    NARCIS (Netherlands)

    Willemse, C.J.A.M.; Toet, A.; Erp, J.B.F. van

    2017-01-01

    Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human–robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for

  8. Learning compliant manipulation through kinesthetic and tactile human-robot interaction.

    Science.gov (United States)

    Kronander, Klas; Billard, Aude

    2014-01-01

Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the need of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7 DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
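The teaching interface above can be illustrated with a toy update rule: whenever the teacher physically perturbs the robot during execution, the learned stiffness around that phase of the task is reduced, so the robot ends up compliant exactly where compliance was demonstrated. The decay law, constants, and variable names are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical stiffness adaptation from sensed teacher perturbations.

def update_stiffness(stiffness, perturbation, decay=0.5, k_min=50.0):
    """Reduce stiffness in proportion to the sensed perturbation magnitude."""
    new_k = stiffness - decay * abs(perturbation) * stiffness
    return max(k_min, new_k)  # never drop below a safe minimum stiffness

k = 500.0
for wrench in [0.0, 0.8, 0.8, 0.0]:  # teacher pushes in the middle of the task
    k = update_stiffness(k, wrench)
print(round(k, 1))  # stiffness lowered only where perturbations occurred
```

The lower bound `k_min` reflects a common safety choice in variable-impedance control: the robot may become compliant, but never entirely limp.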

  9. You Can Leave Your Head On: Attention Management and Turn-Taking in Multi-party Interaction with a Virtual Human/Robot Duo

    NARCIS (Netherlands)

    Linssen, Jeroen; Berkhoff, Meike; Bode, Max; Rens, Eduard; Theune, Mariet; Wiltenburg, Daan; Beskow, Jonas; Peters, Christopher; Castellano, Ginevra; O'Sullivan, Carol; Leite, Iolanda; Kopp, Stefan

    In two small studies, we investigated how a virtual human/ robot duo can complement each other in joint interaction with one or more users. The robot takes care of turn management while the virtual human draws attention to the robot. Our results show that having the virtual human address the robot,

  10. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  11. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
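The core idea of the two records above, that a recurrent state carrying the interaction context lets one network act as recognizer or generator depending only on that state, can be shown with a deliberately tiny Elman-style update. The weights and the 0/1 input encoding are toy values, not the paper's trained network.

```python
# Toy recurrent update: new state mixes the input with the previous state,
# so context persists after the input ends. All values are illustrative.
import math

def rnn_step(state, inp, w_in=1.0, w_rec=0.5):
    """One Elman-style update of a scalar context state."""
    return math.tanh(w_in * inp + w_rec * state)

# Context accumulates over a sequence: instruction (1.0) then silence (0.0).
state = 0.0
for inp in [1.0, 1.0, 0.0, 0.0]:
    state = rnn_step(state, inp)

# After the instruction ends, the state still reflects it; this lingering
# context is what allows a trained network to pick the matching behavior
# without any explicit mode-switch signal.
print(state > 0.0)  # True
```

In the paper this persistence is structured into attractors (branching, cyclic, fixed-point); the sketch only shows the underlying mechanism of state carrying context across time steps.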

  12. Socially Impaired Robots: Human Social Disorders and Robots' Socio-Emotional Intelligence

    OpenAIRE

    Vitale, Jonathan; Williams, Mary-Anne; Johnston, Benjamin

    2016-01-01

    Social robots need intelligence in order to safely coexist and interact with humans. Robots without functional abilities in understanding others and unable to empathise might be a societal risk and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting...

  13. An Integrated Framework for Human-Robot Collaborative Manipulation.

    Science.gov (United States)

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
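The confidence-driven leader/follower switch described above can be sketched in a few lines: under low prediction confidence the robot follows (reactive correction), and under high confidence it leads (proactive, prediction-based action). The threshold, action values, and function name are illustrative assumptions, not the paper's controller.

```python
# Hypothetical leader/follower switching on predictor confidence.

def choose_action(reactive, proactive, confidence, threshold=0.7):
    """Follow (reactive) under low confidence, lead (proactive) under high."""
    if confidence >= threshold:
        return proactive, "leader"
    return reactive, "follower"

print(choose_action(reactive=-0.1, proactive=0.3, confidence=0.9))
print(choose_action(reactive=-0.1, proactive=0.3, confidence=0.4))
```

A hard threshold is the simplest form of the switch; a smooth confidence-weighted blend of the two actions would avoid discontinuities at the boundary.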

  14. Human-Agent Teaming for Multi-Robot Control: A Literature Review

    Science.gov (United States)

    2013-02-01

advent of the Google driverless car, autonomous farm equipment, and unmanned commercial aircraft (Mosher, 2012). The inexorable trend towards...because a robot cannot be automated to navigate in difficult terrain. However, this high ratio will not be sustainable if large numbers of autonomous ...(Parasuraman et al., 2007). 3.5 RoboLeader Past research indicates that autonomous cooperation between robots can improve the performance of the human

  15. The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates

    Science.gov (United States)

    Ososky, Scott; Schuster, David; Jentsch, Florian; Fiore, Stephen; Shumaker, Randall; Lebiere, Christian; Kurup, Unmesh; Oh, Jean; Stentz, Anthony

    2012-06-01

    Current ground robots are largely employed via tele-operation and provide their operators with useful tools to extend reach, improve sensing, and avoid dangers. To move from robots that are useful as tools to truly synergistic human-robot teaming, however, will require not only greater technical capabilities among robots, but also a better understanding of the ways in which the principles of teamwork can be applied from exclusively human teams to mixed teams of humans and robots. In this respect, a core characteristic that enables successful human teams to coordinate shared tasks is their ability to create, maintain, and act on a shared understanding of the world and the roles of the team and its members in it. The team performance literature clearly points towards two important cornerstones for shared understanding of team members: mental models and situation awareness. These constructs have been investigated as products of teams as well; amongst teams, they are shared mental models and shared situation awareness. Consequently, we are studying how these two constructs can be measured and instantiated in human-robot teams. In this paper, we report results from three related efforts that are investigating process and performance outcomes for human robot teams. Our investigations include: (a) how human mental models of tasks and teams change whether a teammate is human, a service animal, or an advanced automated system; (b) how computer modeling can lead to mental models being instantiated and used in robots; (c) how we can simulate the interactions between human and future robotic teammates on the basis of changes in shared mental models and situation assessment.

  16. Robot - a member of (re)habilitation team

    OpenAIRE

    Komazec Zoran; Lemajić-Komazec Slobodanka; Golubović Špela; Mikov Aleksandra; Krasnik Rastislava

    2012-01-01

    Introduction. The rehabilitation process involves a whole team of experts who participate in it over a long period of time. Development of Robotics and its Application in Medicine. The intensive development of science and technology has made it possible to design a number of robots which are used for therapeutic purposes and participate in the rehabilitation process. Robotics in Medical Rehabilitation. During the long history of technological development of mankind, a number of conceptu...

  17. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real-world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a linear computational cost increase with the number of robots that make up the team, and allows highly complex assignment problems to be solved in dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows reuse of the auction-based strategy to achieve different coordination levels.
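
    The linear cost growth claimed above is easy to see in a single-round, sealed-bid auction: each task is announced once and the cheapest bidder wins. A minimal sketch (the function names and the distance-based bid are illustrative assumptions, not the paper's implementation, and the hierarchical sub-auction policy is omitted):

```python
def auction_tasks(robots, tasks, bid):
    """Single-round sealed-bid auction: each task goes to the cheapest bidder.

    robots, tasks: lists of ids; bid(robot, task) -> cost, lower wins.
    A robot wins at most one task per round, so one round costs O(R * T).
    """
    assignment = {}
    free = set(robots)
    for task in tasks:
        if not free:
            break  # more tasks than robots; leftovers wait for the next round
        winner = min(free, key=lambda r: bid(r, task))
        assignment[task] = winner
        free.remove(winner)
    return assignment

# Toy example: bids are Manhattan distances from robot to task location.
positions = {"r1": (0, 0), "r2": (5, 5)}
targets = {"t1": (1, 0), "t2": (5, 4)}
cost = lambda r, t: (abs(positions[r][0] - targets[t][0])
                     + abs(positions[r][1] - targets[t][1]))
print(auction_tasks(["r1", "r2"], ["t1", "t2"], cost))  # {'t1': 'r1', 't2': 'r2'}
```

    Re-auctioning a subtask is simply re-announcing it, which is what makes a hierarchical sub-auction extension natural.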

  18. Speech-Based Human and Service Robot Interaction: An Application for Mexican Dysarthric People

    Directory of Open Access Journals (Sweden)

    Santiago Omar Caballero Morales

    2013-01-01

    Dysarthria is a motor speech disorder due to weakness or poor coordination of the speech muscles. This condition can be caused by a stroke, traumatic brain injury, or by a degenerative neurological disease. Commonly, people with this disorder also have muscular dystrophy, which restricts their use of switches or keyboards for communication or control of assistive devices (i.e., an electric wheelchair or a service robot). In this case, speech recognition is an attractive alternative for interaction and control of service robots, despite the difficulty of achieving robust recognition performance. In this paper we present a speech recognition system for human and service robot interaction for Mexican Spanish dysarthric speakers. The core of the system consisted of a Speaker Adaptive (SA) recognition system trained with normal speech. Features such as on-line control of the language model perplexity and the addition of vocabulary contribute to high recognition performance. Others, such as assessment and text-to-speech (TTS) synthesis, contribute to a more complete interaction with a service robot. Live tests were performed with two mild dysarthric speakers, achieving recognition accuracies of 90–95% for spontaneous speech and 95–100% of accomplished simulated service robot tasks.
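
    Recognition accuracies like the 90–95% reported above are conventionally computed from the word-level edit distance between reference and hypothesis transcripts. A self-contained sketch of that assessment step (not the authors' code; the sample strings are invented):

```python
def word_accuracy(ref, hyp):
    """Word accuracy = 1 - WER, from the Levenshtein distance over words."""
    r, h = ref.split(), hyp.split()
    # dp[i][j]: edit distance between the first i ref words and j hyp words.
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return 1.0 - dp[len(r)][len(h)] / len(r)

print(word_accuracy("ve al cuarto", "ve al cuatro"))  # 2 of 3 words -> ~0.667
```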

  19. A Case-Study for Life-Long Learning and Adaptation in Cooperative Robot Teams

    International Nuclear Information System (INIS)

    Parker, L.E.

    1999-01-01

    While considerable progress has been made in recent years toward the development of multi-robot teams, much work remains to be done before these teams are used widely in real-world applications. Two particular needs toward this end are the development of mechanisms that enable robot teams to generate cooperative behaviors on their own, and the development of techniques that allow these teams to autonomously adapt their behavior over time as the environment or the robot team changes. This paper proposes the use of the Cooperative Multi-Robot Observation of Multiple Moving Targets (CMOMMT) application as a rich domain for studying the issues of multi-robot learning and adaptation. After discussing the need for learning and adaptation in multi-robot teams, this paper describes the CMOMMT application and its relevance to multi-robot learning. We discuss the results of the previously- developed, hand-generated algorithm for CMOMMT and the potential for learning that was discovered from the hand-generated approach. We then describe the early work that has been done (by us and others) to generate multi- robot learning techniques for the CMOMMT application, as well as our ongoing research to develop approaches that give performance as good, or better, than the hand-generated approach. The ultimate goal of this research is to develop techniques for multi-robot learning and adaptation in the CMOMMT application domain that will generalize to cooperative robot applications in other domains, thus making the practical use of multi-robot teams in a wide variety of real-world applications much closer to reality

  20. Interactions With Robots: The Truths We Reveal About Ourselves.

    Science.gov (United States)

    Broadbent, Elizabeth

    2017-01-03

    In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in healthcare, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.

  1. Developing new behavior strategies of robot soccer team SjF TUKE Robotics

    Directory of Open Access Journals (Sweden)

    Mikuláš Hajduk

    2016-09-01

    Many approaches to robot soccer exist at present. SjF TUKE Robotics, winner of the 2010 robot soccer world tournament in the MiroSot category, is a team with a multi-agent system approach: one main agent (the master) and five player agents, represented by robots. The article takes the code programmer's point of view on how to create new behavior strategies by writing new code for the master, and presents a methodology for preparing and creating such strategies following a set of rules.

  2. Robotics Team Lights Up New Year's Eve

    Science.gov (United States)

    LeBlanc, Cheryl

    2011-01-01

    A robotics team from Muncie, Indiana--the PhyXTGears--is made up of high school students from throughout Delaware County. The group formed as part of the FIRST Robotics program (For Inspiration and Recognition of Science and Technology), an international program founded by inventor Dean Kamen in which students work with professional engineers and…

  3. The Relationship between Robot's Nonverbal Behaviour and Human's Likability Based on Human's Personality.

    Science.gov (United States)

    Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro

    2018-05-30

    Although robotics technology has developed immensely, people's uncertainty about engaging fully in human-robot interaction is still growing. Many recent studies have therefore turned to human factors that might influence a human's liking of a robot, such as personality, and found that compatibility between the human's and the robot's personality (expressions of personality characteristics) can enhance likability. However, it is still unclear whether specific means and strategies of robot nonverbal behaviour enhance likability for humans with different personality traits, and whether there is a relationship between a robot's nonverbal behaviours and a human's likability based on the human's personality. In this study, we investigated the interaction via gaze and head nodding behaviours (mutual gaze convergence and head nodding synchrony) between introvert/extravert participants and a robot under two communication strategies (backchanneling and turn-taking). Our findings reveal that the introvert participants are positively affected by backchanneling in the robot's head nodding behaviour, which results in substantial head nodding synchrony, whereas the extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between a robot's nonverbal behaviour and a human's likability based on the human's personality.
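
    Head-nodding synchrony of the kind reported here can be operationalized in many ways; one illustrative metric (an assumption for this sketch, not the authors' measure) is the peak overlap between two binary nod time series across small lags:

```python
def nod_synchrony(a, b, max_lag=3):
    """Best lag and overlap score for two binary nod signals.

    a, b: equal-length 0/1 sample lists (nodding or not, per video frame).
    Returns (lag, score): score is the fraction of nod samples that
    coincide at the best alignment, in [0, 1].
    """
    best = (0, 0.0)
    for lag in range(-max_lag, max_lag + 1):
        overlap = sum(1 for i in range(len(a))
                      if 0 <= i + lag < len(b) and a[i] == 1 and b[i + lag] == 1)
        score = overlap / (max(sum(a), sum(b)) or 1)
        if score > best[1]:
            best = (lag, score)
    return best

human = [0, 1, 1, 0, 0, 1, 0, 0]
robot = [0, 0, 1, 1, 0, 0, 1, 0]  # robot nods trail the human by one frame
print(nod_synchrony(human, robot))  # (1, 1.0)
```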

  4. Sustaining Emotional Communication when Interacting with an Android Robot

    DEFF Research Database (Denmark)

    Vlachos, Evgenios

    The more human-like a robot appears and acts, the more users will have the belief of communicating with a human partner rather than with an artificial entity. However, current robotic technology displays limitations on the design of the facial interface and on the design of believable Human...... of an android related to its actions, perception and intelligence, or failure to identify which robot type is qualified to perform a specific task, might lead to disruption of HRI. This study is concerned with the problem of sustaining emotional communication when interacting with an android social robot......-ended evaluation method pertaining to the interpretation of Android facial expressions (g) a study on how users’ perception and attitude can change after direct interaction with a robot, (h) a study on how androids can maintain the focus of attention during short-term dyadic interactions, and (i) a state...

  5. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment, where humans potentially want to interact with it. The designed system consists...... system that uses a potential field to derive motion that respects the personʹs social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person...... wishes to interact, and that the system is capable of adapting to changing behaviours of the humans in the environment....
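
    A toy version of such a potential-field motion controller can be written in a few lines. Everything below (gains, the inverse-square falloff, and the way estimated interest shrinks the repulsive social zone) is an assumption for illustration, not the paper's actual field:

```python
import math

def social_force(robot, goal, person, interest, k_goal=1.0, k_person=4.0):
    """Force on the robot from an attractive goal and a person's social zone.

    robot, goal, person: (x, y) positions. interest in [0, 1] is the
    estimated wish to interact; higher interest weakens the repulsion so
    the robot may approach a person who wants contact.
    """
    gx, gy = goal[0] - robot[0], goal[1] - robot[1]
    px, py = robot[0] - person[0], robot[1] - person[1]
    d = math.hypot(px, py) or 1e-6          # avoid division by zero
    rep = k_person * (1.0 - interest) / (d * d)
    return (k_goal * gx + rep * px / d,
            k_goal * gy + rep * py / d)

# Robot next to a disinterested person: the resulting force steers away
# from the person while still pulling toward the goal.
print(social_force(robot=(0, 0), goal=(4, 0), person=(0, 1), interest=0.0))
```

    With interest = 1 the repulsive term vanishes and the robot may drive straight toward the goal past the person, matching the idea that perceived interest reshapes the social zone.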

  6. College of Engineering team to build battlefield robots for 2010 competition

    OpenAIRE

    Mackay, Steven D.

    2010-01-01

    The roving, walking robotic soldiers of the "Terminator" films are becoming less sci-fi and more a certain future every day. Now, a team of robotics researchers from the Virginia Tech College of Engineering will build a team of fully autonomous cooperative battle-ready robots as part of a 2010 international war games challenge that could spur real-life battle bots.

  7. COMPARISON OF CLASSICAL AND INTERACTIVE MULTI-ROBOT EXPLORATION STRATEGIES IN POPULATED ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Nassim Kalde

    2015-06-01

    Multi-robot exploration consists in coordinating robots to map an unknown environment. It raises several issues concerning task allocation, robot control, path planning and communication. We study exploration in populated environments, in which pedestrian flows can severely impact performance. However, humans have adaptive skills for taking advantage of these flows while moving. Therefore, in order to exploit these human abilities, we propose a novel exploration strategy that explicitly allows for human-robot interactions. Our model for exploration in populated environments combines the classical frontier-based strategy with our interactive approach. We implement interactions where robots can locally choose a human guide to follow and define a parametric heuristic to balance interaction and frontier assignments. Finally, we evaluate to what extent human presence impacts our exploration model in terms of coverage ratio, travelled distance and elapsed time to completion.
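
    The parametric heuristic balancing frontier assignment against guide-following can be pictured as a single weighted score. This sketch is an assumed reading of the abstract, not the paper's actual formula:

```python
def choose_target(robot, frontiers, guides, alpha, dist):
    """Pick the best frontier or human guide for a robot.

    alpha in [0, 1] trades interactive guide-following (alpha high)
    against classical frontier exploration (alpha = 0 recovers the pure
    frontier-based strategy). dist(robot, x) is the travel cost.
    """
    scored = [("frontier", f, (1.0 - alpha) / (1.0 + dist(robot, f)))
              for f in frontiers]
    scored += [("guide", g, alpha / (1.0 + dist(robot, g)))
               for g in guides]
    return max(scored, key=lambda c: c[2])

d = lambda a, b: abs(a - b)  # toy 1-D corridor
print(choose_target(0, frontiers=[10], guides=[2], alpha=0.0, dist=d)[0])  # frontier
print(choose_target(0, frontiers=[10], guides=[2], alpha=0.9, dist=d)[0])  # guide
```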

  8. Views from Within a Narrative: Evaluating Long-Term Human-Robot Interaction in a Naturalistic Environment Using Open-Ended Scenarios.

    Science.gov (United States)

    Syrdal, Dag Sverre; Dautenhahn, Kerstin; Koay, Kheng Lee; Ho, Wan Ching

    2014-01-01

    This article describes the prototyping of human-robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots as well as specific robot behaviours, such as agent migration and expressive behaviours. Evaluation of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, as well as single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique allowed them insight into the use of the robot, and that they accepted the use of the robot within the scenario.

  9. Human-robot interaction: kinematics and muscle activity inside a powered compliant knee exoskeleton.

    Science.gov (United States)

    Knaepen, Kristel; Beyl, Pieter; Duerinck, Saartje; Hagman, Friso; Lefeber, Dirk; Meeusen, Romain

    2014-11-01

    To date, it is not entirely clear how humans interact with automated gait rehabilitation devices and how we can, based on that interaction, maximize the effectiveness of these exoskeletons. The goal of this study was to gain knowledge on the human-robot interaction, in terms of kinematics and muscle activity, between a healthy human motor system and a powered knee exoskeleton (i.e., KNEXO). Therefore, temporal and spatial gait parameters, human joint kinematics, exoskeleton kinetics and muscle activity during four different walking trials in 10 healthy male subjects were studied. Healthy subjects can walk with KNEXO in patient-in-charge mode with some slight constraints in kinematics and muscle activity primarily due to inertia of the device. Yet, during robot-in-charge walking the muscular constraints are reversed by adding positive power to the leg swing, partly compensating for this inertia. Next to that, KNEXO accurately records and replays the right knee kinematics, meaning that subject-specific trajectories can be implemented as a target trajectory during assisted walking. No significant differences in the human response to the interaction with KNEXO in low and high compliant assistance could be pointed out. This contradicts our hypothesis that muscle activity would decrease with increasing assistance. It seems that the differences between the parameter settings of low and high compliant control might not be sufficient to observe clear effects in healthy subjects. Moreover, we should take into account that KNEXO is a unilateral, 1 degree-of-freedom device.

  10. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

    We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. It was our aim to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors, where we detected typical error situations and the resulting social signals of our participants during social human–robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the user study reported here we purposefully elicited robot errors to further explore the human interaction partners' social signals following a robot error. Our participants interacted with a human-like NAO, and the robot performed either faultily or free from error. First, the robot asked the participants a set of predefined questions and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence. We also interviewed the participants on their opinion about the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior.

  11. Spatiotemporal Aspects of Engagement during Dialogic Storytelling Child–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Scott Heath

    2017-06-01

    The success of robotic agents in close proximity to humans depends on their capacity to engage in social interactions and to maintain these interactions over periods of time that are suitable for learning. A critical requirement is the ability to modify the behavior of the robot contingently on the attentional and social cues signaled by the human. A benchmark challenge for an engaging social robot is that of storytelling. In this paper, we present an exploratory study to investigate dialogic storytelling—storytelling with contingent responses—using a child-friendly robot. The aim of the study was to develop an engaging storytelling robot and to develop metrics for evaluating engagement. Ten children listened to an illustrated story told by a social robot during a science fair. The responses of the robot were adapted during the interaction based on the children's engagement and touches of the pictures displayed by the robot on a tablet embedded in its torso. During the interaction the robot responded contingently to the child, but only when the robot invited the child to interact. We describe the robot architecture used to implement dialogic storytelling and evaluate the quality of human–robot interaction based on temporal (patterns of touch, touch duration) and spatial (motions in the space surrounding the robot) metrics. We introduce a novel visualization that emphasizes the temporal dynamics of the interaction and analyze the motions of the children in the space surrounding the robot. The study demonstrates that the interaction through invited contingent responses succeeded in engaging children, although the robot missed some opportunities for contingent interaction and the children had to adapt to the task. We conclude that (i) the consideration of both temporal and spatial attributes is fundamental for establishing metrics to estimate levels of engagement in real-time, (ii) metrics for engagement are sensitive to both the group and
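
    The temporal metrics mentioned (patterns of touch, touch duration) reduce to simple statistics over timestamped touch events. A minimal sketch, with the event format assumed for illustration:

```python
def touch_stats(touches):
    """Summarize touch events as temporal engagement metrics.

    touches: chronological list of (start, end) times in seconds.
    Returns (count, mean touch duration, mean gap between touches).
    """
    count = len(touches)
    durations = [end - start for start, end in touches]
    gaps = [touches[i + 1][0] - touches[i][1] for i in range(count - 1)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return count, mean(durations), mean(gaps)

print(touch_stats([(0.0, 1.0), (3.0, 5.0), (6.0, 6.5)]))
# 3 touches, mean duration ~1.17 s, mean gap 1.5 s
```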

  12. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    Science.gov (United States)

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434

  13. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specific the emphasis is on safe and natural motion and navigation issues. First part of the work focus on developing a robotic system, which estimates human interest in interacting......, lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our every- day environments, they need to become more intelligent, and behave safe and natural to the humans in the environment. This thesis deals with making...... as being able to navigate safely around one person, the robots must also be able to navigate in environments with more people. This can be environments such as pedestrian streets, hospital corridors, train stations or airports. The developed human-aware navigation strategy is enhanced to formulate...

  14. Mobile Robotic Teams Applied to Precision Agriculture

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Matthew Oley; Kinoshita, Robert Arthur; Mckay, Mark D; Willis, Walter David; Gunderson, R.W.; Flann, N.S.

    1999-04-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) and Utah State University’s Center for Self-Organizing and Intelligent Systems (CSOIS) have developed a team of autonomous robotic vehicles applicable to precision agriculture. A unique technique has been developed to plan, coordinate, and optimize missions in large structured environments for these autonomous vehicles in real-time. Two generic tasks are supported: 1) Driving to a precise location, and 2) Sweeping an area while activating on-board equipment. Sensor data and task achievement data is shared among the vehicles enabling them to cooperatively adapt to changing environmental, vehicle, and task conditions. This paper discusses the development of the autonomous robotic team, details of the mission-planning algorithm, and successful field demonstrations at the INEEL.
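
    The "sweeping an area" task can be illustrated with a boustrophedon (back-and-forth) waypoint generator. This sketch is only a generic stand-in, not the INEEL/CSOIS mission planner:

```python
def sweep_waypoints(width, height, spacing):
    """Back-and-forth lane waypoints covering a rectangular field.

    Lanes run along x, spaced by the implement width; direction
    alternates each pass so the vehicle sweeps without retracing.
    """
    waypoints, y, left_to_right = [], 0.0, True
    while y <= height:
        if left_to_right:
            waypoints += [(0.0, y), (width, y)]
        else:
            waypoints += [(width, y), (0.0, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

print(sweep_waypoints(10, 2, 1))
# [(0.0, 0.0), (10, 0.0), (10, 1.0), (0.0, 1.0), (0.0, 2.0), (10, 2.0)]
```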

  16. Vocal Interactivity in-and-between Humans, Animals and Robots

    Directory of Open Access Journals (Sweden)

    Roger K Moore

    2016-10-01

    Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: detecting predators and prey, marking territory, expressing emotions, establishing social relations and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalisation provides a valuable communication channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction. Opportunities for cross-fertilisation between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyse different habitats, or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within and between humans, animals and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an inter-disciplinary analysis.

  17. Contact Estimation in Robot Interaction

    Directory of Open Access Journals (Sweden)

    Filippo D'Ippolito

    2014-07-01

    In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.
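
    The residual-based idea can be sketched concretely: with contact at a known point, the torque residual r = tau_measured - tau_model satisfies r = J(q)^T F, so the contact force follows by inverting the contact Jacobian transpose. Below is an illustrative sketch for a planar two-link arm with tip contact (link lengths and the closed-form 2x2 inverse are assumptions for the example; the paper additionally adapts the contact point):

```python
import math

def contact_force_2link(q1, q2, l1, l2, tau_meas, tau_model):
    """Tip contact force on a planar 2R arm from joint torque residuals.

    Residual r = tau_meas - tau_model; with a single tip contact,
    r = J(q)^T F, so F solves the 2x2 linear system J^T F = r.
    """
    r = (tau_meas[0] - tau_model[0], tau_meas[1] - tau_model[1])
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    # Translational Jacobian entries of the planar 2R arm's tip.
    a = -l1 * s1 - l2 * s12   # J[0][0]
    b = l1 * c1 + l2 * c12    # J[1][0]
    c = -l2 * s12             # J[0][1]
    d = l2 * c12              # J[1][1]
    det = a * d - b * c       # det(J^T) = det(J)
    fx = (d * r[0] - b * r[1]) / det
    fy = (-c * r[0] + a * r[1]) / det
    return fx, fy

# A unit force pushing the tip along +x at q = (0, pi/2) produces
# residuals (-1, -1); the estimator recovers that force.
print(contact_force_2link(0.0, math.pi / 2, 1.0, 1.0, (-1.0, -1.0), (0.0, 0.0)))
```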

  18. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns from the data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the represented gesture patterns with reasonable accuracy.
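
    Scoring a chain-code observation sequence against per-gesture discrete HMMs uses the standard forward algorithm; the tiny single-state models below are toy stand-ins, not the paper's trained models:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).

    pi[i]: initial state probs; A[i][j]: transitions; B[i][o]: probability
    of emitting chain-code symbol o in state i. A gesture is recognized by
    scoring the sequence under each gesture's model and taking the best.
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [x / s for x in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        s = sum(alpha)          # rescale to avoid underflow
        loglik += math.log(s)
        alpha = [x / s for x in alpha]
    return loglik

# 4-direction chain code (0=E, 1=N, 2=W, 3=S); one toy model per gesture.
right = ([1.0], [[1.0]], [[0.7, 0.1, 0.1, 0.1]])
up = ([1.0], [[1.0]], [[0.1, 0.7, 0.1, 0.1]])
seq = [0, 0, 1, 0]  # chain code of a roughly horizontal stroke
scores = {name: forward_loglik(seq, *m)
          for name, m in {"right": right, "up": up}.items()}
print(max(scores, key=scores.get))  # right
```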

  19. Human - Robot Proximity

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    The media and political/managerial levels focus on the opportunities to re-perform Denmark through digitization. Feeding assistive robotics is a welfare technology, relevant to citizens with low or no function in their arms. Despite national dissemination strategies, it proves difficult to recruit...... the study that took place as multi-sited ethnography at different locations in Denmark and Sweden. Based on desk research, observation of meals and interviews I examine socio-technological imaginaries and their practical implications. Human - robotics interaction demands engagement and understanding...

  20. Modelling Engagement in Multi-Party Conversations : Data-Driven Approaches to Understanding Human-Human Communication Patterns for Use in Human-Robot Interactions

    OpenAIRE

    Oertel, Catharine

    2016-01-01

    The aim of this thesis is to study human-human interaction in order to provide virtual agents and robots with the capability to engage into multi-party-conversations in a human-like-manner. The focus lies with the modelling of conversational dynamics and the appropriate realization of multi-modal feedback behaviour. For such an undertaking, it is important to understand how human-human communication unfolds in varying contexts and constellations over time. To this end, multi-modal human-human...

  1. Sharing skills: using augmented reality for human-robot collaboration

    Science.gov (United States)

    Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger

    2004-05-01

    Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: Which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct and enhance them interactively. After a satisfying result has been reached, the laser-scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.
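
    Segmentation "according to the user's indications" can be pictured as region growing seeded at the pointed-at location. This toy sketch over a handful of 3D points is an assumption about the mechanism, not the authors' implementation:

```python
def grow_region(points, seed, radius):
    """Cluster of points connected to a user-indicated seed point.

    Repeatedly absorbs points within `radius` of the current region,
    separating the pointed-at object from the rest of the snapshot.
    """
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2

    region, frontier = {seed}, [seed]
    remaining = set(points) - region
    while frontier:
        p = frontier.pop()
        added = {q for q in remaining if close(p, q)}
        remaining -= added
        region |= added
        frontier.extend(added)
    return region

cloud = [(0, 0, 0), (0.4, 0, 0), (0.8, 0, 0), (5, 5, 0)]  # cup + far table point
print(sorted(grow_region(cloud, (0, 0, 0), 0.5)))
# [(0, 0, 0), (0.4, 0, 0), (0.8, 0, 0)]
```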

  2. Cognitive Coordination for Cooperative Multi-Robot Teamwork

    NARCIS (Netherlands)

    Wei, C.

    2015-01-01

    Multi-robot teams have potential advantages over a single robot. Robots in a team can serve different functionalities, so a team of robots can be more efficient, robust and reliable than a single robot. In this dissertation, we are in particular interested in human level intelligent multi-robot

  3. Human guidance of mobile robots in complex 3D environments using smart glasses

    Science.gov (United States)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.

  4. Robotics and nuclear power. Report by the Technology Transfer Robotics Task Team

    International Nuclear Information System (INIS)

    1985-06-01

    A task team was formed at the request of the Department of Energy to evaluate and assess technology development needed for advanced robotics in the nuclear industry. The mission of these technologies is to provide the nuclear industry with the support for the application of advanced robotics to reduce nuclear power generating costs and enhance the safety of the personnel in the industry. The investigation included robotic and teleoperated systems. A robotic system is defined as a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. A teleoperated system includes an operator who remotely controls the system by direct viewing or through a vision system

  5. Social Moments: A Perspective on Interaction for Social Robotics

    Directory of Open Access Journals (Sweden)

    Gautier Durantin

    2017-06-01

    During a social interaction, events that happen at different timescales can indicate social meanings. In order to socially engage with humans, robots will need to be able to comprehend and manipulate the social meanings associated with these events. We define social moments as events that occur within a social interaction and can signify a pragmatic or semantic meaning. A challenge for social robots is recognizing social moments that occur on short timescales, which can be on the order of 10² ms. In this perspective, we propose that understanding the range and roles of social moments in a social interaction, and implementing social micro-abilities (the abilities required to engage in a timely manner through social moments), is a key challenge for the field of human-robot interaction (HRI) to enable effective social interactions and social robots. In particular, it is an open question how social moments acquire their associated meanings. Practically, the implementation of these social micro-abilities presents engineering challenges for the fields of HRI and social robotics, including processing sensor input and driving actuators fast enough to meet these timescales. We present the integration of social stimuli across multiple timescales and modalities as a key challenge of social moments. We present the neural basis for human comprehension of social moments and review current literature related to social moments and social micro-abilities. We discuss the requirements for social micro-abilities, how these abilities can enable more natural social robots, and how to address the engineering challenges associated with social moments.

  6. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    2017-11-01

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as a robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm² area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures inspired by the social and emotional interactions of typical people-to-people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated and the feature calculation algorithms are analyzed in detail to determine each stage's and segment's contribution. The best performing feature-classifier combination recognizes the gestures with 93.3% accuracy for a known group of participants, and 89.1% for strangers.
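    The pipeline described above (per-frame spatial reduction to descriptors, then temporal statistics over a gesture window) can be sketched as follows. The abstract does not specify which descriptors or statistics were used, so the choices below (total pressure, active-area fraction, pressure centroid; mean/std/min/max over time) and the synthetic 20x20, 50 Hz data are illustrative assumptions, and the wavelet step is omitted:

```python
import numpy as np

def frame_descriptors(frame):
    """Reduce one 20x20 tactile frame to a small descriptor vector:
    total pressure, active-area fraction, and pressure centroid."""
    total = frame.sum()
    active = (frame > 0).mean()
    if total > 0:
        ys, xs = np.indices(frame.shape)
        cy = (ys * frame).sum() / total
        cx = (xs * frame).sum() / total
    else:
        cy = cx = 0.0
    return np.array([total, active, cy, cx])

def temporal_features(frames):
    """Stack per-frame descriptors over a gesture window and summarize
    each descriptor channel with basic statistics (mean, std, min, max)."""
    d = np.array([frame_descriptors(f) for f in frames])  # shape (T, 4)
    return np.concatenate([d.mean(0), d.std(0), d.min(0), d.max(0)])

# Example: a synthetic touch gesture sampled at 50 Hz for one second.
rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(0.1, 0.05, (20, 20)), 0, None) for _ in range(50)]
feats = temporal_features(frames)
print(feats.shape)  # 4 descriptors x 4 statistics -> (16,)
```

The resulting fixed-length vector could then be fed to any standard classifier, which matches the abstract's "various classifiers are evaluated" step.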

  7. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction.

    Science.gov (United States)

    Zhou, Bo; Altamirano, Carlos Andres Velez; Zurian, Heber Cruz; Atefi, Seyed Reza; Billing, Erik; Martinez, Fernando Seoane; Lukowicz, Paul

    2017-11-09

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing as a robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm² area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures inspired by the social and emotional interactions of typical people-to-people or pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated and the feature calculation algorithms are analyzed in detail to determine each stage's and segment's contribution. The best performing feature-classifier combination recognizes the gestures with 93.3% accuracy for a known group of participants, and 89.1% for strangers.

  8. Eyeblink Synchrony in Multimodal Human-Android Interaction.

    Science.gov (United States)

    Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro

    2016-12-23

    As the result of recent progress in technology of communication robot, robots are becoming an important social partner for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.

  9. Digital twins of human robot collaboration in a production setting

    DEFF Research Database (Denmark)

    Malik, Ali Ahmad; Bilberg, Arne

    2018-01-01

    This paper aims to present a digital twin framework to support the design, build and control of human-machine cooperation. In this study, computer simulations are used to develop a digital counterpart of a human-robot collaborative work environment for assembly work. The digital counterpart remains updated during the life cycle of the production system by continuously mirroring the physical system, enabling quick and safe embedding of continuous improvements. The case of a manufacturing company with human-robot work teams is presented for developing and validating the digital twin framework.

  10. Ethorobotics: A New Approach to Human-Robot Relationship

    Directory of Open Access Journals (Sweden)

    Ádám Miklósi

    2017-06-01

    Here we aim to lay the theoretical foundations of the human-robot relationship, drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so-called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with an appropriate form of social competence, then humans will eventually interact with them independently of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications.

  11. Ethorobotics: A New Approach to Human-Robot Relationship

    Science.gov (United States)

    Miklósi, Ádám; Korondi, Péter; Matellán, Vicente; Gácsi, Márta

    2017-01-01

    Here we aim to lay the theoretical foundations of the human-robot relationship, drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so-called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with an appropriate form of social competence, then humans will eventually interact with them independently of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications. PMID:28649213

  12. I Show You How I Like You: Emotional Human-Robot Interaction through Facial Expression and Tactile Stimulation

    DEFF Research Database (Denmark)

    Fredslund, Jakob; Cañamero, Lola D.

    2001-01-01

    We report work on a LEGO robot capable of displaying several emotional expressions in response to physical contact. Our motivation has been to explore believable emotional exchanges to achieve plausible interaction with a simple robot. We have worked toward this goal in two ways. First, acknowledging the importance of physical manipulation in children's interactions, interaction with the robot is through tactile stimulation; the various kinds of stimulation that can elicit the robot's emotions are grounded in a model of emotion activation based on different stimulation patterns. Second, emotional states need to be clearly conveyed. We have drawn inspiration from theories of human basic emotions with associated universal facial expressions, which we have implemented in a caricaturized face. We have conducted experiments on both children and adults to assess the recognizability...

  13. An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Peidong Liang

    2016-01-01

    Human-robot collaboration (HRC) is a key feature to distinguish the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve human and robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach to make a robot understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometry kinematics approach to map human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by employing a geometry vector approach based on Kinect. A biomimetic controller in discrete time is employed to make a Baxter robot arm imitate human arm behaviors based on Baxter robot dynamics. An object moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, which may have various applications in human-robot collaboration scenarios.
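    The abstract does not give the controller's equations, so as a minimal sketch of the general idea (a discrete-time impedance law whose stiffness gain K could be set from an estimated human stiffness index), here is a 1-DoF simulation; the gains, mass, and time step are illustrative assumptions, not the paper's values:

```python
def impedance_step(x, v, x_ref, K, D, M=1.0, dt=0.01):
    """One discrete-time impedance-control step for a 1-DoF joint:
    commanded force F = K*(x_ref - x) - D*v, integrated with
    semi-implicit Euler (velocity first, then position)."""
    F = K * (x_ref - x) - D * v
    a = F / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# Drive the joint toward a reference position with a stiff, well-damped
# setting; a lower K would yield a more compliant response.
x, v = 0.0, 0.0
for _ in range(2000):          # 20 s of simulated motion at 100 Hz
    x, v = impedance_step(x, v, x_ref=1.0, K=50.0, D=15.0)
print(round(x, 2))
```

Varying K online, as the abstract's gesture-controlled stiffness index suggests, would trade tracking accuracy against compliance to external pushes.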

  14. Student teams maneuver robots in qualifying match at regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    All four robots, maneuvered by student teams behind protective walls, converge on a corner of the playing field during qualifying matches of the 1999 Southeastern Regional robotic competition at Kennedy Space Center Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, as well as climb onto the platform (with flags) and raise the cache of pillows to a height of eight feet. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  15. Accompany: Acceptable robotiCs COMPanions for AgeiNG Years - Multidimensional aspects of human-system interactions

    OpenAIRE

    Amirabdollahian F.; Op Den Akker R.; Bedaf S.; Bormann R.; Draper H.; Evers V.; Gelderblom G.J.; Ruiz C.G.; Hewson D.; Hu N.

    2013-01-01

    With changes in life expectancy across the world, technologies enhancing the well-being of individuals, specifically older people, are subject to a new stream of research and development. In this paper we present the ACCOMPANY project, a pan-European project which focuses on home companion technologies. The project aims to progress beyond the state of the art in multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, monitoring persons and...

  16. A Multimodal Emotion Detection System during Human-Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
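    The paper's actual decision rule for combining the voice (GEVA) and face (GEFA) channels is not given in the abstract, so the following is only a hypothetical sketch of one simple fusion strategy: sum per-emotion confidences across channels and fall back to neutral when no emotion is supported strongly enough. The function name, dictionary format, and 0.5 threshold are all assumptions:

```python
def fuse_emotions(voice, face, threshold=0.5):
    """Combine two per-channel (emotion -> confidence) dicts by summing
    confidences across channels; if even the best emotion stays below
    the threshold, fall back to 'neutral'."""
    combined = {}
    for channel in (voice, face):
        for emotion, conf in channel.items():
            combined[emotion] = combined.get(emotion, 0.0) + conf
    best, score = max(combined.items(), key=lambda kv: kv[1])
    return best if score >= threshold else "neutral"

# Both channels weakly agree on 'happy'; their summed confidence wins.
print(fuse_emotions({"happy": 0.6, "sad": 0.1},
                    {"happy": 0.5, "angry": 0.3}))  # -> happy
```

A rule of this shape has the property the abstract reports: agreement between the two channels can push an emotion over the threshold even when neither channel alone is confident.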

  17. Two-Stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Hidden Markov Models (HMMs) are very rich in mathematical structure and hence can form the theoretical basis for a wide range of applications, including gesture representation. Most research in this field, however, uses HMMs only for recognizing simple gestures, while they can equally be applied to recognizing the meaning of whole gestures, which is very effective in Human-Robot Interaction (HRI). In this paper, we introduce an approach for HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is a two-stage Hidden Markov Model. The first-stage HMM recognizes the prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second-stage HMM performs task recognition. Another contribution of this paper is the use of a Gaussian-mixture output distribution in the HMM to improve the recognition rate. In the experiments, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare against other methods to evaluate performance.
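    The two-stage idea can be sketched with discrete-output HMMs: stage one scores each observation window against per-gesture HMMs and picks the best, and stage two scores the resulting gesture-index sequence against per-task HMMs. The paper uses Gaussian-mixture emissions and real sensor data; the discrete emissions, two-state models, and the toy "wave"/"point" and "greet"/"fetch" parameters below are purely illustrative assumptions:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM with initial dist pi,
    transition matrix A, and discrete emission matrix B)."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum(); ll = np.log(s); alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum(); ll += np.log(s); alpha = alpha / s
    return ll

def classify(obs, models):
    """Pick the model (gesture or task) that best explains obs."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Stage 1: prime-gesture HMMs over 3 observation symbols, 2 states each.
pi = np.array([0.5, 0.5]); A = np.array([[0.8, 0.2], [0.2, 0.8]])
gestures = {
    "wave":  (pi, A, np.array([[0.8, 0.1, 0.1], [0.7, 0.2, 0.1]])),
    "point": (pi, A, np.array([[0.1, 0.1, 0.8], [0.1, 0.2, 0.7]])),
}
# Stage 2: task HMMs whose "observations" are recognized gesture indices.
tasks = {
    "greet": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),  # mostly 'wave'
    "fetch": (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]])),  # mostly 'point'
}
names = ["wave", "point"]
windows = [[0, 0, 1], [2, 2, 2], [0, 2, 2]]   # raw symbol windows
seq = [names.index(classify(w, gestures)) for w in windows]
print(classify(seq, tasks))  # -> fetch
```

Reusing the same forward-likelihood scorer at both levels is what makes the layered design compact: the second stage never sees raw sensor symbols, only first-stage decisions.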

  18. A receptionist robot for Brazilian people: study on interaction involving illiterates

    Directory of Open Access Journals (Sweden)

    Trovato Gabriele

    2017-04-01

    The receptionist job, consisting of providing useful indications to visitors in a public office, is one possible employment of social robots. The design and the behaviour of robots expected to be integrated into human societies are crucial issues, and they depend on the culture and society in which the robot is deployed. We study the factors that could be used in the design of a receptionist robot in Brazil, a country with a mix of races and considerable gaps in economic and educational level. This inequality results in the presence of functionally illiterate people, unable to use reading, writing and numeracy skills. We invited Brazilian people, including a group of functionally illiterate subjects, to interact with two types of receptionists differing in physical appearance (agent vs. mechanical robot) and in the sound of the voice (human-like vs. mechanical). Results gathered during the interactions point out a preference for the agent and for the human-like voice, and a more intense reaction to stimuli by illiterates. These results provide useful indications that should be considered when designing a receptionist robot, as well as insights on the effect of illiteracy on the interaction.

  19. Social robotics to help children with autism in their interactions through imitation

    Directory of Open Access Journals (Sweden)

    Pennazio Valentina

    2017-06-01

    This article aims to reflect on the main variables that make social robotics effective in educational and rehabilitative interventions. Social robotics is based on imitation, and the study is designed for children affected by profound autism, aiming at the development of their social interactions. Existing research, at the national and international levels, shows how children with autism can interact more easily with a robotic companion than with a human peer, given its less complex and more predictable actions. This contribution also highlights how using robotic platforms helps teach children with autism basic social abilities, imitation, communication and interaction; this encourages them to transfer the learned abilities to human interactions with both adults and peers, through human–robot imitative modelling. The results of a pilot study conducted in a kindergarten school in the Liguria region are presented. The study included applying a robotic system, at first in a dyadic child–robot relation, then in a triadic one that also included another child, with the aim of eliciting social and imitative abilities in a child with profound autism.

  20. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

    Science.gov (United States)

    2011-10-01

    directly affects the willingness of people to accept robot-produced information, follow robots' suggestions, and thus benefit from the advantages inherent... perceived complexity of operation). Consequently, if the perceived risk of using the robot exceeds its perceived benefit, practical operators almost... necessary presence of a human caregiver (Graf, Hans, & Schraft, 2004). Other robotic devices, such as wheelchairs (Yanco, 2001) and exoskeletons (e.g.

  1. From Child-Robot Interaction to Child-Robot-Therapist Interaction: A Case Study in Autism

    Directory of Open Access Journals (Sweden)

    I. Giannopulu

    2012-01-01

    Troubles in social communication, as well as deficits in the cognitive treatment of emotions, are thought to be a fundamental part of autism. We present a case study based on multimodal interaction between a mobile robot and a child with autism in spontaneous, free game play. This case study shows that the robot mediates the interaction between the autistic child and the therapist once the robot-child interaction has been established. In addition, the child uses the robot as a mediator to express positive emotion while playing with the therapist. It is thought that the three-pronged interaction, i.e., child-robot-therapist, could better facilitate the transfer of social and emotional abilities to real-life settings. Robot therapy has a high potential to improve the condition of brain activity in autistic children.

  2. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants through speech- or gesture-based user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data glove with a space suit so that the astronaut can use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
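    The PSO-over-SVM-parameters idea can be sketched as follows. Training an actual SVM is beyond a short example, so the objective below is a stand-in quadratic surface over (log C, log gamma) whose minimum is placed at (1.0, -2.0); in the real system the objective would be SVM cross-validation error. The optimizer itself is a standard global-best PSO; the swarm size, inertia, and acceleration coefficients are common defaults, not the paper's values:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal global-best particle swarm optimizer.
    bounds: (low, high) arrays, one entry per search dimension."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    dim = low.size
    x = rng.uniform(low, high, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Stand-in for SVM cross-validation error over (log C, log gamma);
# the true optimum of this toy surface is at (1.0, -2.0).
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, err = pso(obj, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(np.round(best, 2))
```

Because PSO only needs objective evaluations, swapping the toy surface for a cross-validated SVM accuracy requires no change to the optimizer itself.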

  3. Cognitive neuroscience robotics A synthetic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume A describes how human cognitive functions can be replicated in artificial systems such as robots, and investigates how artificial systems could acquire intelligent behaviors through interaction with others and their environment.

  4. A Multilayer Hidden Markov Models-Based Method for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Chongben Tao

    2013-01-01

    To achieve Human-Robot Interaction (HRI) using gestures, a continuous gesture recognition approach based on Multilayer Hidden Markov Models (MHMMs) is proposed, which consists of two parts: a gesture spotting and segmentation module, and a continuous gesture recognition module. First, a Kinect sensor is used to capture 3D acceleration and 3D angular velocity data of hand gestures. Then, a feed-forward neural network (FNN) and a threshold criterion are used for gesture spotting and segmentation, respectively. Afterwards, the segmented gesture signals are preprocessed and vector-symbolized by a sliding window and a K-means clustering method. Finally, the symbolized data are sent into Lower Hidden Markov Models (LHMMs) to identify individual gestures, and then a Bayesian filter with sequential constraints among gestures in Upper Hidden Markov Models (UHMMs) is used to correct recognition errors created in the LHMMs. Five predefined gestures are used to interact with a Kinect mobile robot in experiments. The experimental results show that the proposed method not only has good effectiveness and accuracy, but also favorable real-time performance.
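    The sliding-window plus K-means symbolization step (turning a continuous 6-channel accelerometer/gyroscope stream into the discrete symbols the LHMMs consume) can be sketched as below. The window length, codebook size, and random toy stream are illustrative assumptions; the paper does not report its exact values:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means, returning the learned centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return C

def symbolize(signal, window, k):
    """Slide a window over a multichannel signal, flatten each window,
    and map it to the index of its nearest K-means centroid."""
    wins = np.array([signal[i:i + window].ravel()
                     for i in range(len(signal) - window + 1)])
    C = kmeans(wins, k)
    return ((wins[:, None] - C[None]) ** 2).sum(-1).argmin(1)

# Toy 6-channel stream (3D accel + 3D gyro) sampled for 100 steps.
rng = np.random.default_rng(1)
stream = rng.normal(size=(100, 6))
symbols = symbolize(stream, window=10, k=8)
print(symbols.shape)  # one symbol per window position -> (91,)
```

The resulting integer sequence is exactly the discrete-observation format a lower-level HMM expects, which is why this quantization step sits between segmentation and the LHMMs.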

  5. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    Science.gov (United States)

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those applied to human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than, but still largely similar to, agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  6. Socially intelligent robots that understand and respond to human touch

    NARCIS (Netherlands)

    Jung, Merel Madeleine

    Touch is an important nonverbal form of interpersonal interaction which is used to communicate emotions and other social messages. As interactions with social robots are likely to become more common in the near future these robots should also be able to engage in tactile interaction with humans.

  7. Measuring empathy for human and robot hand pain using electroencephalography.

    Science.gov (United States)

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference between empathy for humans and for robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations, such as a finger being cut by a knife. We found that the descending phase of the P3 component was larger for painful stimuli than for non-painful stimuli, regardless of whether the hand belonged to a human or a robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, though the ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to how we empathize with other humans. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

  8. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Directory of Open Access Journals (Sweden)

    Juan A. Corrales

    2011-10-01

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each of the sensors used during the two phases of the robot task: a first phase in which the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach.

  9. Student teams practice for regional robotic competition at KSC

    Science.gov (United States)

    1999-01-01

    Student teams (right and left) behind protective walls maneuver their robots on the playing field during practice rounds of the 1999 Southeastern Regional robotic competition at Kennedy Space Center Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, as well as climb onto the platform (foreground) and raise the cache of pillows to a height of eight feet. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.

  10. Using Empathy to Improve Human-Robot Relationships

    Science.gov (United States)

    Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana

    For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, behaving empathically towards one of them and neutrally towards the other. The results of a pilot study suggest that users to whom the robot was empathic perceived it more as a friend.

  11. A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Directory of Open Access Journals (Sweden)

    Andrea Cirillo

    2017-01-01

    Full Text Available Safety of human-robot physical interaction is enabled not only by suitable robot control strategies but also by suitable sensing technologies. For example, if distributed tactile sensors were available on the robot, they could be used not only to detect unintentional collisions, but also as a human-machine interface, enabling a new mode of social interaction with the machine. Building on their previous work, the authors developed a conformable distributed tactile sensor that can easily be conformed to the different parts of the robot body. Its ability to estimate contact force components and to provide a tactile map with an accurate spatial resolution enables the robot to handle both unintentional collisions in safe human-robot collaboration tasks and intentional touches where the sensor is used as a human-machine interface. In this paper, the authors present the characterization of the proposed tactile sensor and show how it can also be exploited to recognize haptic tactile gestures, by tailoring recognition algorithms well known in the image processing field to the case of tactile images. In particular, a set of haptic gestures has been defined to test three recognition algorithms on a group of 20 users. The paper demonstrates how the same sensor originally designed to manage unintentional collisions can also be used successfully as a human-machine interface.
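The core idea of treating the tactile map as an image can be illustrated with a toy example: track the pressure-weighted centroid of the taxel grid across frames and classify a static contact versus a moving one. This sketch is not the authors' recognition pipeline; the frame format, gesture labels, and threshold are invented.

```python
# Toy tactile-gesture classifier: a "press" keeps its pressure centroid
# in place, a "swipe" moves it across the taxel grid. Frames are 2-D
# lists of pressure values; all labels/thresholds are assumptions.

def centroid(frame):
    """Pressure-weighted centroid (x, y) of a 2-D taxel grid."""
    total = x_acc = y_acc = 0.0
    for y, row in enumerate(frame):
        for x, p in enumerate(row):
            total += p
            x_acc += p * x
            y_acc += p * y
    return (x_acc / total, y_acc / total)

def classify_gesture(frames, move_threshold=1.0):
    """Label a sequence of tactile frames by centroid displacement."""
    x0, y0 = centroid(frames[0])
    x1, y1 = centroid(frames[-1])
    displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return "swipe" if displacement > move_threshold else "press"

press = [[[0, 5, 0], [0, 5, 0]]] * 3                      # static contact
swipe = [[[9, 0, 0], [0, 0, 0]], [[0, 0, 9], [0, 0, 0]]]  # contact moves right
print(classify_gesture(press))  # press
print(classify_gesture(swipe))  # swipe
```

Real tactile-image pipelines would add filtering, segmentation, and richer features, but the image-style treatment of the taxel grid is the same in spirit.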

  12. ROILA : RObot Interaction LAnguage

    NARCIS (Netherlands)

    Mubin, O.

    2011-01-01

    The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. However, the limitations

  13. 1st AAU Workshop on Human-Centered Robotics

    DEFF Research Database (Denmark)

    The 2012 AAU Workshop on Human-Centered Robotics took place on 15 Nov. 2012 at Aalborg University, Aalborg. The workshop provides a platform for robotics researchers, including professors, PhD and Master students, to exchange their ideas and latest results. The objective is to foster closer interaction among researchers from the multiple disciplines relevant to human-centered robotics and, consequently, to promote collaborations across departments of all faculties towards making our center a center of excellence in robotics. The workshop was a great success, with 13 presentations, attracting more than 45 participants from AAU, SDU, DTI and industrial companies. The proceedings contain 7 full papers, selected from the full papers submitted afterwards on the basis of workshop abstracts. The papers represent major robotics research developments at AAU, including medical robots...

  14. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
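The facilitation effect above is established by comparing response times across gaze conditions. A minimal sketch of such a comparison, using a Welch t statistic on invented response-time samples (the condition names follow the abstract; the data and the helper are not from the study):

```python
import statistics

# Sketch: compare mean response times between two gaze conditions with a
# Welch t statistic. Sample values (seconds) are invented placeholders.

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / \
           (va / len(a) + vb / len(b)) ** 0.5

full_gaze = [1.21, 1.15, 1.30, 1.18, 1.25]   # invented sample data
head_fixed = [1.42, 1.38, 1.50, 1.45, 1.36]
t = welch_t(full_gaze, head_fixed)
print(t < 0)  # faster responses under full gaze give a negative t
```

A full analysis would also compute degrees of freedom and a p-value (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`); the sketch only shows the direction of the effect.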

  15. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

    Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
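The study's spectral measures rest on band power estimates in the mu (~8-12 Hz), beta and low gamma ranges. A stripped-down illustration of computing band power via a naive DFT on a synthetic one-second signal (the actual analysis used event-related spectral perturbations on independent components, which is far richer than this sketch; the sampling rate is an assumption):

```python
import math

# Sketch: band power of a sampled signal via a naive DFT. A 10 Hz sine
# should carry power in the mu band (~8-12 Hz); a flat signal carries
# none. The 128 Hz sampling rate is an arbitrary choice for the demo.

def band_power(signal, fs, f_lo, f_hi):
    """Sum of normalized DFT power in the bins between f_lo and f_hi."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re ** 2 + im ** 2) / n ** 2
    return power

fs = 128                                 # assumed sampling rate (Hz)
t = [i / fs for i in range(fs)]          # one second of samples
mu_rhythm = [math.sin(2 * math.pi * 10 * x) for x in t]  # 10 Hz component
flat = [0.0] * fs
print(band_power(mu_rhythm, fs, 8, 12) > band_power(flat, fs, 8, 12))
```

In practice one would use Welch's method (e.g. `scipy.signal.welch`) rather than a raw DFT, and compute the perturbations relative to gait-cycle events.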

  16. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  17. Using Language Games as a Way to Investigate Interactional Engagement in Human-Robot Interaction

    DEFF Research Database (Denmark)

    Jensen, L. C.

    2016-01-01

    how students' engagement with a social robot can be systematically investigated and evaluated. For this purpose, I present a small user study in which a robot plays a word formation game with a human, in which engagement is determined by means of an analysis of the 'language games' played...

  18. Face and content validity of Xperience™ Team Trainer: bed-side assistant training simulator for robotic surgery.

    Science.gov (United States)

    Sessa, Luca; Perrenot, Cyril; Xu, Song; Hubert, Jacques; Bresler, Laurent; Brunaud, Laurent; Perez, Manuela

    2018-03-01

    In robotic surgery, the coordination between the console-side surgeon and the bed-side assistant is crucial, more so than in standard surgery or laparoscopy, where the surgical team works in close contact. Xperience™ Team Trainer (XTT) is a new optional component for the dv-Trainer® platform that simulates the patient-side working environment. We present preliminary results on face validity, content validity, and the workload imposed by the XTT virtual reality platform for the psychomotor and communication skills training of the bed-side assistant in robot-assisted surgery. Participants were categorized as "Beginners" or "Experts". They tested a series of exercises (Pick & Place Laparoscopic Demo, Pick & Place 2 and Team Match Board 1) and completed face validity questionnaires. "Experts" assessed content validity on another questionnaire. All participants completed a NASA Task Load Index questionnaire to assess the workload imposed by XTT. Twenty-one consenting participants were included (12 "Beginners" and 9 "Experts"). XTT was shown to possess face and content validity, as evidenced by the rankings given on the simulator's ease of use and realism and on its usefulness for training. Eight out of nine "Experts" judged the visualization of metrics after the exercises useful. However, face validity showed some weaknesses regarding interactions and instruments. Reasonable workload parameters were registered. XTT demonstrated excellent face and content validity with acceptable workload parameters. XTT could become a useful tool for robotic surgery team training.
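For readers unfamiliar with the workload measure used here: an overall NASA Task Load Index score combines six subscale ratings (0-100) weighted by how often each dimension was chosen in the 15 pairwise comparisons. The ratings and weights below are invented example data, not results from the XTT study.

```python
# Sketch of the standard weighted NASA-TLX score: sum of rating x weight
# over the six dimensions, divided by the 15 pairwise comparisons.
# All numbers here are invented for illustration.

def nasa_tlx(ratings, weights):
    assert sum(weights.values()) == 15, "15 pairwise comparisons in total"
    return sum(ratings[d] * weights[d] for d in ratings) / 15.0

ratings = {"mental": 60, "physical": 30, "temporal": 50,
           "performance": 40, "effort": 55, "frustration": 25}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx(ratings, weights))
```

The unweighted ("raw TLX") variant simply averages the six ratings; studies should state which variant they report.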

  19. Drum-mate: interaction dynamics and gestures in human-humanoid drumming experiments

    Science.gov (United States)

    Kose-Bagci, Hatice; Dautenhahn, Kerstin; Syrdal, Dag S.; Nehaniv, Chrystopher L.

    2010-06-01

    This article investigates the role of interaction kinesics in human-robot interaction (HRI). We adopted a bottom-up, synthetic approach towards interactive competencies in robots, using simple, minimal computational models underlying the robot's interaction dynamics. We present two empirical, exploratory studies investigating a drumming experience with a humanoid robot (KASPAR) and a human. In the first experiment, the turn-taking behaviour of the humanoid is deterministic and the robot's non-verbal gestures accompany its drumming, to assess the impact of non-verbal gestures on the interaction. The second experiment studies a computational framework that facilitates emergent turn-taking dynamics, whereby the particular dynamics of turn-taking emerge from the social interaction between the human and the humanoid. The results from the HRI experiments are presented and analysed qualitatively (in terms of the participants' subjective experiences) and quantitatively (concerning the drumming performance of the human-robot pair). The results point to a trade-off between the subjective evaluation of the drumming experience from the perspective of the participants and the objective evaluation of the drumming performance. A certain number of gestures was preferred as a motivational factor in the interaction. The participants preferred the models underlying the robot's turn-taking which enable the robot and human to interact more and provide turn-taking closer to 'natural' human-human conversations, despite differences in objective measures of drumming behaviour. The results are consistent with the temporal behaviour matching hypothesis previously proposed in the literature, which concerns how participants adapt their own interaction dynamics to the robot's.
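Emergent turn-taking of the kind contrasted with the deterministic condition can be sketched with a minimal rule: yield the turn when partner activity is detected, and take it back after a stretch of partner silence. This is a toy model for illustration only, not the KASPAR framework; the silence threshold is an assumption.

```python
# Toy emergent turn-taking: the robot yields when the partner is active
# and reclaims the turn after `silence_to_take_turn` silent steps.
# The input is a per-step boolean stream of detected partner activity.

def turn_taking(partner_active_stream, silence_to_take_turn=2):
    turns, silent_for, robot_turn = [], 0, True
    for partner_active in partner_active_stream:
        if partner_active:
            robot_turn, silent_for = False, 0   # yield to the human
        else:
            silent_for += 1
            if silent_for >= silence_to_take_turn:
                robot_turn = True               # take the turn back
        turns.append("robot" if robot_turn else "human")
    return turns

print(turn_taking([False, True, True, False, False, False]))
# ['robot', 'human', 'human', 'human', 'robot', 'robot']
```

Because the turn boundaries depend on the human's behaviour rather than a fixed schedule, the resulting rhythm differs between partners, which is the sense in which the dynamics "emerge" from the interaction.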

  20. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Juan M. Gandarias

    2018-02-01

    Full Text Available The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility conditions are often reduced due to the presence of dust, mud, or smoke, to distinguish human limbs from other objects with similar shapes. Here, the integration of a tactile sensor in adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against a rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network is proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers yield a higher number of non-null tactels (up to 37% more), with a lower mean of pressure values (up to 72% less), than a rigid sensor, giving a softer grip, which is needed in physical human–robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though the human/non-human classification gave better results (98.78%) with a rigid sensor.
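The two-level output described above can be sketched as a classifier whose first level predicts an object class and whose second level collapses classes into a human/non-human decision. The class list and scores below are invented placeholders standing in for the trained DCNN's softmax output.

```python
# Sketch of a two-level output head: level 1 picks the most likely
# object class, level 2 maps that class to a human/non-human flag.
# Class names and scores are invented; a real system would take the
# softmax output of the tactile DCNN here.

HUMAN_CLASSES = {"hand", "arm", "foot"}  # assumed human-limb classes

def two_level_output(class_scores):
    object_type = max(class_scores, key=class_scores.get)  # level 1
    is_human = object_type in HUMAN_CLASSES                # level 2
    return object_type, is_human

scores = {"hand": 0.72, "pipe": 0.18, "bottle": 0.10}
print(two_level_output(scores))  # ('hand', True)
```

Deriving the binary decision from the class prediction (rather than training it independently) keeps the two outputs consistent, though a dedicated binary head can score higher on the human/non-human task alone, as the abstract's rigid-sensor result suggests.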

  1. Interactive robots in experimental biology.

    Science.gov (United States)

    Krause, Jens; Winfield, Alan F T; Deneubourg, Jean-Louis

    2011-07-01

    Interactive robots have the potential to revolutionise the study of social behaviour because they provide several methodological advances. In interactions with live animals, the behaviour of robots can be standardised, morphology and behaviour can be decoupled (so that different morphologies and behavioural strategies can be combined), behaviour can be manipulated in complex interaction sequences and models of behaviour can be embodied by the robot and thereby be tested. Furthermore, robots can be used as demonstrators in experiments on social learning. As we discuss here, the opportunities that robots create for new experimental approaches have far-reaching consequences for research in fields such as mate choice, cooperation, social learning, personality studies and collective behaviour. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    Science.gov (United States)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  3. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed

    Directory of Open Access Journals (Sweden)

    Anne Reuten

    2018-05-01

    Full Text Available Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (the uncanny valley) and of comparable emotional responses between humans and robots (the media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces expressing a variety of basic emotions. The robotic faces varied along the dimension of human likeness, from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
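Pupil responses of this kind are typically reported as dilation relative to a pre-stimulus baseline. A minimal sketch of subtractive baseline correction on an invented pupil trace (arbitrary units; the window length is an assumption, not a parameter from this study):

```python
# Sketch: subtractive baseline correction of a pupil-size trace.
# The mean of the pre-stimulus samples is subtracted from the
# stimulus-period samples; the peak of the result is the dilation.
# Trace values are invented placeholders in arbitrary units.

def baseline_corrected(trace, baseline_samples):
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return [p - baseline for p in trace[baseline_samples:]]

trace = [3.0, 3.0, 3.0, 3.2, 3.5, 3.4]  # baseline, then stimulus period
response = baseline_corrected(trace, 3)
print(max(response))  # 0.5 -> peak dilation relative to baseline
```

Divisive correction (percent change from baseline) is a common alternative; which one a study uses matters when comparing dilation magnitudes across conditions.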

  4. A Decentralized Interactive Architecture for Aerial and Ground Mobile Robots Cooperation

    OpenAIRE

    Harik, El Houssein Chouaib; Guérin, François; Guinand, Frédéric; Brethé, Jean-François; Pelvillain, Hervé

    2014-01-01

    This paper presents a novel decentralized interactive architecture for aerial and ground mobile robots cooperation. The aerial mobile robot is used to provide a global coverage during an area inspection, while the ground mobile robot is used to provide a local coverage of ground features. We include a human-in-the-loop to provide waypoints for the ground mobile robot to progress safely in the inspected area. The aerial mobile robot follows continuously the ground mobi...

  5. Multimodal human-machine interaction for service robots in home-care environments

    OpenAIRE

    Goetze, Stefan; Fischer, S.; Moritz, Niko; Appell, Jens-E.; Wallhoff, Frank

    2012-01-01

    This contribution focuses on multimodal interaction techniques for a mobile communication and assistance system on a robot platform. The system comprises of acoustic, visual and haptic input modalities. Feedback is given to the user by a graphical user interface and a speech synthesis system. By this, multimodal and natural communication with the robot system is possible.

  6. Social Moments: A Perspective on Interaction for Social Robotics

    OpenAIRE

    Durantin, Gautier; Heath, Scott; Wiles, Janet

    2017-01-01

    During a social interaction, events that happen at different timescales can indicate social meanings. In order to socially engage with humans, robots will need to be able to comprehend and manipulate the social meanings that are associated with these events. We define social moments as events that occur within a social interaction and which can signify a pragmatic or semantic meaning. A challenge for social robots is recognizing social moments that occur on short timescales, which can be on t...

  7. Presence of Life-Like Robot Expressions Influences Children’s Enjoyment of Human-Robot Interactions in the Field

    NARCIS (Netherlands)

    Cameron, David; Fernando, Samuel; Collins, Emily; Millings, Abigail; Moore, Roger; Sharkey, Amanda; Evers, Vanessa; Prescott, Tony

    Emotions, and emotional expression, have a broad influence on the interactions we have with others and are thus a key factor to consider in developing social robots. As part of a collaborative EU project, this study examined the impact of lifelike affective facial expressions, in the humanoid robot

  8. Human-Robot Interaction: Intention Recognition and Mutual Entrainment

    Science.gov (United States)

    2012-08-18

    Only text fragments are indexed for this record: the cited passages discuss modeling the human arm with linear, low-pass-filter-type transfer functions, the intensively studied coupled dynamics in physical human-robot interaction (pHRI), and issues in implementing the linear inverted pendulum model (LIPM) on a mobile robot.

  9. Playte, a tangible interface for engaging human-robot interaction

    DEFF Research Database (Denmark)

    Christensen, David Johan; Fogh, Rune; Lund, Henrik Hautop

    2014-01-01

    This paper describes a tangible interface, Playte, designed for children animating interactive robots. The system supports physical manipulation of behaviors represented by LEGO bricks and allows the user to record and train their own new behaviors. Our objective is to explore several modes of in...

  10. Affective and Behavioral Responses to Robot-Initiated Social Touch : Toward Understanding the Opportunities and Limitations of Physical Contact in Human–Robot Interaction

    NARCIS (Netherlands)

    Willemse, Christian J. A. M.; Toet, Alexander; van Erp, Jan B. F.

    2017-01-01

    Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human-robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for

  11. Robotic Reconnaissance Missions to Small Bodies and Their Potential Contributions to Human Exploration

    Science.gov (United States)

    Abell, P. A.; Rivkin, A. S.

    2015-01-01

    Introduction: Robotic reconnaissance missions to small bodies will directly address aspects of NASA's Asteroid Initiative and will contribute to future human exploration. The NASA Asteroid Initiative is comprised of two major components: the Grand Challenge and the Asteroid Mission. The first component, the Grand Challenge, focuses on protecting Earth's population from asteroid impacts by detecting potentially hazardous objects with enough warning time to either prevent them from impacting the planet, or to implement civil defense procedures. The Asteroid Mission involves sending astronauts to study and sample a near- Earth asteroid (NEA) prior to conducting exploration missions of the Martian system, which includes Phobos and Deimos. The science and technical data obtained from robotic precursor missions that investigate the surface and interior physical characteristics of an object will help identify the pertinent physical properties that will maximize operational efficiency and reduce mission risk for both robotic assets and crew operating in close proximity to, or at the surface of, a small body. These data will help fill crucial strategic knowledge gaps (SKGs) concerning asteroid physical characteristics that are relevant for human exploration considerations at similar small body destinations. Small Body Strategic Knowledge Gaps: For the past several years NASA has been interested in identifying the key SKGs related to future human destinations. These SKGs highlight the various unknowns and/or data gaps of targets that the science and engineering communities would like to have filled in prior to committing crews to explore the Solar System. An action team from the Small Bodies Assessment Group (SBAG) was formed specifically to identify the small body SKGs under the direction of the Human Exploration and Operations Missions Directorate (HEOMD), given NASA's recent interest in NEAs and the Martian moons as potential human destinations [1]. The action team

  12. Experiments with a First Prototype of a Spatial Model of Cultural Meaning through Natural-Language Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Oliver Schürer

    2018-01-01

    Full Text Available When using assistive systems, the consideration of individual and cultural meaning is crucial for the utility and acceptance of technology. Orientation, communication and interaction are rooted in perception and therefore always happen in material space. We understand that a major problem lies in the difference between human and technical perception of space. Cultural policies are based on meanings including their spatial situation and their rich relationships. Therefore, we have developed an approach where the different perception systems share a hybrid spatial model that is generated by artificial intelligence—a joint effort by humans and assistive systems. The aim of our project is to create a spatial model of cultural meaning based on interaction between humans and robots. We define the role of humanoid robots as becoming our companions. This calls for technical systems to include still inconceivable human and cultural agendas for the perception of space. In two experiments, we tested a first prototype of the communication module that allows a humanoid to learn cultural meanings through a machine learning system. Interaction is achieved by non-verbal and natural-language communication between humanoids and test persons. This helps us to better understand how a spatial model of cultural meaning can be developed.

  13. R3D3 in the Wild: Using A Robot for Turn Management in Multi-Party Interaction with a Virtual Human

    NARCIS (Netherlands)

    Theune, Mariet; Wiltenburg, Daan; Bode, Max; Linssen, Jeroen

    R3D3 is a combination of a virtual human with a non-speaking robot capable of head gestures and emotive gaze behaviour. We use the robot to implement various turn management functions for use in multi-party interaction with R3D3, and present the results of a field study investigating their effects

  14. Singularity now: using the ventricular assist device as a model for future human-robotic physiology.

    Science.gov (United States)

    Martin, Archer K

    2016-04-01

    In our 21st-century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interaction includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this Singularity cannot be reached with current biological technologies, and that human and robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the argument that, at present, this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices (VADs).

  15. Integrated Human-Robotic Missions to the Moon and Mars: Mission Operations Design Implications

    Science.gov (United States)

    Mishkin, Andrew; Lee, Young; Korth, David; LeBlanc, Troy

    2007-01-01

    For most of the history of space exploration, human and robotic programs have been independent, and have responded to distinct requirements. The NASA Vision for Space Exploration calls for the return of humans to the Moon, and the eventual human exploration of Mars; the complexity of this range of missions will require an unprecedented use of automation and robotics in support of human crews. The challenges of human Mars missions, including roundtrip communications time delays of 6 to 40 minutes, interplanetary transit times of many months, and the need to manage lifecycle costs, will require the evolution of a new mission operations paradigm far less dependent on real-time monitoring and response by an Earthbound operations team. Robotic systems and automation will augment human capability, increase human safety by providing means to perform many tasks without requiring immediate human presence, and enable the transfer of traditional mission control tasks from the ground to crews. Developing and validating the new paradigm and its associated infrastructure may place requirements on operations design for nearer-term lunar missions. The authors, representing both the human and robotic mission operations communities, assess human lunar and Mars mission challenges, and consider how human-robot operations may be integrated to enable efficient joint operations, with the eventual emergence of a unified exploration operations culture.

  16. Evidence of Self-Directed Learning on a High School Robotics Team

    Directory of Open Access Journals (Sweden)

    Nathan R. Dolenc

    2014-12-01

    Self-directed learning is described as an individual taking the initiative to engage in a learning experience while assuming responsibility to follow through to its conclusion. Robotics competitions are examples of informal environments that can facilitate self-directed learning. This study examined how mentor involvement, student behavior, and physical workspace contributed to self-directed learning on one robotics competition team. How did mentors transfer responsibility to students? How did students respond to managing a team? Are the physical attributes of a workspace important? The mentor, student, and workplace factors captured in the research showed mentors wanting students to do the work, students assuming leadership roles, and the limited workspace having a positive effect on student productivity.

  17. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study.

    Science.gov (United States)

    Simut, Ramona E; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram

    2016-01-01

    Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task. It was also examined whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. The children's interaction with the two partners did not differ except for eye contact: participants made more eye contact with the social robot than with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child.

  18. Robotic Billiards: Understanding Humans in Order to Counter Them.

    Science.gov (United States)

    Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra

    2016-08-01

    Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.
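    The abstract's core idea — casting the game as a Markov decision process whose transition outcomes fold in a model of the opponent's likely reply — can be sketched in miniature. Everything below (states, actions, probabilities, rewards) is an invented toy for illustration, not the paper's actual model:

```python
# Toy MDP: states are the number of the robot's balls left (0 = win).
# Each action's outcome distribution is assumed to already include the
# modeled opponent's likely reply (all numbers here are invented).
# P[state][action] -> list of (probability, next_state, reward).
P = {
    2: {'safe':  [(0.9, 1, 0.0), (0.1, 2, -0.1)],
        'risky': [(0.5, 0, 1.0), (0.5, 2, -0.5)]},
    1: {'safe':  [(0.9, 0, 1.0), (0.1, 1, -0.1)],
        'risky': [(0.6, 0, 1.0), (0.4, 1, -0.5)]},
}

def value_iteration(P, gamma=0.9, iters=200):
    """Compute state values and a greedy stroke policy for the toy MDP."""
    V = {0: 0.0, 1: 0.0, 2: 0.0}
    for _ in range(iters):
        for s in (1, 2):
            V[s] = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                       for a in P[s])
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in (1, 2)}
    return V, policy

V, policy = value_iteration(P)
```

Under these invented numbers the reliable stroke dominates; an opponent model that punishes missed safeties more heavily would tip the optimized policy toward riskier strokes.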

  19. ALLIANCE: An architecture for fault tolerant multi-robot cooperation

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1995-02-01

    ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.
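    The "mathematically modeled motivations" mentioned in the abstract can be sketched as follows. This is a deliberately stripped-down illustration — the class name, rates, and threshold are our own assumptions, and ALLIANCE's full model additionally uses sensory feedback, broadcast messages between robots, and learned parameters:

```python
class Motivation:
    """Toy impatience/acquiescence motivation for one behavior set."""

    def __init__(self, impatience_rate, threshold):
        self.level = 0.0
        self.impatience_rate = impatience_rate  # growth per time step
        self.threshold = threshold              # activation threshold

    def step(self, task_done, another_robot_active):
        # Acquiescence (simplified): reset once the task no longer needs doing.
        if task_done:
            self.level = 0.0
            return False
        # Impatience grows slowly while a teammate appears to handle the task,
        # and quickly once no one does -- so a failed teammate is taken over.
        rate = self.impatience_rate * (0.2 if another_robot_active else 1.0)
        self.level += rate
        return self.level >= self.threshold  # True -> activate the behavior set

m = Motivation(impatience_rate=1.0, threshold=5.0)
# While a teammate is working on the task, motivation stays below threshold.
while_working = [m.step(False, another_robot_active=True) for _ in range(10)]
```

Because each robot runs such motivations locally, a failed teammate's task is eventually taken over without any central controller — the fault-tolerance property the abstract describes.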

  20. Building a grid-semantic map for the navigation of service robots through human–robot interaction

    Directory of Open Access Journals (Sweden)

    Cheng Zhao

    2015-11-01

    This paper presents an interactive approach to the construction of a grid-semantic map for the navigation of service robots in an indoor environment. It is based on the Robot Operating System (ROS) framework and contains four modules, namely Interactive Module, Control Module, Navigation Module and Mapping Module. Three challenging issues have been the focus during its development: (i) how human voice and robot visual information could be effectively deployed in the mapping and navigation process; (ii) how semantic names could be combined with coordinate data in an online grid-semantic map; and (iii) how a localization–evaluate–relocalization method could be used in global localization based on modified maximum particle weight of the particle swarm. A number of experiments are carried out in both simulated and real environments such as corridors and offices to verify its feasibility and performance.
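    The second issue above — binding semantic names to coordinate data in a single map — can be sketched minimally as an occupancy grid with a semantic layer. The class and method names below are our own invention, not the paper's:

```python
class GridSemanticMap:
    """Occupancy grid plus a semantic layer mapping place names to cells."""

    def __init__(self, width, height):
        self.grid = [[0] * width for _ in range(height)]  # 0 = free, 1 = occupied
        self.names = {}                                   # name -> (x, y)

    def mark_occupied(self, x, y):
        self.grid[y][x] = 1

    def label(self, name, x, y):
        # e.g. triggered by a voice command such as "this is the office"
        self.names[name.lower()] = (x, y)

    def goal_for(self, name):
        # Resolve a spoken destination to grid coordinates for the planner.
        return self.names.get(name.lower())

smap = GridSemanticMap(10, 10)
smap.label("Office", 3, 7)
```

A navigation module can then translate "go to the office" into a grid goal while the same structure serves ordinary obstacle-aware path planning.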

  1. Fable: Socially Interactive Modular Robot

    DEFF Research Database (Denmark)

    Magnússon, Arnþór; Pacheco, Moises; Moghadam, Mikael

    2013-01-01

    Modular robots have a significant potential as user-reconfigurable robotic playware, but often lack sufficient sensing for social interaction. We address this issue with the Fable modular robotic system by exploring the use of smart sensor modules that have a better ability to sense the behavior...

  2. ALLIANCE: An architecture for fault tolerant multi-robot cooperation

    International Nuclear Information System (INIS)

    Parker, L.E.

    1995-02-01

    ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup

  3. Human-robot interaction tests on a novel robot for gait assistance.

    Science.gov (United States)

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot assisting hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests on a young healthy subject were conducted. In the case of robot completely unpowered, maximum backdriving torques were found to be in the order of 10 Nm due to the robot design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic status.

  4. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that by incorporating the depth information, together with the use of modern techniques in new ways, we are able to create an
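    The cascade-of-detectors idea in this abstract — cheap tests first, so the expensive classifier only sees survivors — can be sketched as below. The individual stage tests and thresholds are invented stand-ins, not the paper's detectors:

```python
# Each stage rejects candidate point clusters; order them cheapest first.
def plausible_height(c):
    return 1.2 <= c["height_m"] <= 2.2          # cheap geometric test

def plausible_volume(c):
    return 0.05 <= c["volume_m3"] <= 1.0        # cheap geometric test

def classifier_accepts(c):
    return c["score"] > 0.5                     # stand-in for the costly stage

CASCADE = [plausible_height, plausible_volume, classifier_accepts]

def is_human(candidate):
    # all() short-circuits, so the expensive stage runs only on survivors.
    return all(stage(candidate) for stage in CASCADE)

clusters = [
    {"height_m": 1.7, "volume_m3": 0.40, "score": 0.9},  # person-like
    {"height_m": 0.6, "volume_m3": 0.20, "score": 0.9},  # chair: rejected early
]
detections = [c for c in clusters if is_human(c)]
```

The short-circuiting is the point: most clusters never reach the costly stage, which is how such cascades buy real-time performance.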

  5. Effective Human-Robot Collaborative Work for Critical Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to improve human-robot interaction (HRI) in order to enhance the capability of NASA critical missions. This research will focus on two...

  6. Mobile Manipulation, Tool Use, and Intuitive Interaction for Cognitive Service Robot Cosero

    Directory of Open Access Journals (Sweden)

    Jörg Stückler

    2016-11-01

    Cognitive service robots that shall assist persons in need in performing their activities of daily living have recently received much attention in robotics research. Such robots require a vast set of control and perception capabilities to provide useful assistance through mobile manipulation and human-robot interaction. In this article, we present hardware design, perception, and control methods for our cognitive service robot Cosero. We complement autonomous capabilities with handheld teleoperation interfaces on three levels of autonomy. The robot demonstrated various advanced skills, including the use of tools. With our robot we participated in the annual international RoboCup@Home competitions, winning them three times in a row.

  7. The ultimatum game as measurement tool for anthropomorphism in human-robot interaction

    NARCIS (Netherlands)

    Torta, E.; Dijk, van E.T.; Ruijten, P.A.M.; Cuijpers, R.H.; Herrmann, G.; Pearson, M.J.; Lenz, A.; et al.

    2013-01-01

    Anthropomorphism is the tendency to attribute human characteristics to non–human entities. This paper presents exploratory work to evaluate how human responses during the ultimatum game vary according to the level of anthropomorphism of the opponent, which was either a human, a humanoid robot or a

  8. Communication of Robot Status to Improve Human-Robot Collaboration

    Data.gov (United States)

    National Aeronautics and Space Administration — Future space exploration will require humans and robots to collaborate to perform all the necessary tasks. Current robots mostly operate separately from humans due...

  9. Interactive language learning by robots: the transition from babbling to word forms.

    Science.gov (United States)

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language
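    The frequency-dependent mechanism described above — consistent word forms gaining influence over variable babble — can be caricatured in a few lines. The salience threshold and the whitespace tokenization are our assumptions, not the authors' implementation:

```python
from collections import Counter

def salient_forms(utterances, threshold=3):
    """Return word forms heard often enough to be reinforced."""
    counts = Counter()
    for utterance in utterances:
        counts.update(utterance.split())  # crude stand-in for syllabification
    return {form for form, n in counts.items() if n >= threshold}

# Content words recur with a consistent canonical form; function words
# and variable pronunciations fragment into low-frequency tokens.
heard = ["look ball", "red ball", "a ball there", "the dog", "ball"]
```

Only the consistently repeated form crosses the threshold, mirroring the abstract's observation that salient content words come to dominate the learner's input.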

  10. Pragmatic Frames for Teaching and Learning in Human–Robot Interaction: Review and Challenges

    Science.gov (United States)

    Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J.; Oudeyer, Pierre-Yves

    2016-01-01

    One of the big challenges in robotics today is to learn from human users that are inexperienced in interacting with robots but yet are often used to teach skills flexibly to other humans and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning–teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used in practice, but not always explicitly, basic elements of the pragmatic frames machinery. However, we also show that pragmatic frames have so far been used in a very restricted way as compared to how they are used in human–human interaction and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent of existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on the future research direction describing the relevant key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching

  11. Humans and Robots. Educational Brief.

    Science.gov (United States)

    National Aeronautics and Space Administration, Washington, DC.

    This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)

  12. Head Orientation Behavior of Users and Durations in Playful Open-Ended Interactions with an Android Robot

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Jochum, Elizabeth Ann; Schärfe, Henrik

    2016-01-01

    This paper presents the results of a field experiment focused on the head orientation behavior of users in short-term dyadic interactions with an android (male) robot in a playful context, as well as on the duration of the interactions. The robotic trials took place in an art exhibition where...... subjects. The findings suggest that androids have the ability to maintain the focus of attention during short-term interactions within a playful context. This study provides insight into how users communicate with an android robot, and how to design meaningful human robot social interaction for real...

  13. Robots as Confederates

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    2016-01-01

    This paper addresses the use of robots in experimental research for the study of human language, human interaction, and human nature. It is argued that robots make excellent confederates that can be completely controlled, yet which engage human participants in interactions that allow us to study...... numerous linguistic and psychological variables in isolation in an ecologically valid way. Robots thus combine the advantages of observational studies and of controlled experimentation....

  14. Robot, human and communication; Robotto/ningen/comyunikeshon

    Energy Technology Data Exchange (ETDEWEB)

    Suehiro, T.

    1996-04-10

    Recently, interest has been growing in robots that work and live with human beings in the same environments as the human beings themselves. Such robots require greater adaptability to their environment and greater system robustness than conventional robots. Above all, communication between humans and robots during their cooperation is becoming a new problem. Until now, cooperation between humans and industrial robots has been limited to programming. While this suits repeated execution of the same motion, the applicable work was limited to comparatively simple factory tasks, and it was difficult to partially change the task content or apply the robot to other work. Furthermore, for remote-controlled intelligent work robots, represented by critical-work robots, human-robot cooperation is conducted through operation from a remote location. In this paper, communication for robots living with human beings is examined. 17 refs., 1 fig.

  15. Designing human-robot collaborations in industry 4.0: explorative case studies

    DEFF Research Database (Denmark)

    Kadir, Bzhwen A; Broberg, Ole; Souza da Conceição, Carolina

    2018-01-01

    We are experiencing an increase in human-robot interactions and the use of collaborative robots (cobots) in industrial work systems. To make full use of cobots, it is essential to understand emerging challenges and opportunities. In this paper, we analyse three successful industrial case studies...... of cobots’ implementation. We highlight the top three challenges and opportunities from the empirical evidence, relate them to the currently available literature on the topic, and use them to identify key design factors to consider when designing industrial work systems with human-robot collaboration....

  16. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  17. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  18. A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot.

    Science.gov (United States)

    Kim, Min Su; Lee, Jae Geun; Kang, Soon Ju

    2016-06-08

    In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robots "Pet-Bot". To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, which means that the robot self-identifies the owner's approach on its own. Second, Pet-Bot's activity is synchronized with the owner's behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but meets a situation where it cannot make a decision, or the owner wants to stop the Pet-Bot synchronization mode to relax). In this case, we have adopted a gesture recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real-time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify their motion recognition of and synchronization with humans in real-time. The results showed a guaranteed level of accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot's control performance when following or leading a human in real-time.
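    The gesture-recognition step — classifying a short window of 3-D accelerometer samples from the wearable — can be sketched with a simple total-variation feature. The feature, thresholds, and labels are illustrative assumptions; the paper's recognizer is not specified here:

```python
import math

def classify_gesture(samples, shake_threshold=4.0):
    """samples: list of (ax, ay, az) tuples from the 3-D accelerometer.
    Sums sample-to-sample movement; large totals indicate a shake gesture."""
    variation = sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))
    return "shake" if variation > shake_threshold else "still"

still = [(0.0, 0.0, 9.8)] * 20                     # device at rest (gravity only)
shake = [(0.0, 0.0, 9.8), (3.0, 0.0, 9.8)] * 10    # oscillation along the x axis
```

A real pipeline would add filtering and more gesture classes, but the windowed-feature-plus-threshold structure is the common starting point for wearable gesture input.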

  19. Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction.

    Directory of Open Access Journals (Sweden)

    Tiffany L Chen

    Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions.

  20. Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction

    Science.gov (United States)

    Chen, Tiffany L.; Bhattacharjee, Tapomayukh; McKay, J. Lucas; Borinski, Jacquelyn E.; Hackney, Madeleine E.; Ting, Lena H.; Kemp, Charles C.

    2015-01-01

    Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions. PMID:25993099
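    The admittance gain varied in this study can be illustrated with a minimal first-order admittance model (our own sketch with invented parameter values, not the authors' controller): the base velocity command is driven by the force the dancer applies at the end effectors, and a higher gain makes the robot follow more readily.

```python
def admittance_step(v, force, gain=0.8, damping=2.0, mass=10.0, dt=0.01):
    """One Euler step of: mass * dv/dt = gain * force - damping * v.
    All parameter values are illustrative assumptions."""
    dv = (gain * force - damping * v) / mass
    return v + dv * dt

v = 0.0
for _ in range(5000):            # constant 10 N push for 50 simulated seconds
    v = admittance_step(v, force=10.0)
# v approaches the steady state gain * force / damping = 4.0 m/s
```

Raising `gain` (the condition the expert dancers rated more highly) increases the velocity response to the same applied force, i.e., a lighter-feeling, more compliant follower.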

  1. The Influence of Social Interaction on the Perception of Emotional Expression: A Case Study with a Robot Head

    Science.gov (United States)

    Murray, John C.; Cañamero, Lola; Bard, Kim A.; Ross, Marina Davila; Thorsteinsson, Kate

    In this paper we focus primarily on the influence that socio-emotional interaction has on the perception of emotional expression by a robot. We also investigate and discuss the importance of emotion expression in socially interactive situations involving human robot interaction (HRI), and show the importance of utilising emotion expression when dealing with interactive robots, that are to learn and develop in socially situated environments. We discuss early expressional development and the function of emotion in communication in humans and how this can improve HRI communications. Finally we provide experimental results showing how emotion-rich interaction via emotion expression can affect the HRI process by providing additional information.

  2. Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.

    Science.gov (United States)

    Borenstein, Jason; Arkin, Ron

    2016-02-01

    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way to influence human behavior is already well-established but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.

  3. Robust Control of a Cable-Driven Soft Exoskeleton Joint for Intrinsic Human-Robot Interaction.

    Science.gov (United States)

    Jarrett, C; McDaid, A J

    2017-07-01

    A novel, cable-driven soft joint is presented for use in robotic rehabilitation exoskeletons to provide intrinsic, comfortable human-robot interaction. The torque-displacement characteristics of the soft elastomeric core contained within the joint are modeled. This knowledge is used in conjunction with a dynamic system model to derive a sliding mode controller (SMC) to implement low-level torque control of the joint. The SMC is experimentally compared with a baseline feedback-linearised proportional-derivative controller across a range of conditions and is shown to be robust to un-modeled disturbances. The torque controller is then tested with six healthy subjects while they perform a selection of activities of daily living, validating its range of performance. Finally, a case study with a participant with spastic cerebral palsy illustrates the potential of both the joint and the controller to be used in a physiotherapy setting to assist clinical populations.
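    The robustness claim can be made concrete with a generic boundary-layer sliding mode controller. The plant model, gains, and disturbance below are illustrative assumptions, not the joint model or controller derived in the paper: a 1-DOF joint tracks a sinusoidal reference while an unmodeled disturbance acts on it, and the saturated switching term keeps the tracking error small.

```python
import math

def simulate_smc(K=40.0, lam=20.0, phi=0.05, dt=1e-4, T=2.0):
    """Track a sinusoidal joint angle with a boundary-layer sliding mode law.

    Assumed plant: I*qdd = u + d(t), with inertia I and unmodeled disturbance d.
    Sliding surface: s = e_dot + lam*e. Control: equivalent term plus a
    saturated switching term -K*sat(s/phi) that rejects the disturbance
    while avoiding chattering. Returns the worst tracking error after the
    initial transient.
    """
    I = 0.1                                  # assumed joint inertia (kg m^2)
    q = qd = 0.0
    max_err, t = 0.0, 0.0
    while t < T:
        ref = 0.5 * math.sin(2.0 * t)        # desired trajectory and derivatives
        ref_d = 1.0 * math.cos(2.0 * t)
        ref_dd = -2.0 * math.sin(2.0 * t)
        e, e_d = q - ref, qd - ref_d
        s = e_d + lam * e                    # sliding variable
        sat = max(-1.0, min(1.0, s / phi))   # boundary layer instead of sign()
        u = I * (ref_dd - lam * e_d) - K * sat
        d = 0.3 * math.sin(7.0 * t)          # unmodeled disturbance torque
        qdd = (u + d) / I
        qd += qdd * dt
        q += qd * dt
        t += dt
        if t > 0.5:                          # measure after the transient
            max_err = max(max_err, abs(q - ref))
    return max_err

print(simulate_smc())  # worst-case tracking error after the transient (rad)
```

    With K chosen much larger than the disturbance bound, the sliding variable is driven into the boundary layer and the residual error scales roughly with phi·|d|/K, which is the usual trade-off between chattering and accuracy.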

  4. Designing robot embodiments for social interaction: Affordances topple realism and aesthetics

    NARCIS (Netherlands)

    Paauwe, R.A.; Hoorn, J.F.; Konijn, E.A.; Keyson, D.V.

    2015-01-01

    In the near future, human-like social robots will become indispensable for providing support in various social tasks, in particular for healthcare (e.g., assistance, coaching). The perception of realism, in particular human-like features, can help facilitate mediated social interaction. The current

  6. Bio-inspired motion planning algorithms for autonomous robots facilitating greater plasticity for security applications

    Science.gov (United States)

    Guo, Yi; Hohil, Myron; Desai, Sachi V.

    2007-10-01

    Proposed are techniques for using collaborative robots in infrastructure security applications by employing them as mobile sensor suites. A vast number of critical facilities and technologies must be protected against unauthorized intruders, and a team of mobile robots working cooperatively can relieve valuable human resources. Addressed are the technical challenges for multi-robot teams in security applications and the implementation of a multi-robot motion planning algorithm based on a patrolling and threat-response scenario. A neural-network-based methodology is exploited to plan a patrolling path with complete coverage. Also described is a proof-of-principle experimental setup with a group of Pioneer 3-AT and Centibot robots. A block diagram of the system integration of sensing and planning illustrates the robot-to-robot interaction needed to operate as a collaborative unit. The singular goal of the proposed approach is to overcome the limits of previous uses of robots in security applications, enabling systems to be deployed for autonomous operation in an unaltered environment while providing access to an all-encompassing sensor suite.
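    The paper's planner is neural-network based; as a much simpler illustration of what "a patrolling path with complete coverage" means, the sketch below (all names and the obstacle-handling policy are assumptions, not the paper's method) generates a lawnmower-style path that visits every free cell of a grid exactly once.

```python
def boustrophedon_coverage(rows, cols, obstacles=frozenset()):
    """Generate a lawnmower-style complete-coverage patrol path on a grid.

    A baseline for the complete-coverage patrolling idea, not the
    neural-network planner described in the paper. Obstacle cells are
    simply skipped; a real planner would detour around them instead.
    """
    path = []
    for r in range(rows):
        # Alternate sweep direction on each row (boustrophedon pattern).
        cells = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cells:
            if (r, c) not in obstacles:
                path.append((r, c))
    return path

path = boustrophedon_coverage(3, 4, obstacles={(1, 1)})
print(len(path))  # 11 free cells, each visited exactly once
```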

  7. Structure Assembly by a Heterogeneous Team of Robots Using State Estimation, Generalized Joints, and Mobile Parallel Manipulators

    Science.gov (United States)

    Komendera, Erik E.; Adhikari, Shaurav; Glassner, Samantha; Kishen, Ashwin; Quartaro, Amy

    2017-01-01

    Autonomous robotic assembly by mobile field robots has seen significant advances in recent decades, yet practicality remains elusive. Identified challenges include better use of state estimation and reasoning with uncertainty, spreading tasks across specialized robots, and implementing representative joining methods. This paper proposes replacing 1) self-correcting mechanical linkages with generalized joints for improved applicability, 2) serial assembly manipulators with parallel manipulators for higher precision and stability, and 3) all-in-one robots with a heterogeneous team of specialized robots for agent simplicity. This paper then describes a general assembly algorithm utilizing state estimation. Finally, these concepts are tested in the context of solar array assembly, requiring a team of robots to assemble, bond, and deploy a set of solar panel mockups to a backbone truss to an accuracy not built into the parts. This paper presents the results of these tests.

  8. Quantifying the human-robot interaction forces between a lower limb exoskeleton and healthy users.

    Science.gov (United States)

    Rathore, Ashish; Wilcox, Matthew; Ramirez, Dafne Zuleima Morgado; Loureiro, Rui; Carlson, Tom

    2016-08-01

    To counter the many disadvantages of prolonged wheelchair use, patients with spinal cord injuries (SCI) are beginning to turn towards robotic exoskeletons. However, we are currently unaware of the magnitude and distribution of forces acting between the user and the exoskeleton. This is a critical issue, as SCI patients have an increased susceptibility to skin lesions and pressure ulcer development. Therefore, we developed a real-time force measuring apparatus, which was placed at the physical human-robot interface (pHRI) of a lower limb robotic exoskeleton. Experiments captured the dynamics of these interaction forces whilst the participants performed a range of typical stepping actions. Our results indicate that peak forces occurred at the anterior aspect of both the left and right legs, areas that are particularly prone to pressure ulcer development. A significant difference was also found between the average force experienced at the anterior and posterior sensors of the right thigh during the swing phase for different movement primitives. These results call for the integration of instrumented straps as standard in lower limb exoskeletons. They also highlight the potential of such straps to be used as an alternative/complementary interface for the high-level control of lower limb exoskeletons in some patient groups.

  9. Robots and humans: synergy in planetary exploration

    Science.gov (United States)

    Landis, Geoffrey A.

    2004-01-01

    How will humans and robots cooperate in future planetary exploration? Are humans and robots fundamentally separate modes of exploration, or can humans and robots work together to synergistically explore the solar system? It is proposed that humans and robots can work together in exploring the planets by use of telerobotic operation to expand the function and usefulness of human explorers, and to extend the range of human exploration to hostile environments. Published by Elsevier Ltd.

  10. Special Issue on Intelligent Robots

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2013-08-01

    The research on intelligent robots will produce robots that are able to operate in everyday life environments, to adapt their program according to environment changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data—such as vision, laser, microphone—in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot action. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.

  11. Physiological and subjective evaluation of a human-robot object hand-over task.

    Science.gov (United States)

    Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël

    2011-11-01

    In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer that explicitly takes into account the legibility, safety and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate it on a three-dimensional subjective scale. The subjective data were in favor of our reference motion. Moreover, the three motions elicited different physiological and ocular responses that could be used to partially discriminate between them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.

  12. Humanoid Robots and Human Society

    OpenAIRE

    Bahishti, Adam A

    2017-01-01

    Almost every aspect of modern human life, from the smartphone to the smart house you live in, has been influenced by science and technology. The field has advanced throughout the last few decades. Among those advancements, robots have become significant, managing many of our day-to-day tasks and drawing ever closer to human lives. As robotics and autonomous systems flourish, human-robot relationships are becoming increasingly important. Recently humanoid ro...

  13. Mini AERCam Inspection Robot for Human Space Missions

    Science.gov (United States)

    Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.

    2004-01-01

    The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.

  14. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    Science.gov (United States)

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  15. Specific Human Capital as a Source of Superior Team Performance

    OpenAIRE

    Egon Franck; Stephan Nüesch; Jan Pieper

    2009-01-01

    In this paper, we empirically investigate the performance effect of team-specific human capital in highly interactive teams. Based on the tenets of the resource-based view of the firm and on the ideas of typical learning functions, we hypothesize that team members’ shared experience in working together positively impacts team performance, but at diminishing rates. Holding a team’s stock of general human capital and other potential drivers constant, we find support for this prediction. Implica...

  16. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.

    Science.gov (United States)

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which

  17. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction

    Science.gov (United States)

    2014-07-01

    however, 27 of these articles had insufficient information to calculate effect size. Authors were contacted via email and were given 5 weeks to... (Residual keywords from the report's factor taxonomy: multitasking, personality, robot personality, communication mode, states, team collaboration, fatigue, capability, in-group membership, stress.)

  18. Human futures amongst robot teachers?

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Bhroin, Niamh Ni; Ess, Charles Melvin

    2017-01-01

    In 2009 the world’s first robot teacher, Saya, was introduced into a classroom. Saya could express six basic emotions and shout orders like 'be quiet'. Since 2009, instructional robot technologies have emerged around the world and it is estimated that robot teachers may become a regular...... technological feature in the classroom and even 'take over' from human teachers within the next ten to fifteen years.   The paper set out to examine some of the possible ethical implications for human futures in relation to the immanent rise of robot teachers. This is done through combining perspectives...... on technology coming from design, science and technology, education, and philosophy (McCarthy & Wright, 2004; Jasanoff, 2016; Selwyn 2016; Verbeek, 2011). The framework calls attention to how particular robot teachers institute certain educational, experiential and existential terrains within which human...

  19. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    Science.gov (United States)

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains as yet unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.

  20. Pilot Study of Person Robot Interaction in a Public Transit Space

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Maler, Ouri

    2009-01-01

    This paper describes a study of the effect of a human interactive robot placed in an urban transit space. The underlying hypothesis is that it is possible to create interesting new living spaces and induce value in terms of experiences, information or economics, by putting socially interactive...... showed harder than expected to start interaction with commuters due to their determination and speed towards their goal. Further it was demonstrated that it was possible to track and follow people, who were not beforehand informed about the experiment. The evaluation indicated that the distance...... to initiate interaction was shorter than would be expected for normal human to human interaction....

  1. How does a surgeon’s brain buzz? An EEG coherence study on the interaction between humans and robot

    Science.gov (United States)

    2013-01-01

    Introduction: In humans, both primary and non-primary motor areas are involved in the control of voluntary movements. However, the dynamics of functional coupling among different motor areas have not yet been fully clarified. There is to date no research examining the functional dynamics in the brains of surgeons working in laparoscopy compared with those trained and working in robotic surgery. Experimental procedures: We enrolled 16 right-handed trained surgeons and assessed changes in intra- and inter-hemispheric EEG coherence with a 32-channel device during the same motor task with either a robotic or a laparoscopic approach. Estimates of auto- and coherence spectra were calculated by a fast Fourier transform algorithm implemented in Matlab 5.3. Results: We found increased coherence in surgeons performing laparoscopy, especially in theta and lower alpha activity, in all experimental conditions (M1 vs. SMA, S1 vs. SMA, S1 vs. pre-SMA and M1 vs. S1; p with different approaches. To the best of our knowledge, this is the first study to assess semi-quantitative differences during the interaction between the normal human brain and robotic devices. PMID:23607324
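    The coherence estimate described here (auto- and cross-spectra averaged over FFT segments) can be sketched with NumPy. The segment length, window, and synthetic alpha-band signals below are illustrative assumptions, not the study's actual parameters; the point is that two channels sharing a 10 Hz component show high magnitude-squared coherence at that frequency and low coherence elsewhere.

```python
import numpy as np

def msc(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch-style averaging (NumPy only).

    The coherence of a single segment is identically 1, so cross- and
    auto-spectra must be averaged over several segments before dividing.
    """
    win = np.hanning(nperseg)
    segs = len(x) // nperseg
    Pxx = Pyy = Pxy = 0.0
    for k in range(segs):
        xs = x[k*nperseg:(k+1)*nperseg] * win
        ys = y[k*nperseg:(k+1)*nperseg] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx = Pxx + np.abs(X)**2          # averaged auto-spectrum of x
        Pyy = Pyy + np.abs(Y)**2          # averaged auto-spectrum of y
        Pxy = Pxy + X * np.conj(Y)        # averaged cross-spectrum
    f = np.fft.rfftfreq(nperseg, 1.0/fs)
    return f, np.abs(Pxy)**2 / (Pxx * Pyy)

# Two channels sharing a 10 Hz (lower-alpha) component plus independent noise.
rng = np.random.default_rng(0)
fs, n = 256, 256 * 32
t = np.arange(n) / fs
shared = np.sin(2*np.pi*10*t)
ch1 = shared + 0.5 * rng.standard_normal(n)
ch2 = shared + 0.5 * rng.standard_normal(n)
f, C = msc(ch1, ch2, fs)
print(C[np.argmin(np.abs(f - 10))] > 0.9)  # high coherence at the shared 10 Hz
```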

  2. Next Generation Simulation Framework for Robotic and Human Space Missions

    Science.gov (United States)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  3. Region of eye contact of humanoid Nao robot is similar to that of a human

    NARCIS (Netherlands)

    Cuijpers, R.H.; Pol, van der D.; Herrmann, G.; Pearson, M.J.; Lenz, A.; Bremner, P.; Spiers, A.; Leonards, U.

    2013-01-01

    Eye contact is an important social cue in human-human interaction, but it is unclear how easily it carries over to humanoid robots. In this study we investigated whether the tolerance of making eye contact is similar for the Nao robot as compared to human lookers. We measured the region of eye

  4. Child, Robot and Educational Material : A Triadic Interaction

    NARCIS (Netherlands)

    Davison, Daniel Patrick

    The process in which a child and a robot work together to solve a learning task can be characterised as a triadic interaction. Interactions between the child and robot; the child and learning materials; and the robot and learning materials will each shape the perception and appreciation the child

  6. Progress in EEG-Based Brain Robot Interaction Systems

    Directory of Open Access Journals (Sweden)

    Xiaoqian Mao

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram (EEG)-based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients with daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since control of robot systems via brainwaves must consider surrounding environment feedback in real time, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques.
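    The decoding chain the review describes (preprocessing, feature extraction, feature classification) can be sketched end-to-end. The band definitions, the nearest-centroid classifier, and the synthetic rest/task data below are illustrative stand-ins for the many methods the article surveys, not a real BCI.

```python
import numpy as np

def bandpower(epoch, fs, band):
    """Average spectral power of one EEG epoch within a frequency band."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0/fs)
    psd = np.abs(np.fft.rfft(epoch))**2 / len(epoch)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def extract_features(epoch, fs):
    """Feature-extraction step: theta, alpha and beta band powers."""
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.array([bandpower(epoch, fs, b) for b in bands])

class NearestCentroid:
    """Minimal feature-classification step (a stand-in for LDA/SVM)."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: X[np.array(y) == c].mean(axis=0)
                           for c in self.classes_}
        return self
    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

# Synthetic demo: "rest" epochs have strong alpha, "task" epochs do not.
rng = np.random.default_rng(1)
fs, n = 128, 256
t = np.arange(n) / fs
def epoch(alpha_amp):
    return alpha_amp * np.sin(2*np.pi*10*t) + 0.5 * rng.standard_normal(n)

X = np.array([extract_features(epoch(a), fs) for a in [2.0]*10 + [0.2]*10])
y = ["rest"]*10 + ["task"]*10
clf = NearestCentroid().fit(X[:16], y[:16])
print(clf.predict(X[16:]))  # predictions for the held-out epochs
```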

  7. Fuzzy variable impedance control based on stiffness identification for human-robot cooperation

    Science.gov (United States)

    Mao, Dachao; Yang, Wenlong; Du, Zhijiang

    2017-06-01

    This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the intention of human for co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming at regulating the output fuzzy universe based on the human arm’s stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate that the proposed algorithm is superior.
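    A minimal sketch of the fuzzy-inference step is given below. The membership functions and rule outputs are assumptions for illustration only (the paper's rule base infers intention and additionally rescales the fuzzy universe from identified arm stiffness, which is not modeled here): slow hand motion maps to high damping for precise positioning, fast motion to low damping for effortless dragging, with a membership-weighted average as the defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_damping(speed):
    """Map hand speed (m/s) to a damping value via a tiny rule base.

    Rules (illustrative, not the paper's): SLOW -> HIGH damping for precise
    positioning; FAST -> LOW damping for easy dragging. Defuzzification is a
    membership-weighted average of the rule outputs.
    """
    mu_slow = tri(speed, -0.4, 0.0, 0.4)
    mu_med = tri(speed, 0.1, 0.4, 0.7)
    mu_fast = tri(speed, 0.4, 0.8, 10.0)
    outs = {40.0: mu_slow, 20.0: mu_med, 8.0: mu_fast}  # damping (N s/m)
    total = sum(outs.values()) or 1.0
    return sum(d * m for d, m in outs.items()) / total

print(fuzzy_damping(0.05), fuzzy_damping(0.6))  # slow -> stiff, fast -> light
```

    The returned damping would then parameterize the robot's admittance law, so the arm feels stiff during fine positioning and light during large co-manipulation motions.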

  8. Augmented Robotics Dialog System for Enhancing Human–Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Castro-González, Álvaro; de Gorostiza Luengo, Francisco Javier Fernandez; Salichs, Miguel Ángel

    2015-01-01

    Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. PMID:26151202

  9. Human-robot collaboration for a shared mission

    OpenAIRE

    Karami , Abir-Beatrice; Jeanpierre , Laurent; Mouaddib , Abdel-Illah

    2010-01-01

    We are interested in collaboration domains between a robot and a human partner, where the partners share a common mission without explicit communication about their plans. The decision process of the robot agent should consider the presence of its human partner. Also, the robot's planning should be flexible to the human's comfort and to all possible changes in the shared environment. To solve the problem of human-robot collaboration with no communication, we present a model th...

  10. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine operations, improvised explosive devices (IEDs), and chemical, biological, radiological, nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator's underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high-technology computer-supported applications, current robotic design practices rely heavily upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human-robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  11. Toward Multimodal Human-Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation.

    Science.gov (United States)

    Gui, Kai; Liu, Honghai; Zhang, Dingguo

    2017-11-01

    Robotic exoskeletons for physical rehabilitation have been utilized for retraining patients suffering from paraplegia and enhancing motor recovery in recent years. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns, which can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance subject's active participation during gait rehabilitation, which includes cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts brain-computer interface based on steady-state visual evoked potential. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower joint trajectories, and its state variables are regulated by cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI is tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively involved to achieve various assistive gait patterns.
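    The central pattern generator mentioned above can be illustrated with a Hopf oscillator, a common CPG building block whose stable limit cycle yields a smooth rhythmic joint trajectory. The equations and parameters here are a generic sketch, not necessarily the CPG used in the paper; the idea that cHRI/pHRI inputs would modulate the oscillator's frequency and amplitude is likewise an assumption of this sketch.

```python
import math

def hopf_cpg(mu=1.0, omega=2*math.pi, dt=0.001, T=5.0):
    """Integrate a Hopf oscillator and return its settled amplitude.

    The oscillator converges from any nonzero start to a limit cycle of
    radius sqrt(mu); the state x can drive a hip or knee angle. Frequency
    omega and amplitude mu are the natural handles for online modulation.
    """
    x, y = 0.1, 0.0            # start near the origin
    amp, t = 0.0, 0.0
    while t < T:
        r2 = x*x + y*y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
        t += dt
        if t > 3.0:            # record peaks after convergence
            amp = max(amp, abs(x))
    return amp

print(round(hopf_cpg(), 2))  # settled amplitude close to sqrt(mu) = 1.0
```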

  12. Toward a tactile language for human-robot interaction: two studies of tacton learning and performance.

    Science.gov (United States)

    Barber, Daniel J; Reinerman-Jones, Lauren E; Matthews, Gerald

    2015-05-01

    Two experiments were performed to investigate the feasibility for robot-to-human communication of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human-robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants' ability to correctly identify two-tacton phrases. We also investigated the impact of multitasking on performance and workload. Individual difference factors were assessed. Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking. However, some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Participants may be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Tactile communication may be incorporated into multi-modal communication systems for HRI. It also has potential for human-human communication in challenging environments. © 2014, Human Factors and Ergonomics Society.

  13. An Interactive Human Interface Arm Robot with the Development of Food Aid

    Directory of Open Access Journals (Sweden)

    NASHWAN D. Zaki

    2012-03-01

    Full Text Available A robotic system for disabled people who need support at mealtimes is proposed. A feature of this system is that the robotic aid can communicate with the operator using speech recognition and speech synthesis functions. Another feature is that it uses image processing, which lets the system recognize the environmental situation of the dishes, cups and so on. Thanks to this image-processing function, the operator does not need to specify the position and posture of the dishes and target objects. Furthermore, combining speech and image processing enables friendly man-machine communication, since speech and visual information are essential in human communication.

  14. Human-machine Interface for Presentation Robot

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Ondroušek, V.

    2012-01-01

    Roč. 6, č. 2 (2012), s. 17-21 ISSN 1897-8649 Institutional research plan: CEZ:AV0Z20760514 Keywords : human-robot interface * mobile robot * presentation robot Subject RIV: JD - Computer Applications, Robotics

  15. A remote lab for experiments with a team of mobile robots.

    Science.gov (United States)

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-09-04

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab.
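
The kind of user-defined control law such a lab accepts can be illustrated with a minimal go-to-goal controller for a unicycle-model robot. The sketch below is in Python rather than the lab's Matlab, and the gains and time step are invented for illustration, not taken from the facility:

```python
import math

def go_to_goal(pose, goal, k_v=0.5, k_w=1.5):
    """Unicycle go-to-goal law: returns (linear, angular) velocity commands.

    pose = (x, y, theta); goal = (gx, gy). Gains k_v, k_w are
    illustrative values, not those used in the cited lab.
    """
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha

def step(pose, v, w, dt=0.1):
    """Integrate the unicycle model one time step (Euler)."""
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(200):
    v, w = go_to_goal(pose, (1.0, 1.0))
    pose = step(pose, v, w)
print(round(pose[0], 2), round(pose[1], 2))
```

Running the loop drives the simulated robot from the origin to the goal at (1, 1); swapping in a different `go_to_goal` is the kind of experiment the lab supports.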

  16. A Remote Lab for Experiments with a Team of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Marco Casini

    2014-09-01

    Full Text Available In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab.

  17. Advances in Robotics and Virtual Reality

    CERN Document Server

    Hassanien, Aboul

    2012-01-01

    Reaching beyond human knowledge and reach, robotics is strongly involved in tackling the challenges of newly emerging multidisciplinary fields. Together with humans, robots are busy exploring and working on new generations of ideas and problems whose solutions would otherwise be impossible to find. The future is near when robots will sense, smell and touch people and their lives. Behind this practical aspect of human-robotics lies half a century of robotics research, which has transformed robotics into a modern science. Advances in Robotics and Virtual Reality is a compilation of emerging application areas of robotics. The book covers robotics' role in medicine and space exploration, and also explains the role of virtual reality as a non-destructive test bed, which constitutes a premise of further advances towards new challenges in robotics. This book, edited by two famous scientists with the support of an outstanding team of fifteen authors, is a well-suited reference for robotics researchers and scholars from related ...

  18. Interaction dynamics of multiple mobile robots with simple navigation strategies

    Science.gov (United States)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulations studies of two or more interacting robots.
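
The homing and collision-avoidance strategies described above can be sketched under the paper's particle model: each robot heads toward its target unless another robot lies inside its visibility cone, in which case it steers directly away. The cone half-angle, avoidance radius, and switching rule below are illustrative assumptions, not taken from the paper:

```python
import math

def navigate(pos, heading, target, others, r_avoid=1.0, half_cone=math.pi / 3):
    """One navigation decision combining homing with collision avoidance.

    Returns a desired heading. The robot homes toward `target` unless
    another robot lies inside its visibility cone (half-angle `half_cone`)
    and closer than `r_avoid`; then it steers directly away from it.
    """
    for ox, oy in others:
        dx, dy = ox - pos[0], oy - pos[1]
        dist = math.hypot(dx, dy)
        rel = math.atan2(dy, dx) - heading
        bearing = abs(math.atan2(math.sin(rel), math.cos(rel)))  # wrap angle
        if dist < r_avoid and bearing < half_cone:
            return math.atan2(-dy, -dx)        # collision avoidance
    return math.atan2(target[1] - pos[1], target[0] - pos[0])  # homing

# Intruder dead ahead inside the visibility cone: the robot turns away.
print(round(navigate((0.0, 0.0), 0.0, (5.0, 0.0), [(0.5, 0.1)]), 2))
```

With the intruder removed or outside the cone, the same call simply returns the homing direction toward the target.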

  19. NASA's Ant-Inspired Swarmie Robots

    Science.gov (United States)

    Leucht, Kurt W.

    2016-01-01

    As humans push further beyond the grasp of earth, robotic missions in advance of human missions will play an increasingly important role. These robotic systems will find and retrieve valuable resources as part of an in-situ resource utilization (ISRU) strategy. They will need to be highly autonomous while maintaining high task performance levels. NASA Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots to be used as a ground-based research platform for ISRU missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in a previously unmapped environment and return those resources to a central site. This talk will guide the audience through the Swarmie robot project from its conception by students in a New Mexico research lab to its robot trials in an outdoor parking lot at NASA. The software technologies and techniques used on the project will be discussed, as well as various challenges and solutions that were encountered by the development team along the way.
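
The ant-style central-place foraging strategy can be sketched as a two-state loop: search by random walk until a resource is found, then carry it back to the central site. This is a hypothetical grid-world simplification for illustration, not NASA's Swarmie code:

```python
import random

SEARCH, RETURN = "search", "return"

class Forager:
    """Toy central-place forager: search for a resource, return it home."""

    def __init__(self, home=(0, 0), seed=0):
        self.pos = home
        self.home = home
        self.state = SEARCH
        self.collected = 0
        self.rng = random.Random(seed)   # seeded for reproducibility

    def step(self, resources):
        if self.state == SEARCH:
            if self.pos in resources:    # found a resource: pick it up
                resources.discard(self.pos)
                self.state = RETURN
            else:                        # random-walk one grid cell
                dx, dy = self.rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                self.pos = (self.pos[0] + dx, self.pos[1] + dy)
        else:                            # RETURN: move one cell toward home
            x, y = self.pos
            hx, hy = self.home
            self.pos = (x + (hx > x) - (hx < x), y + (hy > y) - (hy < y))
            if self.pos == self.home:    # drop off at the central site
                self.collected += 1
                self.state = SEARCH

robot = Forager()
resources = {(x, y) for x in range(-3, 4) for y in range(-3, 4)} - {(0, 0)}
for _ in range(500):
    robot.step(resources)
print(robot.collected)
```

A real swarm adds pheromone-like recruitment and site fidelity on top of this loop; the sketch only captures the basic search/return cycle.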

  20. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    Science.gov (United States)

    2016-07-27

    Final report, 17-Sep-2013 to 16-Sep-2014; distribution unlimited. Brain Computer Interfaces (BCIs) show great potential in allowing humans to interact with computational environments in a ...

  1. Fundamentals of soft robot locomotion.

    Science.gov (United States)

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  2. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Science.gov (United States)

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under

  3. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Directory of Open Access Journals (Sweden)

    Eva Wiese

    2017-10-01

    Full Text Available Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe

  4. Designing Emotionally Expressive Robots

    DEFF Research Database (Denmark)

    Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna

    2017-01-01

    Socially assistive agents, be it virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid...... robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper -a highly human-like robot and Hobbit – a robot...... for robots....

  5. Detecting Biological Motion for Human–Robot Interaction: A Link between Perception and Action

    Directory of Open Access Journals (Sweden)

    Alessia Vignolo

    2017-06-01

    Full Text Available One of the fundamental skills supporting safe and comfortable interaction between humans is their capability to understand intuitively each other’s actions and intentions. At the basis of this ability is a special-purpose visual processing that the human brain has developed to comprehend human motion. Among the first “building blocks” enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill mastered by human babies in the first days of their life. In this paper, we present a computational model based on the assumption that such visual ability must be based on local low-level visual motion features, which are independent of shape, such as the configuration of the body and perspective. Moreover, we implement it on the humanoid robot iCub, embedding it into a software architecture that leverages the regularities of biological motion also to control robot attention and oculomotor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.

  6. Reconnaissance and Autonomy for Small Robots (RASR) team: MAGIC 2010 challenge

    Science.gov (United States)

    Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark; Corley, Katrina

    2012-06-01

    The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups of unmanned ground vehicles (UGVs) that can execute a variety of military relevant missions in dynamic urban environments. Historically, UGV operations have been primarily performed via tele-operation, requiring at least one dedicated operator per robot, and requiring substantial real-time bandwidth to accomplish those missions. Our team goal was to develop a system that can provide long-term value to the war-fighter, utilizing MAGIC-2010 as a stepping stone. To that end, we self-imposed a set of constraints that would force us to develop technology that could readily be used by the military in the near term:

    • Use a relevant (deployed) platform
    • Use low-cost, reliable sensors
    • Develop an expandable and modular control system with innovative software algorithms to minimize the computing footprint required
    • Minimize required communications bandwidth and handle communication losses
    • Minimize additional power requirements to maximize battery life and mission duration

  7. Human Assisted Robotic Vehicle Studies - A conceptual end-to-end mission architecture

    Science.gov (United States)

    Lehner, B. A. E.; Mazzotta, D. G.; Teeney, L.; Spina, F.; Filosa, A.; Pou, A. Canals; Schlechten, J.; Campbell, S.; Soriano, P. López

    2017-11-01

    With current space exploration roadmaps indicating the Moon as a proving ground on the way to human exploration of Mars, it is clear that human-robotic partnerships will play a key role for successful future human space missions. This paper details a conceptual end-to-end architecture for an exploration mission in cis-lunar space with a focus on human-robot interactions, called Human Assisted Robotic Vehicle Studies (HARVeSt). HARVeSt will build on knowledge of plant growth in space gained from experiments on-board the ISS and test the first growth of plants on the Moon. A planned deep space habitat will be utilised as the base of operations for human-robotic elements of the mission. The mission will serve as a technology demonstrator not only for autonomous tele-operations in cis-lunar space but also for key enabling technologies for future human surface missions. The successful approach of the ISS will be built on in this mission with international cooperation. Mission assets such as a modular rover will allow for an extendable mission and to scout and prepare the area for the start of an international Moon Village.

  8. Avoiding Local Optima with Interactive Evolutionary Robotics

    Science.gov (United States)

    2012-07-09

    The main bottleneck in evolutionary robotics has traditionally been the time required to evolve robot controllers. However, with the continued acceleration in computational resources, the ... the top of a flight of stairs selects for climbing; suspending the robot and the target object above the ground and creating rungs between the two will ...

  9. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study

    Science.gov (United States)

    Simut, Ramona E.; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram

    2016-01-01

    Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated if the children's interaction with a human differed from the interaction with a social robot during a play task. Also, it was examined if…

  10. Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults.

    Science.gov (United States)

    Louie, Wing-Yue Geoffrey; McColl, Derek; Nejat, Goldie

    2014-01-01

    Recent studies have shown that cognitive and social interventions are crucial to the overall health of older adults including their psychological, cognitive, and physical well-being. However, due to the rapidly growing elderly population of the world, the resources and people to provide these interventions are lacking. Our work focuses on the use of social robotic technologies to provide person-centered cognitive interventions. In this article, we investigate the acceptance and attitudes of older adults toward the human-like expressive socially assistive robot Brian 2.1 in order to determine if the robot's human-like assistive and social characteristics would promote the use of the robot as a cognitive and social interaction tool to aid with activities of daily living. The results of a robot acceptance questionnaire administered during a robot demonstration session with a group of 46 elderly adults showed that the majority of the individuals had positive attitudes toward the socially assistive robot and its intended applications.

  11. Exploring multimodal robotic interaction through storytelling for aphasics

    NARCIS (Netherlands)

    Mubin, O.; Al Mahmud, A.; Abuelma'atti, O.; England, D.

    2008-01-01

    In this poster, we propose the design of a multimodal robotic interaction mechanism that is intended to be used by aphasics for storytelling. Through limited physical interaction, mild to moderate aphasic people can interact with a robot that may help them to be more active in their day-to-day life.

  12. Application of Human-Autonomy Teaming (HAT) Patterns to Reduce Crew Operations (RCO)

    Science.gov (United States)

    Shively, R. Jay; Brandt, Summer L.; Lachter, Joel; Matessa, Mike; Sadler, Garrett; Battiste, Henri

    2016-01-01

    Unmanned aerial systems, robotics, advanced cockpits, and air traffic management are all examples of domains that are seeing dramatic increases in automation. While automation may take on some tasks previously performed by humans, humans will still be required, for the foreseeable future, to remain in the system. Collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates, rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) in order to optimize these systems in the future. One methodology for understanding HAT is to identify recurring patterns of HAT that have similar characteristics and solutions. This paper applies a methodology for identifying HAT patterns to an advanced cockpit project.

  13. A taxonomy for user-healthcare robot interaction.

    Science.gov (United States)

    Bzura, Conrad; Im, Hosung; Liu, Tammy; Malehorn, Kevin; Padir, Taskin; Tulu, Bengisu

    2012-01-01

    This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and user acceptance of robotics. Characterization of the user, or in this case the patient, is a primary focus of the paper, as patients present a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow, and thus it is important that the emerging relationship between robot and patient is well understood.

  14. Visual and tactile interfaces for bi-directional human robot communication

    Science.gov (United States)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human to robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the requirement of line-of-sight needed by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot to human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provides an integration example including two robotic platforms.
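
IMU-based gesture classification of the kind described above can be sketched, in heavily simplified form, as feature extraction over a motion trace followed by a nearest-centroid classifier. The features, gesture labels, and training traces below are invented stand-ins, not the system's actual pipeline:

```python
import math

def features(trace):
    """Crude features from a 1-D accelerometer trace: mean and energy.
    (A stand-in for real IMU feature extraction.)"""
    n = len(trace)
    return (sum(trace) / n, sum(v * v for v in trace) / n)

def nearest_centroid(sample, centroids):
    """Classify a feature vector by Euclidean distance to labeled centroids."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Hypothetical training data: "halt" is a sharp spike, "rally" a slow wave.
centroids = {
    "halt": features([0, 0, 9, 9, 0, 0]),
    "rally": features([1, 2, 3, 3, 2, 1]),
}
print(nearest_centroid(features([0, 1, 8, 9, 1, 0]), centroids))
```

A fielded system would use multi-axis traces, many more features, and a trained classifier, but the structure (sensor trace → features → label) is the same.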

  15. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.

  16. Parents' Appraisals of the Animacy and Likability of Socially Interactive Robots for Intervening with Young Children with Disabilities. Social Robots Research Reports, Number 2

    Science.gov (United States)

    Dunst, Carl J.; Trivette, Carol M.; Prior, Jeremy; Hamby, Deborah W.; Embler, Davon

    2013-01-01

    Findings from a survey of parents' ratings of seven different human-like qualities of four socially interactive robots are reported. The four robots were Popchilla, Keepon, Kaspar, and CosmoBot. The participants were 96 parents and other primary caregivers of young children with disabilities 1 to 12 years of age. Results showed that Popchilla, a…

  17. A design-centred framework for social human-robot interaction

    NARCIS (Netherlands)

    Bartneck, C.; Forlizzi, J.

    2004-01-01

    Robots currently integrate into our everyday lives, but little is known about how they can act socially. In this paper, we propose a definition of social robots and describe a framework that classifies properties of social robots. The properties consist of form, modality, social norms, autonomy, and

  18. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    Science.gov (United States)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  19. Effects of Interruptibility-Aware Robot Behavior

    OpenAIRE

    Banerjee, Siddhartha; Silva, Andrew; Feigh, Karen; Chernova, Sonia

    2018-01-01

    As robots become increasingly prevalent in human environments, there will inevitably be times when a robot needs to interrupt a human to initiate an interaction. Our work introduces the first interruptibility-aware mobile robot system, and evaluates the effects of interruptibility-awareness on human task performance, robot task performance, and on human interpretation of the robot's social aptitude. Our results show that our robot is effective at predicting interruptibility at high accuracy, ...

  20. Improving Human/Autonomous System Teaming Through Linguistic Analysis

    Science.gov (United States)

    Meszaros, Erica L.

    2016-01-01

    An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and current work is being done to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, currently these systems are operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can lead to increased usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables the prediction of the operator's intent and allows the interface to supply the operator's desired information.
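
The idea of predicting an operator's information need from common phrases can be caricatured with simple keyword matching; the intent labels and keyword lists below are invented, and the paper's semantic analysis is far richer than this toy stand-in:

```python
# Hypothetical intent lexicon mapping operator utterances to information needs.
INTENT_KEYWORDS = {
    "status": ["doing", "status", "state"],
    "plan": ["going", "next", "plan"],
    "reason": ["why", "because"],
}

def classify_utterance(utterance):
    """Return the intent whose keywords best match the utterance."""
    words = utterance.lower().strip("?!. ").split()
    scores = {intent: sum(w in words for w in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_utterance("What's it doing now?"))
```

An interface using such a mapping could answer "What's it doing now?" by surfacing the autonomous system's current activity rather than waiting for an explicit query.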

  1. Industrial Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Philipsen, Mark Philip; Rehm, Matthias; Moeslund, Thomas B.

    2018-01-01

    In the future, robots are envisioned to work side by side with humans in dynamic environments both in production contexts but also more and more in societal context like health care, education, or commerce. This will require robots to become socially accepted, to become able to analyze human...... intentions in meaningful ways, and to become proactive. It is our conviction that this can only be achieved on the basis of a tight combination of multimodal signal processing and AI techniques in real application context....

  2. Automatic approach to stabilization and control for multi robot teams by multilayer network operator

    Directory of Open Access Journals (Sweden)

    Diveev Askhat

    2016-01-01

    Full Text Available The paper describes a novel methodology for the synthesis of high-level control for autonomous multi-robot teams. The approach is based on the multilayer network operator method, which belongs to the symbolic regression class. Synthesis is accomplished in three steps: stabilizing the robots about given positions in the state space, finding optimal trajectories of the robots' motion as sets of stabilizing points, and then approximating all points of the optimal trajectories by a multi-dimensional function of the state variables. The feasibility and effectiveness of the proposed approach are verified in simulations of a control synthesis task for three mobile robots parking in a constrained space.
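
    The first synthesis step, stabilizing each robot about a given point in state space, can be illustrated with a minimal proportional stabilizing law for a unicycle-model mobile robot. This is a generic sketch for intuition only; the paper's actual controllers are synthesized symbolically by the multilayer network operator, and all gains below are illustrative.

    ```python
    import math

    def stabilize_to_point(x, y, theta, gx, gy, k_rho=1.0, k_alpha=2.5,
                           dt=0.05, steps=400):
        """Drive a unicycle-model robot (x, y, heading theta) toward a goal
        point (gx, gy) with a simple proportional stabilizing law:
        rho = distance to goal, alpha = heading error toward the goal."""
        for _ in range(steps):
            rho = math.hypot(gx - x, gy - y)
            if rho < 1e-3:          # close enough: stabilized
                break
            alpha = math.atan2(gy - y, gx - x) - theta
            alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
            v = k_rho * rho         # forward speed proportional to distance
            w = k_alpha * alpha     # turn rate proportional to heading error
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += w * dt
        return x, y, theta
    ```

    With k_alpha > k_rho > 0, this classic polar-coordinate law drives the distance and heading error to zero, which is the property that step two's "sets of stabilizing points" relies on.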

  3. Cognitive Tools for Humanoid Robots in Space

    National Research Council Canada - National Science Library

    Sofge, Donald; Perzanowski, Dennis; Skubic, Marjorie; Bugajska, Magdalena; Trafton, J. G; Cassimatis, Nicholas; Brock, Derek; Adams, William; Schultz, Alan

    2004-01-01

    .... The key to achieving this interaction is to provide the robot with sufficient skills for natural communication with humans so that humans can interact with the robot almost as though it were another human...

  4. Human exploration and settlement of Mars - The roles of humans and robots

    Science.gov (United States)

    Duke, Michael B.

    1991-01-01

    The scientific objectives and strategies for human settlement on Mars are examined in the context of the Space Exploration Initiative (SEI). An integrated strategy for humans and robots in the exploration and settlement of Mars is examined. Such an effort would feature robotic, telerobotic, and human-supervised robotic phases.

  5. Robotic and Human-Tended Collaborative Drilling Automation for Subsurface Exploration

    Science.gov (United States)

    Glass, Brian; Cannon, Howard; Stoker, Carol; Davis, Kiel

    2005-01-01

    , either between a robotic drill and humans on Earth, or a human-tended drill and its visiting crew. The Mars Analog Rio Tinto Experiment (MARTE) is a current project that studies and simulates the remote science operations between an automated drill in Spain and a distant, distributed human science team. The Drilling Automation for Mars Exploration (DAME) project, by contrast, is developing and testing standalone automation at a lunar/martian impact crater analog site in Arctic Canada. The drill hardware in both projects is a hardened, evolved version of the Advanced Deep Drill (ADD) developed by Honeybee Robotics for the Mars Subsurface Program. The current ADD is capable of drilling to 20 m, and the DAME project is developing diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The drill automation architecture being developed by NASA and tested in 2004-06 at analog sites in the Arctic and Spain will add downhole diagnosis of different strata, bit wear detection, and dynamic replanning capabilities for when unexpected failures or drilling conditions are discovered, in conjunction with simulated mission operations and remote science planning. The most important determinant of future lunar and martian drilling automation and staffing requirements will be the actual performance of automated prototype drilling hardware systems in field trials under simulated mission operations. It is difficult to accurately predict the level of automation and human interaction that will be needed for a lunar-deployed drill without first having extensive experience with the robotic control of prototype drill systems under realistic analog field conditions. Drill-specific failure modes and software design flaws will become most apparent at this stage. DAME will develop and test drill automation software and hardware under stressful operating conditions during several planned field campaigns. Initial results from summer 2004 tests show seven identifi

  6. IMPERA: Integrated Mission Planning for Multi-Robot Systems

    Directory of Open Access Journals (Sweden)

    Daniel Saur

    2015-10-01

    Full Text Available This paper presents the results of the project IMPERA (Integrated Mission Planning for Distributed Robot Systems). The goal of IMPERA was to realize an extraterrestrial exploration scenario using a heterogeneous multi-robot system. The main challenge was the development of a multi-robot planning and plan execution architecture. The robot team consists of three heterogeneous robots, which have to explore an unknown environment and collect lunar drill samples. The team activities are described using the language ALICA (A Language for Interactive Agents). Furthermore, we use the mission planning system pRoPhEt MAS (Reactive Planning Engine for Multi-Agent Systems) to provide an intuitive interface to generate team activities. Therefore, we define the basic skills of our team with ALICA and define the desired goal states by using a logic description. Based on the skills, pRoPhEt MAS creates a valid ALICA plan, which will be executed by the team. The paper describes the basic components for communication, coordinated exploration, perception and object transportation. Finally, we evaluate the planning engine pRoPhEt MAS in the IMPERA scenario. In addition, we present further evaluation of pRoPhEt MAS in more dynamic environments.

  7. Enabling Effective Human-Robot Interaction Using Perspective-Taking in Robots

    National Research Council Canada - National Science Library

    Trafton, J. G; Cassimatis, Nicholas L; Bugajska, Magdalena D; Brock, Derek P; Mintz, Farilee E; Schultz, Alan C

    2005-01-01

    ...) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system...

  8. Robot assistant versus human or another robot assistant in patients undergoing laparoscopic cholecystectomy.

    Science.gov (United States)

    Gurusamy, Kurinchi Selvan; Samraj, Kumarakrishnan; Fusai, Giuseppe; Davidson, Brian R

    2012-09-12

    The role of a robotic assistant in laparoscopic cholecystectomy is controversial. While some trials have shown distinct advantages of a robotic assistant over a human assistant, others have not, and it is unclear which robotic assistant is best. The aims of this review are to assess the benefits and harms of a robot assistant versus a human assistant or versus another robot assistant in laparoscopic cholecystectomy, and to assess whether the robot can substitute for the human assistant. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and Science Citation Index Expanded (until February 2012) to identify randomised clinical trials. Only randomised clinical trials (irrespective of language, blinding, or publication status) comparing robot assistants versus human assistants in laparoscopic cholecystectomy were considered for the review. Randomised clinical trials comparing different types of robot assistants were also considered. Two authors independently identified the trials for inclusion and independently extracted the data. We calculated the risk ratio (RR) or mean difference (MD) with 95% confidence interval (CI) using the fixed-effect and the random-effects models based on intention-to-treat analysis, when possible, using Review Manager 5. We included six trials with 560 patients. One trial involving 129 patients did not state the number of patients randomised to the two groups. In the remaining five trials, 431 patients were randomised, 212 to the robot assistant group and 219 to the human assistant group. All the trials were at high risk of bias. Mortality and morbidity were reported in only one trial with 40 patients. There was no mortality or morbidity in either group. Mortality and morbidity were not reported in the remaining trials. Quality of life or the proportion of patients who were discharged as day-patient laparoscopic cholecystectomy patients were not reported in any
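
    The effect measures named in the review can be reproduced with a short sketch. The risk ratio and its 95% CI below use the standard log-scale normal approximation for a single two-arm trial; this is the generic textbook formula, not Review Manager 5's pooled fixed- or random-effects computation, and the event counts are invented for illustration.

    ```python
    import math

    def risk_ratio(events_a, total_a, events_b, total_b, z=1.96):
        """Risk ratio of arm A vs. arm B with a 95% CI, using the standard
        normal approximation on the log scale (z = 1.96 for 95%)."""
        rr = (events_a / total_a) / (events_b / total_b)
        # Standard error of log(RR) for a 2x2 table
        se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
        lower = math.exp(math.log(rr) - z * se)
        upper = math.exp(math.log(rr) + z * se)
        return rr, lower, upper
    ```

    For example, 10/100 events versus 20/100 gives RR = 0.5 with a CI that crosses 1, i.e. no statistically significant difference at the 5% level.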

  9. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available The Robot Operating System (ROS) is an important platform for developing robot applications. One application area is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
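
    The mapping half of SLAM can be sketched as a log-odds occupancy grid update, the representation used by common ROS mapping packages. The sketch below assumes a known robot pose and a single horizontal range ray; the function names and log-odds increments are illustrative, not taken from the HFTR implementation.

    ```python
    import math

    def update_ray(log_odds, row, x_robot, x_hit, l_free=-0.4, l_occ=0.85):
        """Update one row of a log-odds occupancy grid for a horizontal range
        ray: cells between the sensor and the hit grow more likely free, and
        the hit cell more likely occupied (mapping with a known pose)."""
        for x in range(x_robot, x_hit):
            log_odds[row][x] += l_free
        log_odds[row][x_hit] += l_occ
        return log_odds

    def to_prob(l):
        """Convert a log-odds value back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(l))
    ```

    Repeated observations accumulate additively in log-odds space, so cells the ray passes through converge toward "free" and the hit cell toward "occupied"; full SLAM additionally estimates the robot pose that is assumed known here.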

  10. Robots and Humans in Planetary Exploration: Working Together?

    Science.gov (United States)

    Landis, Geoffrey A.; Lyons, Valerie (Technical Monitor)

    2002-01-01

    Today's approach to human-robotic cooperation in planetary exploration focuses on using robotic probes as precursors to human exploration. A large portion of current NASA planetary surface exploration is focused on Mars, and robotic probes are seen as precursors to human exploration in: learning about operation and mobility on Mars; learning about the environment of Mars; mapping the planet and selecting landing sites for human missions; demonstrating critical technology; manufacturing fuel before human presence; and emplacing elements of human-support infrastructure.

  11. Humor in Human-Computer Interaction : A Short Survey

    NARCIS (Netherlands)

    Nijholt, Anton; Niculescu, Andreea; Valitutti, Alessandro; Banchs, Rafael E.; Joshi, Anirudha; Balkrishan, Devanuj K.; Dalvi, Girish; Winckler, Marco

    2017-01-01

    This paper is a short survey on humor in human-computer interaction. It describes how humor is designed and interacted with in social media, virtual agents, social robots and smart environments. Benefits and future use of humor in interactions with artificial entities are discussed based on

  12. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis has been carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours between the involved subjects, thus providing some constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method has been developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. It is characterized by a reduced computational cost. The grasp stability has been tested by introducing a quality index that satisfies the form-closure property. The grasping strategy has been validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators to reduce the non-linear optimization problem complexity and lead to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to the mechanical structure) to achieve the optimal grasp configuration.
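
    The Nelder-Mead simplex method at the core of this optimization is derivative-free, which suits grasp objectives that are not smooth. Below is a compact generic implementation with a toy quadratic stand-in for the grasp-quality cost; the cost function and its reference joint angles are invented for illustration, whereas the paper's actual objective encodes the extracted grasp indicators and form-closure constraints.

    ```python
    def nelder_mead(f, x0, step=0.1, tol=1e-10, max_iter=1000):
        """Minimal Nelder-Mead simplex minimizer (reflection, expansion,
        contraction, shrink) for low-dimensional derivative-free problems."""
        n = len(x0)
        simplex = [list(x0)]
        for i in range(n):                      # initial simplex around x0
            vertex = list(x0)
            vertex[i] += step
            simplex.append(vertex)
        for _ in range(max_iter):
            simplex.sort(key=f)
            best, worst = simplex[0], simplex[-1]
            if abs(f(worst) - f(best)) < tol:
                break
            centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
            refl = [2 * centroid[i] - worst[i] for i in range(n)]          # reflect
            if f(refl) < f(best):
                expa = [3 * centroid[i] - 2 * worst[i] for i in range(n)]  # expand
                simplex[-1] = expa if f(expa) < f(refl) else refl
            elif f(refl) < f(simplex[-2]):
                simplex[-1] = refl
            else:
                contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]  # contract
                if f(contr) < f(worst):
                    simplex[-1] = contr
                else:  # shrink all vertices toward the best one
                    simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                        for p in simplex[1:]]
        simplex.sort(key=f)
        return simplex[0]

    def grasp_cost(q):
        """Toy stand-in for a grasp-quality cost over two joint angles (rad);
        minimum at the hypothetical reference posture (0.8, 1.2)."""
        return (q[0] - 0.8) ** 2 + (q[1] - 1.2) ** 2
    ```

    Because the method only ever compares function values, the same loop works unchanged when the cost includes non-differentiable penalty terms such as form-closure violations.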

  13. A Software Framework for Coordinating Human-Robot Teams, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots are expected to fulfill an important role in manned exploration operations. They will perform precursor missions to pre-position resources for manned...

  14. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  15. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

    Humanoid robotics has made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book. ...

  16. Action and language integration: from humans to cognitive robots.

    Science.gov (United States)

    Borghi, Anna M; Cangelosi, Angelo

    2014-07-01

    The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also of natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.

  17. Emotion attribution to a non-humanoid robot in different social situations.

    Directory of Open Access Journals (Sweden)

    Gabriella Lakatos

    Full Text Available In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort so as to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

  18. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

    The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human, robot teachers mediate human existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose...

  19. Synthetic Teammates as Team Players: Coordination of Human and Synthetic Teammates

    Science.gov (United States)

    2016-05-31

    ...teammate interactions with human teammates reveal about human-automation coordination needs? Subject terms: synthetic teammate, human-autonomy teaming. ...interacting with autonomy - not autonomous vehicles, but autonomous teammates. These experiments have led to a number of discoveries, including: (1) ...given the preponderance of text-based communications in our society and its adoption in time-critical military and civilian contexts, the

  20. Predictive Mechanisms Are Not Involved the Same Way during Human-Human vs. Human-Machine Interactions: A Review

    Directory of Open Access Journals (Sweden)

    Aïsha Sahaï

    2017-10-01

    Full Text Available Nowadays, interactions with others involve not only human peers but also automated systems. Many studies suggest that the motor predictive systems engaged during action execution are also involved during joint actions with peers and during observation of other human-generated actions. Indeed, the comparator model hypothesis suggests that the comparison between a predicted state and an estimated real state enables motor control and, by a similar functioning, understanding and anticipating observed actions. Such a mechanism allows predictions to be made about an ongoing action, and is essential to action regulation, especially during joint actions with peers. Interestingly, the same comparison process has been shown to be involved in the construction of an individual's sense of agency, both for self-generated and for observed human-generated actions. However, the implication of such predictive mechanisms during interactions with machines is not consensual, probably due to the high heterogeneity of the automata used in the experiments, from very simplistic devices to full humanoid robots. The discrepancies observed during human/machine interactions could arise from the absence of action/observation matching abilities when interacting with traditional low-level automata. Consistently, the difficulty of building joint agency with this kind of machine could stem from the same problem. In this context, we aim to review the studies investigating predictive mechanisms during social interactions with humans and with automated artificial systems. We start by presenting human data that show the involvement of predictions in action control and in the sense of agency during social interactions. Thereafter, we confront this literature with data from the robotic field. Finally, we address upcoming issues in the field of robotics related to automated systems aimed at acting as collaborative agents.
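
    The comparator-model mechanism the review builds on, comparing a predicted sensory state against the estimated actual state, can be stated in a few lines. This is a conceptual sketch only; the state vectors and the error threshold are illustrative, not taken from any reviewed study.

    ```python
    def comparator(predicted, observed, threshold=0.1):
        """Comparator-model sketch: a forward model predicts the sensory
        outcome of a motor command; a small prediction error supports a sense
        of agency (self-generated action), a large one suggests an external
        cause. Returns (error, attributed_to_self)."""
        error = sum((p - o) ** 2 for p, o in zip(predicted, observed)) ** 0.5
        return error, error < threshold
    ```

    The same comparison serves both roles described in the abstract: online action regulation (the error is a corrective signal) and agency attribution (the error magnitude decides self versus other).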

  1. Impact of Robotic Surgery on Decision Making: Perspectives of Surgical Teams.

    Science.gov (United States)

    Randell, Rebecca; Alvarado, Natasha; Honey, Stephanie; Greenhalgh, Joanne; Gardner, Peter; Gill, Arron; Jayne, David; Kotze, Alwyn; Pearman, Alan; Dowding, Dawn

    2015-01-01

    There has been rapid growth in the purchase of surgical robots in both North America and Europe in recent years. Whilst this technology promises many benefits for patients, the introduction of such a complex interactive system into healthcare practice often results in unintended consequences that are difficult to predict. Decision making by surgeons during an operation is affected by variables including tactile perception, visual perception, motor skill, and instrument complexity, all of which are changed by robotic surgery, yet the impact of robotic surgery on decision making has not been previously studied. Drawing on the approach of realist evaluation, we conducted a multi-site interview study across nine hospitals, interviewing 44 operating room personnel with experience of robotic surgery to gather their perspectives on how robotic surgery impacts surgeon decision making. The findings reveal both potential benefits and challenges of robotic surgery for decision making.

  2. Human-Robot Teamwork in USAR Environments: The TRADR Project

    NARCIS (Netherlands)

    Greeff, J. de; Hindriks, K.; Neerincx, M.A.; Kruijff-Korbayova, I.

    2015-01-01

    The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search and rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled,

  3. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2018-05-01

    Full Text Available Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking
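
    The sagittal-plane ankle balancing these tests target can be sketched as a single inverted pendulum with a PD ankle torque. This is a minimal illustration; the mass, height, and gains are invented, and a real benchmark would add the four disturbance types and the hip DoF discussed in the abstract.

    ```python
    import math

    def simulate_ankle_balance(theta0, kp=1200.0, kd=300.0, m=70.0, h=1.0,
                               g=9.81, dt=0.001, t_end=3.0):
        """Sagittal-plane 'ankle strategy' sketch: the body is a single
        inverted pendulum (point mass m at height h above the ankle)
        stabilized by a PD ankle torque. Returns the final lean angle (rad)."""
        theta, omega = theta0, 0.0
        inertia = m * h * h
        for _ in range(int(t_end / dt)):
            torque = -kp * theta - kd * omega              # corrective ankle torque
            # Gravity topples with m*g*h*sin(theta); the ankle torque opposes it
            alpha = (m * g * h * math.sin(theta) + torque) / inertia
            omega += alpha * dt                            # explicit Euler step
            theta += omega * dt
        return theta
    ```

    For upright stability the proportional gain must exceed the gravitational toppling stiffness m*g*h (about 687 N·m/rad for these parameters); below that value no amount of damping keeps the pendulum up, which is one reason ankle-only balancing is mechanically challenging.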

  4. Touch versus In-Air Hand Gestures: Evaluating the Acceptance by Seniors of Human-Robot Interaction

    NARCIS (Netherlands)

    Znagui Hassani, Anouar; van Dijk, Elisabeth M.A.G.; Ludden, Geke Dina Simone; Eertink, Henk

    2011-01-01

    Do elderly people have a preference between performing in-air gestures or pressing screen buttons to interact with an assistive robot? This study attempts to provide answers to this question by measuring the level of acceptance, performance as well as knowledge of both interaction modalities during a

  5. Mobile robot competition. Underground mining: A challenging application in mobile robotics

    CSIR Research Space (South Africa)

    Green, J

    2011-09-01

    Full Text Available The competition aspires to grow in subsequent years in size and complexity as the universities and their teams grow in capability. In this way, the human capital moving into mining robotics will grow in number and capability, so supplying an expanding market.

  6. Integrating surgical robots into the next medical toolkit.

    Science.gov (United States)

    Lai, Fuji; Entin, Eileen

    2006-01-01

    Surgical robots hold much promise for revolutionizing the field of surgery and improving surgical care. However, despite the potential advantages they offer, there are multiple barriers to adoption and integration into practice that may prevent these systems from realizing their full potential benefit. This study elucidated some of the most salient considerations that need to be addressed for integration of new technologies such as robotic systems into the operating room of the future as it evolves into a complex system of systems. We conducted in-depth interviews with operating room team members and other stakeholders to identify potential barriers in areas of workflow, teamwork, training, clinical acceptance, and human-system interaction. The findings of this study will inform an approach for the design and integration of robotics and related computer-assisted technologies into the next medical toolkit for "computer-enhanced surgery" to improve patient safety and healthcare quality.

  7. The University Rover Challenge: A competition highlighting Human and Robotic partnerships for exploration

    Science.gov (United States)

    Smith, Heather; Duncan, Andrew

    2016-07-01

    The University Rover Challenge began in 2006 with 4 American college teams competing; now in its 10th year, there are 63 teams from 12 countries registered to compete for the top rover designed to assist humans in the exploration of Mars. The rovers, aided by their university teams, compete in four tasks (3 engineering and 1 science) in the Mars analog environment of the Utah Southern Desert in the United States. In this presentation we show amazing rover designs, with videos demonstrating the incredible ingenuity, skill and determination of the world's most talented college students. We describe the purpose and results of each of the tasks: Astronaut Assistant, Rover Dexterity, Terrain Maneuvering, and Science. We explain the evolution of the competition and common challenges faced by the robotic explorers.

  8. Emotion Attribution to a Non-Humanoid Robot in Different Social Situations

    Science.gov (United States)

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

    In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort so as to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot. PMID:25551218

  9. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    Science.gov (United States)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  10. Improving therapeutic outcomes in autism spectrum disorders: Enhancing social communication and sensory processing through the use of interactive robots.

    Science.gov (United States)

    Sartorato, Felippe; Przybylowski, Leon; Sarko, Diana K

    2017-07-01

    For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Cognitive Human-Machine Interface Applied in Remote Support for Industrial Robot Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Kosicki

    2013-10-01

Full Text Available An attempt is currently being made to widely introduce industrial robots to Small-Medium Enterprises (SMEs). Since such enterprises usually employ too few robot units to afford specialized departments for robot maintenance, they must be provided with inexpensive and immediate support remotely. This paper evaluates whether that support can be provided by means of Cognitive Info-communication – communication in which human cognitive capabilities are extended irrespective of geographical distance. The evaluations are given with the aid of an experimental system consisting of physically separated local and remote rooms – a six-degree-of-freedom NACHI SH133-03 industrial robot is situated in the local room, while the operator, who supervises the robot by means of an audio-visual Cognitive Human-Machine Interface, is situated in the remote room. The results of simple experiments show that Cognitive Info-communication is not only an efficient means of providing remote support, but probably also a powerful tool for enhancing interaction with any data-rich environment that requires good conceptual understanding of the system's state and careful attention management. Furthermore, the paper discusses data presentation and reduction methods for data-rich environments, and introduces the concepts of Naturally Acquired Data and Cognitive Human-Machine Interfaces.

  12. Robot learning from human teachers

    CERN Document Server

    Chernova, Sonia

    2014-01-01

    Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn f
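As a minimal illustration of the LfD setting described above (not taken from the book; all names are hypothetical), a task policy can be derived from (state, action) demonstrations by copying the action of the nearest demonstrated state:

```python
# Minimal Learning-from-Demonstration sketch: a 1-nearest-neighbor policy.
# Demonstrations are (state, action) pairs supplied by a human teacher.

def learn_policy(demonstrations):
    """Return a policy that copies the action of the closest demonstrated state."""
    def policy(state):
        nearest = min(demonstrations,
                      key=lambda sa: sum((a - b) ** 2 for a, b in zip(sa[0], state)))
        return nearest[1]
    return policy

# Teacher demonstrates: near the obstacle turn left, otherwise go straight.
demos = [((0.0, 0.0), "straight"), ((1.0, 1.0), "turn_left")]
policy = learn_policy(demos)
```

Real LfD systems generalize far beyond this (dynamic movement primitives, inverse reinforcement learning), but the sketch captures the core idea: the policy is induced from examples rather than hand-coded.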

  13. Acceptance of an assistive robot in older adults: a mixed-method study of human–robot interaction over a 1-month period in the Living Lab setting

    Directory of Open Access Journals (Sweden)

    Wu YH

    2014-05-01

Full Text Available Ya-Huei Wu, Jérémy Wrobel, Mélanie Cornuet, Hélène Kerhervé, Souad Damnée, Anne-Sophie Rigaud (Hôpital Broca, Assistance Publique – Hôpitaux de Paris; Research Team 4468, Faculté de Médecine, Université Paris Descartes, Paris, France). Background: There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Subjects and methods: Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Results: Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot acceptance, including older adults’ uneasiness with technology, feeling of stigmatization, and ethical

  14. Coordination Mechanisms for Human-Robot Teams in Space

    Data.gov (United States)

    National Aeronautics and Space Administration — A major challenge of coordination in space environments is that teams are often spatially separated and operate at different time scales. Currently, there are few...

  15. Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology

    OpenAIRE

    Barnaud , Marie-Lou; Morgado , Nicolas; Palluel-Germain , Richard; Diard , Julien; Spalanzani , Anne

    2014-01-01

In order to navigate in a social environment, a robot must be aware of social spaces, which include proximity and interaction-based constraints. Previous models of interaction and personal spaces have been inspired by studies in social psychology but not systematically grounded and validated with respect to experimental data. We propose to implement personal and interaction space models in order to replicate a classical psychology experiment. Our robotic simulations ca...

  16. Multi-function robots with speech interaction and emotion feedback

    Science.gov (United States)

    Wang, Hongyu; Lou, Guanting; Ma, Mengchao

    2018-03-01

Nowadays, service robots have been applied in many public settings; however, most of them still lack speech interaction, and especially speech-emotion interaction feedback. To make the robot more humanoid, an Arduino microcontroller was used in this study for the speech recognition module and the servo motor control module, to achieve the functions of speech interaction and emotion feedback. In addition, a W5100 was adopted for network connection to achieve information transmission via the Internet, providing broad application prospects for the robot in the area of the Internet of Things (IoT).
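A host-side sketch of the speech-to-emotion feedback loop the abstract describes might look as follows (Python rather than Arduino code; all keyword and servo mappings are invented for illustration):

```python
# Hypothetical sketch: recognized speech keywords select an emotion, and the
# emotion maps to servo targets (e.g., head tilt and arm pose) for feedback.

EMOTION_FOR_KEYWORD = {
    "hello": "happy",
    "thanks": "happy",
    "bad": "sad",
}

SERVO_ANGLES = {          # head-tilt and arm servo targets, in degrees
    "happy": {"head": 20, "arm": 90},
    "sad": {"head": -15, "arm": 10},
    "neutral": {"head": 0, "arm": 45},
}

def respond(recognized_text):
    """Pick an emotion from the recognized words and return servo targets."""
    for word in recognized_text.lower().split():
        if word in EMOTION_FOR_KEYWORD:
            emotion = EMOTION_FOR_KEYWORD[word]
            break
    else:
        emotion = "neutral"
    return emotion, SERVO_ANGLES[emotion]
```

On an actual Arduino build the same table lookup would run in C++ on the microcontroller, with the servo targets written out via the Servo library.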

  17. Architecture for Multiple Interacting Robot Intelligences

    Science.gov (United States)

    Peters, Richard Alan, II (Inventor)

    2008-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a database associative memory (DBAM) that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.
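As a rough illustration (not the actual SES implementation), a Sensory Ego-Sphere can be approximated as a discretized sphere of directions around the robot, each direction holding the most salient recent event, with attention drawn to the strongest change:

```python
# Hypothetical sketch of a Sensory Ego-Sphere (SES): sensory events are
# registered on a discretized sphere around the robot, and the most salient
# one is retrieved, much like short-term memory focusing attention.

class SensoryEgoSphere:
    def __init__(self, bins=36):
        self.bins = bins                 # azimuth resolution (10 deg here)
        self.events = {}                 # bin index -> (salience, label)

    def register(self, azimuth_deg, salience, label):
        idx = int(azimuth_deg % 360 / (360 / self.bins))
        # Keep only the most salient event per direction.
        if idx not in self.events or salience > self.events[idx][0]:
            self.events[idx] = (salience, label)

    def most_salient(self):
        idx = max(self.events, key=lambda i: self.events[i][0])
        return self.events[idx][1]

ses = SensoryEgoSphere()
ses.register(45, 0.3, "chair")
ses.register(180, 0.9, "person approaching")
```

The real SES also indexes events by elevation and time; this sketch only shows the direction-indexed, salience-ranked lookup that makes it behave like short-term memory.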

  18. Human Factors and Robotics: Current Status and Future Prospects.

    Science.gov (United States)

    Parsons, H. McIlvaine; Kearsley, Greg P.

    The principal human factors engineering issue in robotics is the division of labor between automation (robots) and human beings. This issue reflects a prime human factors engineering consideration in systems design--what equipment should do and what operators and maintainers should do. Understanding of capabilities and limitations of robots and…

  19. From robot to human grasping simulation

    CERN Document Server

    León, Beatriz; Sancho-Bru, Joaquin

    2013-01-01

    The human hand and its dexterity in grasping and manipulating objects are some of the hallmarks of the human species. For years, anatomic and biomechanical studies have deepened the understanding of the human hand’s functioning and, in parallel, the robotics community has been working on the design of robotic hands capable of manipulating objects with a performance similar to that of the human hand. However, although many researchers have partially studied various aspects, to date there has been no comprehensive characterization of the human hand’s function for grasping and manipulation of

  20. Anthropomorphism in Human–Robot Co-evolution

    Directory of Open Access Journals (Sweden)

    Luisa Damiano

    2018-03-01

Full Text Available Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather, it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

  1. Anthropomorphism in Human–Robot Co-evolution

    Science.gov (United States)

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather, it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth. PMID:29632507

  2. Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    Science.gov (United States)

    Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin

    2011-01-01

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and

  3. Human centric object perception for service robots

    NARCIS (Netherlands)

    Alargarsamy Balasubramanian, A.C.

    2016-01-01

The research interests and applicability of robotics have diversified and seen tremendous growth in recent years. There has been a shift from industrial robots operating in constrained settings to consumer robots working in dynamic environments associated closely with everyday human

  4. Smooth leader or sharp follower? Playing the mirror game with a robot.

    Science.gov (United States)

    Kashi, Shir; Levy-Tzedek, Shelly

    2018-01-01

The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. This priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.
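The sharp versus smooth distinction can be made concrete with a standard model of smooth human-like reaching, the minimum-jerk profile (an illustrative sketch, not the study's actual stimulus-generation code):

```python
# Sketch of "smooth" vs "sharp" one-dimensional movements. A common model of
# smooth human-like reaching is the minimum-jerk position profile; a constant-
# velocity ramp starts and stops abruptly by comparison.

def minimum_jerk(t, duration, amplitude):
    """Position at time t of a minimum-jerk movement from 0 to amplitude."""
    s = min(max(t / duration, 0.0), 1.0)       # normalized time in [0, 1]
    return amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)

def linear_ramp(t, duration, amplitude):
    """A 'sharper' alternative: constant velocity, abrupt start and stop."""
    s = min(max(t / duration, 0.0), 1.0)
    return amplitude * s
```

Both profiles reach the same target in the same time, but the minimum-jerk version has zero velocity and acceleration at both endpoints, which is what makes it read as smooth to a human partner.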

  5. Human-like Compliance for Dexterous Robot Hands

    Science.gov (United States)

    Jau, Bruno M.

    1995-01-01

    This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the functionality of the human muscle's secondary function, which is to control the joint's stiffness: AEC is implemented through servo controlling the joint drive train's stiffness. The control strategy, controlling compliant joints in teleoperation, is described. It enables automatic hybrid position and force control through utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable for autonomous robot control as well. Active compliance enables dual arm manipulations, human-like soft grasping by the robot hand, and opens the way to many new robotics applications.
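The stiffness-controlled joint described above behaves much like a programmable spring-damper. A minimal impedance-style sketch (hypothetical; not the actual AEC control law) shows how lowering the stiffness gain yields a softer, more human-like response to displacement:

```python
# Hypothetical sketch of compliance control in the spirit of the abstract:
# the joint is commanded like a spring-damper, so lowering the stiffness
# gain makes it yield softly instead of resisting rigidly.

def joint_torque(theta, theta_des, omega, stiffness, damping):
    """Impedance-style torque command for one compliant joint."""
    return stiffness * (theta_des - theta) - damping * omega

# Stiff joint resists a 0.1 rad displacement strongly; compliant joint yields.
stiff = joint_torque(theta=0.1, theta_des=0.0, omega=0.0, stiffness=50.0, damping=2.0)
soft = joint_torque(theta=0.1, theta_des=0.0, omega=0.0, stiffness=5.0, damping=2.0)
```

In the AEC system the stiffness is realized mechanically by servo-controlling the drive train rather than purely in software, but the net effect on the joint is the same kind of adjustable spring behavior.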

  6. Ocular interaction with robots: an aid to the disabled

    International Nuclear Information System (INIS)

    Azorin, J.M.; Ianez, E.; Fernandez Jover, E.; Sabater, J.M.

    2010-01-01

This paper describes a technique for remotely controlling a robot arm through eye movements. This method will help disabled people control a robot that aids them in performing tasks in their daily lives. The electrooculography (EOG) technique is used to detect eye movements: EOG registers the potential difference between the cornea and the retina using electrodes. Eye movements are used to control a remote robot arm with six degrees of freedom. First, the paper introduces several eye-movement techniques for interacting with devices, focusing on EOG. Then, the paper describes the system that allows interacting with a robot through eye movements. Finally, the paper shows some experimental results for the robot controlled by the EOG-based interface. (Author).
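As an illustration of the kind of mapping such an interface performs (hypothetical thresholds and labels; not the authors' implementation), horizontal and vertical EOG voltages can be thresholded into discrete direction commands for the arm:

```python
# Hypothetical sketch of mapping EOG samples to robot-arm commands: the
# horizontal and vertical electrode-pair voltages are thresholded into
# discrete gaze-direction commands.

THRESHOLD = 50.0  # microvolts; would be calibrated per user in practice

def gaze_command(h_uv, v_uv):
    """Classify one EOG sample into a direction command for the robot arm."""
    if abs(h_uv) < THRESHOLD and abs(v_uv) < THRESHOLD:
        return "hold"                      # gaze roughly centered: no motion
    if abs(h_uv) >= abs(v_uv):             # dominant horizontal deflection
        return "right" if h_uv > 0 else "left"
    return "up" if v_uv > 0 else "down"    # dominant vertical deflection
```

A real EOG pipeline adds filtering, drift compensation, and blink rejection before any such classification, so this is only the final decision step.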

  7. Molecular Robots Obeying Asimov's Three Laws of Robotics.

    Science.gov (United States)

    Kaminka, Gal A; Spokoini-Stern, Rachel; Amir, Yaniv; Agmon, Noa; Bachelet, Ido

    2017-01-01

Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even emulated the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale when we understand robots on this scale on the basis of their interactions.

  8. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.
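The article's actual method is not reproduced here, but the general idea of scoring alternative human/robot task allocations against quantitative metrics can be illustrated with a toy sketch (all weights and values are invented):

```python
# Toy sketch (not the article's method) of quantitatively comparing human,
# robot, and mixed allocations for a task: each candidate is scored by
# expected task performance penalized by risk exposure.

def allocation_score(performance, risk, risk_weight=0.5):
    """Higher is better; performance and risk are both normalized to [0, 1]."""
    return performance - risk_weight * risk

candidates = {
    "human only": allocation_score(performance=0.95, risk=0.8),
    "robot only": allocation_score(performance=0.70, risk=0.1),
    "human-robot team": allocation_score(performance=0.90, risk=0.3),
}
best = max(candidates, key=candidates.get)
```

With these invented numbers the mixed team wins, which mirrors the article's qualitative point that human and robot skills are complementary; the substance of such an evaluation lies in how the performance and risk terms are measured.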

  9. Digital Emotion : How Audiences React to Robots on Screen

    OpenAIRE

Damian Schofield

    2018-01-01

    The experience of interacting with robots is becoming a more pervasive part of our day-to-day life. When considering the experience of interacting with other technologies and artefacts, interaction with robots presents a distinct and potentially unique component: physical connection. Robots share our physical space; this is a prominent part of the interaction experience. Robots offer a lifelike presence and the Human-Robot Interaction (HRI) issues go beyond the traditional interactions of mor...

  10. Interactive animated display of man-controlled and autonomous robots

    International Nuclear Information System (INIS)

    Crane, C.D. III; Duffy, J.

    1986-01-01

    An interactive computer graphics program has been developed which allows an operator to more readily control robot motions in two distinct modes; viz., man-controlled and autonomous. In man-controlled mode, the robot is guided by a joystick or similar device. As the robot moves, actual joint angle information is measured and supplied to a graphics system which accurately duplicates the robot motion. Obstacles are placed in the actual and animated workspace and the operator is warned of imminent collisions by sight and sound via the graphics system. Operation of the system in man-controlled mode is shown. In autonomous mode, a collision-free path between specified points is obtained by previewing robot motions on the graphics system. Once a satisfactory path is selected, the path characteristics are transmitted to the actual robot and the motion is executed. The telepresence system developed at the University of Florida has been successful in demonstrating that the concept of controlling a robot manipulator with the aid of an interactive computer graphics system is feasible and practical. The clarity of images coupled with real-time interaction and real-time determination of imminent collision with obstacles has resulted in improved operator performance. Furthermore, the ability for an operator to preview and supervise autonomous operations is a significant attribute when operating in a hazardous environment
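The imminent-collision warning described above can be illustrated by a simple clearance check over the previewed path (a hypothetical sketch; the original system checked full robot and obstacle geometry, not point obstacles):

```python
# Hypothetical sketch of an imminent-collision check for a previewed robot
# path: warn the operator when any path point comes within a safety margin
# of a known obstacle position.
import math

SAFETY_MARGIN = 0.15  # meters (illustrative value)

def imminent_collision(path_points, obstacles, margin=SAFETY_MARGIN):
    """Return the first path point within `margin` of an obstacle, else None."""
    for p in path_points:
        for obs in obstacles:
            if math.dist(p, obs) < margin:
                return p
    return None

path = [(0.0, 0.0, 0.5), (0.2, 0.0, 0.5), (0.4, 0.0, 0.5)]
obstacles = [(0.45, 0.0, 0.5)]
```

In the animated display, a non-None result would trigger the sight-and-sound warning in man-controlled mode, or reject the candidate path during autonomous-mode previewing.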

  11. Cross-cultural study on human-robot greeting interaction : acceptance and discomfort by Egyptians and Japanese

    OpenAIRE

    Trovato, G.; Zecca, M.; Sessa, S.; Jamone, L.; Ham, J.R.C.; Hashimoto, K.; Takanishi, A.

    2013-01-01

As witnessed in several behavioural studies, a complex relationship exists between people's cultural background and their general acceptance of robots. However, very few studies have investigated whether a robot's language and gestures, based on a particular culture, have an impact on people of different cultures. The purpose of this work is to provide experimental evidence supporting the idea that humans may more easily accept a robot that can adapt to their specific cultur...

  12. Application of Human-Autonomy Teaming (HAT) Patterns to Reduced Crew Operations (RCO)

    Science.gov (United States)

    Shively, R. Jay; Brandt, Summer L.; Lachter, Joel; Matessa, Mike; Sadler, Garrett; Battiste, Henri

    2016-01-01

As part of the Air Force - NASA Bi-Annual Research Council Meeting, slides will be presented on recent Reduced Crew Operations (RCO) work. Unmanned aerial systems, robotics, advanced cockpits, and air traffic management are all examples of domains that are seeing dramatic increases in automation. While automation may take on some tasks previously performed by humans, humans will still be required, for the foreseeable future, to remain in the system. Collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates, rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) in order to optimize these systems in the future. One methodology for understanding HAT is identifying recurring patterns of HAT that have similar characteristics and solutions. A methodology for applying HAT patterns to an advanced cockpit project is discussed.

  13. Robotic Missions to Small Bodies and Their Potential Contributions to Human Exploration and Planetary Defense

    Science.gov (United States)

    Abell, Paul A.; Rivkin, Andrew S.

    2015-01-01

    Introduction: Robotic missions to small bodies will directly address aspects of NASA's Asteroid Initiative and will contribute to future human exploration and planetary defense. The NASA Asteroid Initiative is comprised of two major components: the Grand Challenge and the Asteroid Mission. The first component, the Grand Challenge, focuses on protecting Earth's population from asteroid impacts by detecting potentially hazardous objects with enough warning time to either prevent them from impacting the planet, or to implement civil defense procedures. The Asteroid Mission involves sending astronauts to study and sample a near-Earth asteroid (NEA) prior to conducting exploration missions of the Martian system, which includes Phobos and Deimos. The science and technical data obtained from robotic precursor missions that investigate the surface and interior physical characteristics of an object will help identify the pertinent physical properties that will maximize operational efficiency and reduce mission risk for both robotic assets and crew operating in close proximity to, or at the surface of, a small body. These data will help fill crucial strategic knowledge gaps (SKGs) concerning asteroid physical characteristics that are relevant for human exploration considerations at similar small body destinations. These data can also be applied for gaining an understanding of pertinent small body physical characteristics that would also be beneficial for formulating future impact mitigation procedures. Small Body Strategic Knowledge Gaps: For the past several years NASA has been interested in identifying the key SKGs related to future human destinations. These SKGs highlight the various unknowns and/or data gaps of targets that the science and engineering communities would like to have filled in prior to committing crews to explore the Solar System. 
An action team from the Small Bodies Assessment Group (SBAG) was formed specifically to identify the small body SKGs under the

  14. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    Science.gov (United States)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated, as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems, and on the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs, for surface operations as well as for in-flight monitoring and control, and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  15. Predicting the long-term effects of human-robot interaction: a reflection on responsibility in medical robotics.

    Science.gov (United States)

    Datteri, Edoardo

    2013-03-01

    This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (exemplified here by reference to harms that occurred in surgical interventions supported by the Da Vinci robot, as reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities: for this reason they do not raise particularly novel ethical issues. In contrast, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point is illustrated here in connection with the rehabilitation robot Lokomat.

  16. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Directory of Open Access Journals (Sweden)

    Thierry Chaminade

    2010-07-01

    Full Text Available The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  17. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Science.gov (United States)

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-07-21

    The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  18. Towards safe robots approaching Asimov’s 1st law

    CERN Document Server

    Haddadin, Sami

    2014-01-01

    The vision of seamless human-robot interaction in our everyday life, allowing tight cooperation between human and robot, has not become reality yet. However, the recent increase in technology maturity has finally made it possible to realize systems of high integration, advanced sensorial capabilities and enhanced power, able to cross this barrier and merge the living spaces of humans and robot workspaces at least to a certain extent. Together with the increasing industrial effort to realize the first commercial service robotics products, this makes it necessary to properly address one of the most fundamental questions of Human-Robot Interaction: How to ensure safety in human-robot coexistence? In this authoritative monograph, the essential question about the necessary requirements for a safe robot is addressed in depth and from various perspectives. The approach taken in this book focuses on the biomechanical level of injury assessment, addresses the physical evaluation of robot-human impacts, and isolates the major factor...

  19. Moving android: on social robots and body-in-interaction.

    Science.gov (United States)

    Alac, Morana

    2009-08-01

    Social robotics studies embodied technologies designed for social interaction. This paper examines the implied idea of embodiment using as data a sequence in which practitioners of social robotics are involved in designing a robot's movement. The moments of learning and work in the laboratory enact the social body as material, dynamic, and multiparty: the body-in-interaction. In describing subject-object reconfigurations, the paper explores how the well-known ideas of extending the body with instruments can be applied to a technology designed to function as our surrogate.

  20. Sex Robots: Between Human and Artificial

    OpenAIRE

    Richardson, Kathleen

    2017-01-01

    Despite a surplus of human beings in the world, which new estimates put at seven and a half billion, we appear to be at the start of an attachment crisis - a crisis in how human beings make intimate relationships. Enter the sex robots, built out of the bodies of sex dolls to help humans, particularly males, escape their inability to connect. What does the rise of sex robots tell us about the way that women and girls are imagined: are they persons or property? And to what extent is porn, prostitution and ...

  1. Online Assessment of Human-Robot Interaction for Hybrid Control of Walking

    Directory of Open Access Journals (Sweden)

    Ana de-los-Reyes

    2011-12-01

    Full Text Available Restoration of the walking ability of Spinal Cord Injury subjects can be achieved by different approaches, such as the use of robotic exoskeletons or electrical stimulation of the user’s muscles. The combined (hybrid) approach has the potential to provide a solution to the drawbacks of each approach. Specific challenges must be addressed with specific sensory systems and control strategies. In this paper we present a system and a procedure to estimate muscle fatigue from online physical interaction assessment, to provide hybrid control of walking that takes into account the performance of the muscles under stimulation.

  2. Transferring human impedance regulation skills to robots

    CERN Document Server

    Ajoudani, Arash

    2016-01-01

    This book introduces novel thinking and techniques to the control of robotic manipulation. In particular, the concept of teleimpedance control as an alternative method to bilateral force-reflecting teleoperation control for robotic manipulation is introduced. In teleimpedance control, a compound reference command is sent to the slave robot including both the desired motion trajectory and impedance profile, which are then realized by the remote controller. This concept forms a basis for the development of the controllers for a robotic arm, a dual-arm setup, a synergy-driven robotic hand, and a compliant exoskeleton for improved interaction performance.

  3. Effect of human-robot interaction on muscular synergies on healthy people and post-stroke chronic patients.

    Science.gov (United States)

    Scano, A; Chiavenna, A; Caimmi, M; Malosio, M; Tosatti, L M; Molteni, F

    2017-07-01

    Robot-assisted training is a widely used technique to promote motor re-learning in post-stroke patients who suffer from motor impairment. While it is commonly accepted that robot-based therapies are potentially helpful, strong evidence about their efficacy is still lacking. The motor re-learning process may act on muscular synergies, which are groups of co-activating muscles that, being controlled as a synergic group, simplify the problem of motor control. In fact, by coordinating a reduced number of neural signals, complex motor patterns can be elicited. This paper aims at analyzing the effects of robot assistance during 3D-reaching movements in the framework of muscular synergies. 5 healthy people and 3 neurological patients performed free and robot-assisted reaching movements at 2 different speeds (slow and quasi-physiological). EMG recordings were used to extract muscular synergies. Results indicate that the interaction with the robot only very slightly alters healthy people's patterns but, on the contrary, it may promote the emergence of physiological-like synergies in neurological patients.
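
    The abstract does not say how the synergies are computed from the EMG; non-negative matrix factorization (NMF) is the standard technique for that step. Below is a minimal NumPy sketch using Lee-Seung multiplicative updates, where the signal shapes, noise level, and number of synergies are all assumptions for illustration, not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_muscles, n_samples, n_syn = 8, 200, 2

# Synthetic rectified-and-smoothed EMG envelopes built from two known synergies.
W_true = rng.random((n_muscles, n_syn))          # muscle weightings
H_true = rng.random((n_syn, n_samples))          # activation coefficients
V = W_true @ H_true + 0.01 * rng.random((n_muscles, n_samples))  # non-negative data

# NMF via Lee-Seung multiplicative updates: find W, H >= 0 with V ~= W @ H.
eps = 1e-9
W = rng.random((n_muscles, n_syn))
H = rng.random((n_syn, n_samples))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Variance accounted for (VAF), a common criterion for choosing the synergy count.
vaf = 1 - np.sum((V - W @ H) ** 2) / np.sum(V ** 2)
```

    In practice the number of synergies is typically chosen as the smallest count whose VAF exceeds a threshold (often around 0.9).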

  4. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    Science.gov (United States)

    2009-03-01

  5. Application of autonomous robotics to surveillance of waste storage containers for radioactive surface contamination

    International Nuclear Information System (INIS)

    Sweeney, F.J.; Beckerman, M.; Butler, P.L.; Jones, J.P.; Reister, D.B.

    1991-01-01

    This paper describes a proof-of-principle demonstration performed with the HERMIES-III mobile robot to automate the inspection of waste storage drums for radioactive surface contamination and thereby reduce both the human burden of operating a robot and worker exposure to potentially hazardous environments. Software and hardware for the demonstration were developed by a team consisting of Oak Ridge National Laboratory and the Universities of Florida, Michigan, Tennessee, and Texas. Robot navigation, machine vision, manipulator control, parallel processing and human-machine interface techniques developed by the team were demonstrated utilizing advanced computer architectures. The demonstration consists of over 100,000 lines of computer code executing on nine computers.

  6. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  7. Social interaction enhances motor resonance for observed human actions.

    Science.gov (United States)

    Hogeveen, Jeremy; Obhi, Sukhvinder S

    2012-04-25

    Understanding the neural basis of social behavior has become an important goal for cognitive neuroscience and a key aim is to link neural processes observed in the laboratory to more naturalistic social behaviors in real-world contexts. Although it is accepted that mirror mechanisms contribute to the occurrence of motor resonance (MR) and are common to action execution, observation, and imitation, questions remain about mirror (and MR) involvement in real social behavior and in processing nonhuman actions. To determine whether social interaction primes the MR system, groups of participants engaged or did not engage in a social interaction before observing human or robotic actions. During observation, MR was assessed via motor-evoked potentials elicited with transcranial magnetic stimulation. Compared with participants who did not engage in a prior social interaction, participants who engaged in the social interaction showed a significant increase in MR for human actions. In contrast, social interaction did not increase MR for robot actions. Thus, naturalistic social interaction and laboratory action observation tasks appear to involve common MR mechanisms, and recent experience tunes the system to particular agent types.

  8. Recognition and Prediction of Human Actions for Safe Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Bøgh, Simon; Ceballos, Iker

    Collaborative industrial robots are creating new opportunities for collaboration between humans and robots in shared workspaces. In order for such collaboration to be efficient, robots - as well as humans - need to have an understanding of the other's intentions and current ongoing action....... In this work, we propose a method for learning, classifying, and predicting actions taken by a human. Our proposed method is based on the human skeleton model from a Kinect. For demonstration of our approach we chose a typical pick-and-place scenario. Therefore, only arms and upper body are considered......; in total 15 joints. Based on trajectories on these joints, different classes of motion are separated using partitioning around medoids (PAM). Subsequently, SVM is used to train the classes to form a library of human motions. The approach allows run-time detection of when a new motion has been initiated...
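
    The clustering step of the pipeline described above (partitioning around medoids on joint-trajectory features, whose resulting classes then train an SVM) can be sketched as follows. The naive PAM implementation and the toy "trajectory features" are illustrative assumptions, not the authors' code:

```python
import numpy as np

def pam(X, k, n_iter=20, seed=0):
    """Naive partitioning-around-medoids (PAM): medoids are actual data points."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each point to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # Re-pick the cluster member minimizing total in-cluster distance.
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Toy "trajectory features": two well-separated motion classes, 20 samples each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(3, 0.3, (20, 4))])
medoids, labels = pam(X, k=2)
# In the paper's pipeline, `labels` would next be used to train an SVM classifier.
```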

  9. ISS Robotic Student Programming

    Science.gov (United States)

    Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.

    2016-01-01

    The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.

  10. Springer handbook of robotics

    CERN Document Server

    Khatib, Oussama

    2016-01-01

    The second edition of this handbook provides a state-of-the-art overview of the various aspects of the rapidly developing field of robotics. Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The credible prospect of practical robots among humans is the result of the scientific endeavour of half a century of robotic developments that established robotics as a modern scientific discipline. The ongoing vibrant expansion and strong growth of the field during the last decade have fueled this second edition of the Springer Handbook of Robotics. The first edition of the handbook soon became a landmark in robotics publishing and won the American Association of Publishers PROSE Award for Excellence in Physical Sciences & Mathematics as well as the organization’s Award for Engineering & Technology. The second edition o...

  11. Natural Tasking of Robots Based on Human Interaction Cues

    Science.gov (United States)

    2005-06-01

  12. Using team cognitive work analysis to reveal healthcare team interactions in a birthing unit.

    Science.gov (United States)

    Ashoori, Maryam; Burns, Catherine M; d'Entremont, Barbara; Momtahan, Kathryn

    2014-01-01

    Cognitive work analysis (CWA) as an analytical approach for examining complex sociotechnical systems has shown success in modelling the work of single operators. The CWA approach incorporates social and team interactions, but a more explicit analysis of team aspects can reveal more information for systems design. In this paper, Team CWA is explored to understand teamwork within a birthing unit at a hospital. Team CWA models are derived from theories and models of teamwork and leverage the existing CWA approaches to analyse team interactions. Team CWA is explained and contrasted with prior approaches to CWA. Team CWA does not replace CWA, but supplements traditional CWA to more easily reveal team information. As a result, Team CWA may be a useful approach to enhance CWA in complex environments where effective teamwork is required. This paper looks at ways of analysing cognitive work in healthcare teams. Team Cognitive Work Analysis, when used to supplement traditional Cognitive Work Analysis, revealed more team information than traditional Cognitive Work Analysis. Team Cognitive Work Analysis should be considered when studying teams.

  13. Using team cognitive work analysis to reveal healthcare team interactions in a birthing unit

    Science.gov (United States)

    Ashoori, Maryam; Burns, Catherine M.; d'Entremont, Barbara; Momtahan, Kathryn

    2014-01-01

    Cognitive work analysis (CWA) as an analytical approach for examining complex sociotechnical systems has shown success in modelling the work of single operators. The CWA approach incorporates social and team interactions, but a more explicit analysis of team aspects can reveal more information for systems design. In this paper, Team CWA is explored to understand teamwork within a birthing unit at a hospital. Team CWA models are derived from theories and models of teamwork and leverage the existing CWA approaches to analyse team interactions. Team CWA is explained and contrasted with prior approaches to CWA. Team CWA does not replace CWA, but supplements traditional CWA to more easily reveal team information. As a result, Team CWA may be a useful approach to enhance CWA in complex environments where effective teamwork is required. Practitioner Summary: This paper looks at ways of analysing cognitive work in healthcare teams. Team Cognitive Work Analysis, when used to supplement traditional Cognitive Work Analysis, revealed more team information than traditional Cognitive Work Analysis. Team Cognitive Work Analysis should be considered when studying teams. PMID:24837514

  14. Safe Human-Robot Cooperation in an Industrial Environment

    Directory of Open Access Journals (Sweden)

    Nicola Pedrocchi

    2013-01-01

    Full Text Available The standard EN ISO 10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human-robot hybrid systems could bring a large economic benefit to small- and medium-sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among the various requirements for collaborative workspaces, safety assurance involves two different application layers: the algorithms enabling safe space-sharing between humans and robots, and the enabling technologies for acquiring data from sensor fusion and analysing environmental data. This paper addresses both problems: a collision avoidance strategy allowing on-line re-planning of robot motion, and a safe network of unsafe devices as a suggested infrastructure for achieving functional safety.

  15. Designing collective behavior in a termite-inspired robot construction team.

    Science.gov (United States)

    Werfel, Justin; Petersen, Kirstin; Nagpal, Radhika

    2014-02-14

    Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general still less understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.

  16. Robots show us how to teach them: feedback from robots shapes tutoring behavior during action learning.

    Science.gov (United States)

    Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J; Wrede, Britta

    2014-01-01

    Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.

  17. Acceptance of an assistive robot in older adults: a mixed-method study of human-robot interaction over a 1-month period in the Living Lab setting.

    Science.gov (United States)

    Wu, Ya-Huei; Wrobel, Jérémy; Cornuet, Mélanie; Kerhervé, Hélène; Damnée, Souad; Rigaud, Anne-Sophie

    2014-01-01

    There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot-acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot-acceptance, including older adults' uneasiness with technology, feeling of stigmatization, and ethical/societal issues associated with robot use. It is important to destigmatize images of assistive robots to facilitate their acceptance. Universal design aiming to increase the market for and production of products that are usable by everyone (to the greatest extent possible) might help to destigmatize assistive devices.

  18. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

    “The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state of the art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.

  19. Investigation of the Impedance Characteristic of Human Arm for Development of Robots to Cooperate with Humans

    Science.gov (United States)

    Rahman, Md. Mozasser; Ikeura, Ryojun; Mizutani, Kazuki

    In the near future many aspects of our lives will involve tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, etc. will be indispensable. As a result, robots need to be made human-friendly and to execute tasks in cooperation with humans. Control systems for such robots should be designed to imitate human characteristics. In this study, we have tried to achieve these goals by controlling a simple one degree-of-freedom cooperative robot. Firstly, the impedance characteristic of the human arm in a cooperative task is investigated. Then, this characteristic is implemented to control a robot in order to perform cooperative tasks with humans. A human followed the motion of an object, which was moved through desired trajectories. The motion was actuated by the linear motor of the one degree-of-freedom robot system. The trajectories used in the experiments were minimum-jerk trajectories (jerk is the rate of change of acceleration), which have been observed in human-human cooperative tasks and are optimal for muscle movement. As muscle is mechanically analogous to a spring-damper system, a simple second-order equation is used as the model for the arm dynamics; the model considers mass, stiffness and damping. The impedance parameters are calculated from the position and force data obtained from the experiments, based on the “Estimation of Parametric Model”. The investigated impedance characteristic of the human arm is then implemented to control a robot performing a cooperative task with a human. It is observed that the proposed control methodology gives human-like movements to the robot when cooperating with humans.
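
    The two ingredients named above, a minimum-jerk reference trajectory and a second-order mass-stiffness-damping arm model, can be sketched numerically. The parameter values below are illustrative assumptions, not the values identified from the experiments:

```python
import numpy as np

def min_jerk(x0, xf, T, n=101):
    """Minimum-jerk profile: x(t) = x0 + (xf - x0)(10 s^3 - 15 s^4 + 6 s^5), s = t/T."""
    t = np.linspace(0.0, T, n)
    s = t / T
    return t, x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t, x = min_jerk(0.0, 0.2, 1.0)   # a 0.2 m reach completed in 1 s

# Second-order impedance model of the arm: f = m*x'' + b*x' + k*x
# (mass, damping and stiffness values are assumed for illustration only).
m, b, k = 1.0, 10.0, 100.0
dt = t[1] - t[0]
v = np.gradient(x, dt)           # velocity
a = np.gradient(v, dt)           # acceleration
f = m * a + b * v + k * x        # force the model predicts along the trajectory
```

    A useful sanity check: the peak speed of a minimum-jerk reach is 1.875(xf - x0)/T, here 0.375 m/s.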

  20. Robots as Imagined in the Television Series Humans.

    Science.gov (United States)

    Wicclair, Mark R

    2018-07-01

    Humans is a science fiction television series set in what appears to be present-day London. What makes it science fiction is that in London and worldwide, there are robots that look like humans and can mimic human behavior. The series raises several important ethical and philosophical questions about artificial intelligence and robotics, which should be of interest to bioethicists.

  1. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    H. Levent Akın

    2008-11-01

    Full Text Available In this work, several localization algorithms that were designed and implemented for the Cerberus'05 Robot Soccer Team are analyzed and compared. These algorithms are used for global localization of autonomous mobile agents in the robotic soccer domain, to overcome the uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), where the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method for global localization, where just one sample per percept is drawn. Within this method another novel method, My Environment (ME), is designed to hold the history and overcome the lack of information due to the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving First Place in the Challenges. In this work, these methods, together with SRL, which is a widely used and successful localization algorithm, are tested both offline and in real time. First they are tested on a challenging data set that has been used by many researchers, and compared in terms of error rate against different levels of noise and sparsity. In addition, the time required to recover from kidnapping and the speed of the methods are tested and compared. Then their performance is tested in real-time tests with scenarios like the ones in the Technical Challenges at RoboCup. The main aim is to find the best method: one that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.
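
    For readers unfamiliar with the Monte Carlo Localization family these methods build on, the predict-weight-resample cycle can be shown in a minimal one-dimensional particle filter. This sketch is purely illustrative and unrelated to the Cerberus'05 implementation; the world model, noise levels and motion command are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D world: the robot moves along a 10 m line and senses its distance to a wall at 0.
true_pos, n_particles = 2.0, 500
particles = rng.uniform(0, 10, n_particles)   # global uncertainty: uniform prior

for step in range(20):
    # Predict: commanded motion of +0.2 m; process noise applied to the particles.
    true_pos += 0.2
    particles += 0.2 + rng.normal(0, 0.05, n_particles)
    # Weight: Gaussian likelihood of a noisy range measurement (sigma = 0.1 m).
    z = true_pos + rng.normal(0, 0.1)
    weights = np.exp(-0.5 * ((particles - z) / 0.1) ** 2) + 1e-300
    weights /= weights.sum()
    # Resample: multinomial here; systematic resampling is preferred in practice.
    particles = particles[rng.choice(n_particles, n_particles, p=weights)]

estimate = particles.mean()   # should converge near true_pos = 2.0 + 20 * 0.2 = 6.0
```

    Sensor Resetting Localization extends this cycle by injecting fresh particles drawn from the sensor model whenever the weights indicate the filter has lost track.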

  2. Comparison of Localization Methods for a Robot Soccer Team

    Directory of Open Access Journals (Sweden)

    Hatice Kose

    2006-12-01

    Full Text Available In this work, several localization algorithms that were designed and implemented for the Cerberus'05 robot soccer team are analyzed and compared. These algorithms are used for the global localization of autonomous mobile agents in the robotic soccer domain, to overcome the uncertainty in the sensors, the environment and the motion model. The algorithms are Reverse Monte Carlo Localization (R-MCL), Simple Localization (S-Loc) and Sensor Resetting Localization (SRL). R-MCL is a hybrid method based on both Markov Localization (ML) and Monte Carlo Localization (MCL), where the ML module finds the region where the robot should be and MCL predicts the geometrical location with high precision by selecting samples in this region. S-Loc is another localization method in which just one sample per percept is drawn for global localization. Within this method a further novel method, My Environment (ME), is designed to hold the history and compensate for the lack of information caused by the drastic decrease in the number of samples in S-Loc. ME together with S-Loc was used in the Technical Challenges at RoboCup 2005 and played an important role in achieving first place in the Challenges. In this work, these methods, together with SRL, which is a widely used and successful localization algorithm, are evaluated with both offline and real-time tests. First they are tested on a challenging data set used by many researchers and compared in terms of error rate against different levels of noise and sparsity. In addition, the time required to recover from kidnapping and the speed of the methods are measured and compared. Then their performance is assessed in real-time tests with scenarios like the ones in the Technical Challenges at RoboCup. The main aim is to find the best method: one that is robust and fast, requires less computational power and memory than similar approaches, and is accurate enough for the high-level decision making that is vital for robot soccer.
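    The predict-weight-resample cycle that underlies MCL (and that R-MCL and SRL modify) can be sketched in a few lines. The following is a minimal one-dimensional toy, not the Cerberus'05 implementation; the state, landmark, and noise parameters are illustrative assumptions.

```python
import math
import random

def mcl_step(particles, control, measurement, landmark,
             motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample cycle of Monte Carlo Localization (1-D toy)."""
    # Predict: apply the noisy motion model to every particle.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # Weight: likelihood of the observed range to a known landmark.
    def likelihood(p):
        err = measurement - abs(landmark - p)
        return math.exp(-err * err / (2.0 * sense_noise ** 2))

    weights = [likelihood(p) for p in moved]
    total = sum(weights) or 1e-12  # guard against all-zero weights
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))
```

    Per the abstract, Sensor Resetting Localization extends this loop by injecting fresh samples drawn from the sensor model when the particle set degenerates, while R-MCL instead restricts sampling to the region proposed by a Markov Localization grid.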

  3. Multi-Robot Remote Interaction with FS-MAS

    Directory of Open Access Journals (Sweden)

    Yunliang Jiang

    2013-02-01

    Full Text Available The need to reduce bandwidth and to improve productivity, autonomy and scalability in multi-robot teleoperation has been recognized for a long time. In this article we propose a novel finite-state-machine mobile agent based on the network interaction service model, namely FS-MAS. This model consists of three finite state machines: the Finite State Mobile Agent (FS-Agent), which is the basic service module; the Service Content Finite State Machine (Content-FS), which uses the XML language to define workflows and to describe the service content and the service computation process; and the Mobile Agent computation model Finite State Machine (MACM-FS), which describes the service implementation. Finally, we apply this service model to a multi-robot system, initially realizing the completion of complex tasks in the form of multi-robot scheduling. This demonstrates that the robot's intelligence is greatly improved, and provides a wide solution space for critical issues such as task division, rational and efficient use of resources, and multi-robot collaboration.
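    At its core, each of the three machines is an ordinary event-driven finite state machine. A minimal sketch of the FS-Agent idea follows; the states and events are hypothetical, chosen only to illustrate a mobile agent's service lifecycle.

```python
class FiniteStateAgent:
    """Minimal event-driven finite state machine (illustrative of FS-Agent)."""

    def __init__(self, initial, transitions):
        # transitions maps (state, event) -> next_state
        self.state = initial
        self.transitions = transitions
        self.history = [initial]

    def handle(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        self.history.append(self.state)
        return self.state

# Hypothetical service lifecycle for one teleoperated robot task.
agent = FiniteStateAgent("idle", {
    ("idle", "task_assigned"): "migrating",
    ("migrating", "arrived"): "executing",
    ("executing", "done"): "reporting",
    ("reporting", "ack"): "idle",
})
```

    In the FS-MAS design the transition table itself would come from a Content-FS workflow definition written in XML, rather than being hard-coded as here.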

  4. Robot-mediated interviews--how effective is a humanoid robot as a tool for interviewing young children?

    Directory of Open Access Journals (Sweden)

    Luke Jai Wood

    Full Text Available Robots have been used in a variety of education, therapy and entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ from their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an 'interviewer' for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications.

  5. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  6. Feasibility of interactive gesture control of a robotic microscope

    Directory of Open Access Journals (Sweden)

    Antoni Sven-Thomas

    2015-09-01

    Full Text Available Robotic devices are becoming increasingly available in clinics. One example is the motorized surgical microscope. While there are different scenarios for using such devices for autonomous tasks, simple and reliable interaction with the device is key to acceptance by surgeons. We study how gesture tracking can be integrated within the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near-field gesture sensors is feasible and can be of use in clinical scenarios where robotic devices are used in direct proximity of patients and physicians.
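    A dead zone plus a clamped proportional gain is one common way to turn tracked hand positions into safe field-of-view adjustments. The sketch below is our illustration of that idea, not the authors' controller; all parameter values are assumed.

```python
def gesture_to_pan(palm_xy, dead_zone=10.0, gain=0.05, max_step=2.0):
    """Map a tracked palm offset (mm) to a clamped field-of-view pan step (mm).

    A dead zone suppresses hand tremor; the clamp bounds how fast the
    microscope's field of view may move per tracking frame.
    """
    def axis(v):
        if abs(v) < dead_zone:          # ignore small motion inside the dead zone
            return 0.0
        step = gain * (v - dead_zone * (1.0 if v > 0 else -1.0))
        return max(-max_step, min(max_step, step))

    return axis(palm_xy[0]), axis(palm_xy[1])
```

    With these example values, a palm held 30 mm to the right nudges the view by 1 mm per frame, while arbitrarily large excursions are capped at 2 mm per frame.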

  7. Rehabilitation exoskeletal robotics. The promise of an emerging field.

    Science.gov (United States)

    Pons, José L

    2010-01-01

    Exoskeletons are wearable robots exhibiting a close cognitive and physical interaction with the human user. These are rigid robotic exoskeletal structures that typically operate alongside human limbs. Scientific and technological work on exoskeletons began in the early 1960s but has only recently been applied to rehabilitation and functional substitution in patients suffering from motor disorders. Key topics for further development of exoskeletons in rehabilitation scenarios include the need for robust human-robot multimodal cognitive interaction, safe and dependable physical interaction, true wearability and portability, and user aspects such as acceptance and usability. This discussion provides an overview of these aspects and draws conclusions regarding potential future research directions in robotic exoskeletons.

  8. Human-Like Room Segmentation for Domestic Cleaning Robots

    Directory of Open Access Journals (Sweden)

    David Fleer

    2017-11-01

    Full Text Available Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room–border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
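    The second stage, cutting the map graph at the edges the classifier marks as room borders, reduces in its simplest form to finding connected components. The sketch below is that simplification, not the paper's full graph-clustering method, and the classifier interface is hypothetical.

```python
from collections import defaultdict

def segment_rooms(nodes, edges, is_border):
    """Split a topo-metric map graph into rooms.

    nodes: iterable of node ids; edges: iterable of (u, v) pairs;
    is_border: classifier returning True if an edge crosses a room border.
    Returns {node: room_label}, i.e. the connected components that remain
    after all classified border edges are removed.
    """
    adj = defaultdict(list)
    for u, v in edges:
        if not is_border((u, v)):
            adj[u].append(v)
            adj[v].append(u)

    rooms, label = {}, 0
    for start in nodes:
        if start in rooms:
            continue
        stack = [start]           # iterative depth-first flood fill
        while stack:
            n = stack.pop()
            if n in rooms:
                continue
            rooms[n] = label
            stack.extend(adj[n])
        label += 1
    return rooms
```

    In the paper the border classifier is trained on user-annotated maps from multiple robot sensors, which is what makes the resulting segmentation human-like.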

  9. Fundamentals of ergonomic exoskeleton robots

    OpenAIRE

    Schiele, A.

    2008-01-01

    This thesis is the first to provide the fundamentals of ergonomic exoskeleton design. The fundamental theory as well as technology necessary to analyze and develop ergonomic wearable robots interacting with humans is established and validated by experiments and prototypes. The fundamentals are (1) a new theoretical framework for analyzing physical human robot interaction (pHRI) with exoskeletons, and (2) a clear set of design rules of how to build wearable, portable exoskeletons to easily and...

  10. Artificial companions: empathy and vulnerability mirroring in human-robot relations

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends (among other things) on the robot’s capacity to be a recipient of human empathy, and

  11. Modeling Mixed Groups of Humans and Robots with Reflexive Game Theory

    Science.gov (United States)

    Tarasenko, Sergey

    The Reflexive Game Theory is based on decision-making principles similar to those used by humans. The theory considers groups of subjects and allows one to predict which action from a given set each subject in the group will choose. It is also possible to influence a subject's decision so that he or she makes a particular choice. The purpose of this study is to illustrate how robots can deter humans from risky actions. To identify risky actions, Asimov's Three Laws of Robotics are employed. By fusing the RGT's power to convince humans on the mental level with the safety of Asimov's Laws, we illustrate how robots in mixed groups of humans and robots can influence human subjects in order to deter them from risky actions. We suggest that this fusion has the potential to yield robots with human-like motor behavior and appearance driven by human-like decision-making algorithms.

  12. Appeal and Perceived Naturalness of a Soft Robotic Tentacle

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2018-01-01

    Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. This study investigates how a silicone-based pneumatically actuated soft robotic tentacle is perceived in interaction. Quantitative and qualitative data was gathered from questionnaires (N...

  13. Cognitive neuroscience robotics B: analytic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.

  14. Traveling Robots and Their Cultural Baggage

    DEFF Research Database (Denmark)

    Blond, Lasse

    When social robots are imported from Asia to Europe, they bring along with them cultural baggage consisting of foreign sociotechnical imaginaries. The effort to adopt the robot Silbot to Nordic social services exposed unfamiliar and culture-dependent views of care, cognition, health and human nature. Studying Silbot in “the wild” highlighted these issues as well as the human-robot interaction and the adaptation of the robot to real-life praxis. The importance of comprehending robots as parts of sociotechnical ensembles is emphasized, as is attention to how robots are shaped by the cultural context in the recipient countries.

  15. On the role of exchange of power and information signals in control and stability of the human-robot interaction

    Science.gov (United States)

    Kazerooni, H.

    1991-01-01

    A human's ability to perform physical tasks is limited, not only by his intelligence, but by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with the extender. The aim is to determine the fundamental building blocks of an intelligent controller, a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and the extender 'translates' them into a desired position.
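    One standard way to 'translate' a measured contact force into a desired extender motion is an admittance law: the controller renders a virtual mass-damper that the human pushes around. The sketch below assumes this scheme with illustrative gains; it is not Kazerooni's actual control law, and it ignores the EMG channel.

```python
def admittance_step(x, v, f_contact, dt, m=5.0, b=20.0):
    """Integrate the virtual mass-damper  m*a + b*v = f_contact  one step.

    The human's contact force f_contact is turned into a desired extender
    position x (and velocity v) that a low-level position loop would track.
    Gains m and b are illustrative, not from the paper.
    """
    a = (f_contact - b * v) / m   # acceleration of the virtual dynamics
    v = v + a * dt                # explicit Euler integration
    x = x + v * dt
    return x, v
```

    Under a constant 10 N push these example gains settle at v = 10/20 = 0.5 m/s, so the extender drifts steadily in the direction of the applied force and stops once the force is removed.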

  16. FIRST robots compete

    Science.gov (United States)

    2000-01-01

    FIRST teams and their robots work to go through the right motions at the FIRST competition. Students from all over the country are at the KSC Visitor Complex for the FIRST (For Inspiration and Recognition of Science and Technology) Southeast Regional competition March 9-11 in the Rocket Garden. Teams of high school students are testing the limits of their imagination using robots they have designed, with the support of business and engineering professionals and corporate sponsors, to compete in a technological battle against other schools' robots. Of the 30 high school teams competing, 16 are Florida teams co-sponsored by NASA and KSC contractors. Local high schools participating are Astronaut, Bayside, Cocoa Beach, Eau Gallie, Melbourne, Melbourne Central Catholic, Palm Bay, Rockledge, Satellite, and Titusville.

  17. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded contributions from the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  18. Distributed Robotics Education

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distributed … to be changed, related to multirobot control and human-robot interaction control from virtual to physical representation. The proposed system is valuable for bringing a vast number of issues into education – such as parallel programming, distribution, communication protocols, master dependency, connectivity...

  19. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    Science.gov (United States)

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.
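    The decoupling idea behind eigenmovements can be stated compactly. The following sketch uses our own notation, not necessarily the authors', for linearized two-segment (hip-ankle) dynamics:

```latex
% Linearized multi-segment dynamics, coupled through the mass matrix M
% and stiffness matrix K:
M\ddot{q} = Kq + u .
% Let W collect the generalized eigenvectors, K w_i = \lambda_i M w_i,
% and change coordinates q = W\xi with input u = M W v. Then
\ddot{\xi}_i = \lambda_i \xi_i + v_i , \qquad i = 1,\dots,n ,
% i.e. each eigenmovement \xi_i obeys an independent, coupling-free scalar
% equation that can be stabilized by its own feedback law.
```

    This is why the EM controller needs only a set of scalar loops where the state-of-the-art robotics solution uses full state feedback.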

  20. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independence assumption. Experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  1. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop a concept for a new robot system, a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with the two essential technologies developed for the system. One is an autonomous navigation program with a human-intervention function, which is indispensable for human and robot collaboration. The other is a position estimation method using a gyroscope and TV images to make the estimation accurate enough for safe navigation. The feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)
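    A common minimal scheme for fusing a drifting gyroscope with an absolute, vision-based fix is a complementary filter. The sketch below is our illustration under that assumption; it is not the paper's estimator, and the blend gain is an arbitrary example value.

```python
def fuse_heading(theta, gyro_rate, vision_theta, dt, k=0.02):
    """One step of a complementary filter for a robot's heading estimate.

    The gyroscope is smooth but drifts; the vision fix (e.g. derived from
    a TV image of a known scene) is noisy but drift-free. Integrate the
    gyro, then pull the estimate toward the absolute vision measurement.
    """
    predicted = theta + gyro_rate * dt                 # dead-reckoned heading
    return predicted + k * (vision_theta - predicted)  # blend in vision fix
```

    A small k trusts the gyro for smoothness while still bounding the long-term drift to a fixed offset rather than letting it grow without limit.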

  2. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    OpenAIRE

    BIROUAS Flaviu Ionut; NILGESZ Arnold

    2016-01-01

    This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand, from which the robotic system will be developed. The term used for such measurements is anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limits of the finger joints of the human hand.

  3. Fish-robot interactions in a free-swimming environment: Effects of speed and configuration of robots on live fish

    Science.gov (United States)

    Butail, Sachit; Polverino, Giovanni; Phamduy, Paul; Del Sette, Fausto; Porfiri, Maurizio

    2014-03-01

    We explore fish-robot interactions in a comprehensive set of experiments designed to highlight the effects of the speed and configuration of bioinspired robots on live zebrafish. The robot design and movement are inspired by salient features of attraction in zebrafish and include enhanced coloration, the aspect ratio of a fertile female, and carangiform/subcarangiform locomotion. The robots are autonomously controlled to swim in circular trajectories in the presence of live fish. Our results indicate that robot configuration significantly affects both the fish's distance to the robots and the time spent near them.

  4. What makes robots social? : A user’s perspective on characteristics for social human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2015-01-01

    A common description of a social robot is for it to be capable of communicating in a humanlike manner. However, a description of what communicating in a ‘humanlike manner’ means often remains unspecified. This paper provides a set of social behaviors and certain specific features social robots

  5. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.; Demircan, E.; De Sapio, V.; Sentis, L.; Besier, T.; Delp, S.

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for the dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  6. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.

    2009-05-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for the dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  7. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  8. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < …) on metrics including the risk of human-robot collision, and the differences in the lexicons are analyzed.
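    At its core, choosing an optimal lexicon is a combinatorial search over modality subsets against measured performance metrics. The brute-force sketch below conveys only that structure; the scoring function and modality names are hypothetical, and the paper's actual methodology mixes simulation with real-world trials.

```python
from itertools import combinations

def best_lexicon(modalities, score, k):
    """Return the k-modality lexicon maximizing a scalar performance score.

    score maps a candidate lexicon (tuple of modalities) to a float that
    would, in practice, aggregate metrics such as task completion time
    and collision risk measured in simulation or experiments.
    """
    return max(combinations(modalities, k), key=score)
```

    Exhaustive search is feasible only for small modality sets; for larger design spaces one would swap in a heuristic or multi-objective optimizer.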

  9. Robot Choreography

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith

    2016-01-01

    We propose a robust framework for combining performance paradigms with human robot interaction (HRI) research. Following an analysis of several case studies that combine the performing arts with HRI experiments, we propose a methodology and “best practices” for implementing choreography and other performance paradigms in HRI experiments. Case studies include experiments conducted in laboratory settings, “in the wild”, and live performance settings. We consider the technical and artistic challenges of designing and staging robots alongside humans in these various settings, and discuss how to combine...

  10. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2017-04-01

    Full Text Available Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  11. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    BIROUAS Flaviu Ionut

    2016-11-01

    Full Text Available This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand from which the robotic system is developed; the term used for such measurements is anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limits of the finger joints of the human hand.

  12. Well done, Robot! : the importance of praise and presence in human-robot collaboration

    NARCIS (Netherlands)

    Reichenbach, J.; Bartneck, C.; Carpenter, J.; Dautenhahn, K.

    2006-01-01

    This study reports on an experiment in which participants had to collaborate with either another human or a robot (partner). The robot would either be present in the room or only be represented on the participants' computer screen (presence). Furthermore, the participants' partner would either make

  13. ROSAPL: towards a heterogeneous multi‐robot system and Human interaction framework

    OpenAIRE

    Boronat Roselló, Emili

    2014-01-01

    The appearance of numerous robotic frameworks and middleware has provided researchers with reliable hardware and software units, avoiding the need to develop ad-hoc platforms and letting them focus on improving robots' high-level capabilities and behaviours. Despite this, none of these frameworks considers social capabilities as a factor in robot design. In a world that every day seems more and more connected, with the slow but steady advance of th...

  14. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. The research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot intended for everyday human work activities in factories or under extreme environments must operate in ordinary human work environments, so a robot of similar size, shape, and function to a human being is desirable. Such a robot, having a head with two eyes, two ears, and a mouth, can hold a conversation with human beings, can walk on two legs under autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (real-time remote-control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  15. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot serving the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.

  16. Towards Using a Generic Robot as Training Partner

    DEFF Research Database (Denmark)

    Sørensen, Anders Stengaard; Savarimuthu, Thiusius Rajeeth; Nielsen, Jacob

    2014-01-01

    In this paper, we demonstrate how a generic industrial robot can be used as a training partner for upper limb training. The motion path and human/robot interaction of a non-generic upper-arm training robot is transferred to a generic industrial robot arm, and we demonstrate that the robot arm can implement the same type of interaction but can expand the training regime to include both upper arm and shoulder training. We compare the generic robot to two affordable but custom-built training robots, and outline interesting directions for future work based on these training robots.

  17. A Motion System for Social and Animated Robots

    Directory of Open Access Journals (Sweden)

    Jelle Saldien

    2014-05-01

    Full Text Available This paper presents an innovative motion system that is used to control the motions and animations of a social robot. The social robot Probo is used to study Human-Robot Interactions (HRI), with a special focus on Robot Assisted Therapy (RAT). When used for therapy it is important that a social robot is able to create an “illusion of life” so as to become a believable character that can communicate with humans. The design of the motion system in this paper is based on insights from the animation industry. It combines operator-controlled animations with low-level autonomous reactions such as attention and emotional state. The motion system has a Combination Engine, which combines motion commands that are triggered by a human operator with motions that originate from different units of the cognitive control architecture of the robot. This results in an interactive robot that seems alive and has a certain degree of “likeability”. The Godspeed Questionnaire Series is used to evaluate the animacy and likeability of the robot in China, Romania and Belgium.

  18. Trends in control and decision-making for human-robot collaboration systems

    CERN Document Server

    Zhang, Fumin

    2017-01-01

    This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. A measure of human collaboration being necessary for the optimal operation of any robotic system, the contributors exploit a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how such human strengths as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments. The book focuses on quantitative methods and control design for guaranteed robot performance and balanced human experience. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering: one-human–one-robot collaboration; one-human–multiple-robot collaboration; and human–swarm collaboration. Individual topic ar...

  19. ROBOT LEARNING OF OBJECT MANIPULATION TASK ACTIONS FROM HUMAN DEMONSTRATIONS

    Directory of Open Access Journals (Sweden)

    Maria Kyrarini

    2017-08-01

    Full Text Available Robot learning from demonstration is a method which enables robots to learn in a similar way to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. The multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), the model of the demonstrated trajectories is learned. The learned trajectory is generated by Gaussian Mixture Regression (GMR) from the learned Gaussian mixture model. In the online working phase, the sequence of actions is identified, and experimental results show that the robot performs the learned task successfully.
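The demonstration-similarity step can be sketched with a minimal dynamic time warping distance, which aligns two trajectories that differ in speed (an illustrative implementation, not the authors' code):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match along the warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two demonstrations of the same path, one performed more slowly (a repeated sample),
# still align with zero cost.
demo1 = [0.0, 1.0, 2.0, 3.0]
demo2 = [0.0, 1.0, 1.0, 2.0, 3.0]
print(dtw_distance(demo1, demo2))  # -> 0.0
```

Demonstrations whose pairwise DTW distances are smallest would then be the "most similar" ones kept for learning.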

  20. A Framework for Interactive Teaching of Virtual Borders to Mobile Robots

    OpenAIRE

    Sprute, Dennis; Rasch, Robin; Tönnies, Klaus; König, Matthias

    2017-01-01

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep the control over them. One important aspect is the control of a robot's workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framewo...

  1. Instrumented Compliant Wrist with Proximity and Contact Sensing for Close Robot Interaction Control

    Directory of Open Access Journals (Sweden)

    Pascal Laferrière

    2017-06-01

    Full Text Available Compliance has been exploited in various forms in robotic systems to allow rigid mechanisms to come into contact with fragile objects, or with complex shapes that cannot be accurately modeled. Force feedback control has been the classical approach for providing compliance in robotic systems. However, by integrating other forms of instrumentation with compliance into a single device, it is possible to extend close monitoring of nearby objects before and after contact occurs. As a result, safer and smoother robot control can be achieved both while approaching and while touching surfaces. This paper presents the design and extensive experimental evaluation of a versatile, lightweight, and low-cost instrumented compliant wrist mechanism which can be mounted on any rigid robotic manipulator in order to introduce a layer of compliance while providing the controller with extra sensing signals during close interaction with an object’s surface. Arrays of embedded range sensors provide real-time measurements on the position and orientation of surfaces, either located in proximity or in contact with the robot’s end-effector, which permits close guidance of its operation. Calibration procedures are formulated to overcome inter-sensor variability and achieve the highest available resolution. A versatile solution is created by embedding all signal processing, while wireless transmission connects the device to any industrial robot’s controller to support path control. Experimental work demonstrates the device’s physical compliance as well as the stability and accuracy of the device outputs. Primary applications of the proposed instrumented compliant wrist include smooth surface following in manufacturing, inspection, and safe human-robot interaction.
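The per-sensor calibration against inter-sensor variability can be sketched as a least-squares gain/offset fit of raw readings to reference distances. This is a hypothetical illustration; the linear sensor model and numbers are assumptions, not the paper's procedure:

```python
import numpy as np

def calibrate_sensor(raw_readings, reference_distances):
    """Least-squares gain/offset mapping raw range readings to true distances."""
    A = np.column_stack([raw_readings, np.ones(len(raw_readings))])
    (gain, offset), residuals, rank, sv = np.linalg.lstsq(A, reference_distances, rcond=None)
    return gain, offset

# Hypothetical sensor whose raw output is distorted as raw = 2*true + 1 (mm).
true_mm = np.array([10.0, 20.0, 30.0, 40.0])
raw = 2.0 * true_mm + 1.0
gain, offset = calibrate_sensor(raw, true_mm)
print(round(gain, 6), round(offset, 6))  # -> 0.5 -0.5
```

Applying `gain * raw + offset` then recovers the reference distances, removing that sensor's individual bias before the readings are fused.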

  2. When in Rome: the role of culture & context in adherence to robot recommendations

    NARCIS (Netherlands)

    Wang, L.; Rau, P.-L.P.; Evers, V.; Robinson, B.K.; Hinds, P.

    2010-01-01

    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of

  3. Investigating the effect of the reality gap on the human psychophysiological state in the context of human-swarm interaction

    Directory of Open Access Journals (Sweden)

    Gaëtan Podevijn

    2016-09-01

    Full Text Available The reality gap is the discrepancy between simulation and reality—the same behavioural algorithm results in different robot swarm behaviours in simulation and in reality (with real robots). In this paper, we study the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. We compare the psychophysiological reactions of 28 participants interacting with a simulated robot swarm and with a real (non-simulated) robot swarm. Our results show that a real robot swarm provokes stronger reactions in our participants than a simulated robot swarm. We also investigate how to mitigate the effect of the reality gap (i.e., how to diminish the difference in the psychophysiological reactions between reality and simulation) by comparing psychophysiological reactions in simulation displayed on a computer screen and psychophysiological reactions in simulation displayed in virtual reality. Our results show that our participants tend to have stronger psychophysiological reactions in simulation displayed in virtual reality (suggesting a potential way of diminishing the effect of the reality gap).

  4. Design of robust robotic proxemic behaviour

    NARCIS (Netherlands)

    Torta, E.; Cuijpers, R.H.; Juola, J.F.; Pol, van der D.; Mutlu, B.; Bartneck, C.; Ham, J.R.C.; Evers, V.; Kanda, T.

    2011-01-01

    Personal robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behaviour-based robotic architecture that, (1) allows the robot to navigate safely in a cluttered and

  5. Robots for use in autism research.

    Science.gov (United States)

    Scassellati, Brian; Admoni, Henny; Matarić, Maja

    2012-01-01

    Autism spectrum disorders are a group of lifelong disabilities that affect people's ability to communicate and to understand social cues. Research into applying robots as therapy tools has shown that robots seem to improve engagement and elicit novel social behaviors from people (particularly children and teenagers) with autism. Robot therapy for autism has been explored as one of the first application domains in the field of socially assistive robotics (SAR), which aims to develop robots that assist people with special needs through social interactions. In this review, we discuss the past decade's work in SAR systems designed for autism therapy by analyzing robot design decisions, human-robot interactions, and system evaluations. We conclude by discussing challenges and future trends for this young but rapidly developing research area.

  6. Project InterActions: A Multigenerational Robotic Learning Environment

    Science.gov (United States)

    Bers, Marina U.

    2007-12-01

    This paper presents Project InterActions, a series of 5-week workshops in which very young learners (4- to 7-year-old children) and their parents come together to build and program a personally meaningful robotic project in the context of a multigenerational robotics-based community of practice. The goal of these family workshops is to teach both parents and children about the mechanical and programming aspects involved in robotics, as well as to initiate them in a learning trajectory with and about technology. Results from this project address different ways in which parents and children learn together and provide insights into how to develop educational interventions that would educate parents, as well as children, in new domains of knowledge and skills such as robotics and new technologies.

  7. Movement Performance of Human-Robot Cooperation Control Based on EMG-Driven Hill-Type and Proportional Models for an Ankle Power-Assist Exoskeleton Robot.

    Science.gov (United States)

    Ao, Di; Song, Rong; Gao, JinWu

    2017-08-01

    Although the merits of electromyography (EMG)-based control of powered assistive systems have been certified, the factors that affect the performance of EMG-based human-robot cooperation, which are very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve the goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). The two control models could estimate the real-time ankle joint torque, and HNM is more accurate and can account for the change of the joint angle and muscle dynamics. Then, eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With the various levels of assist based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistant torque and kinematic signals to characterize the movement performances, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect the muscular efforts. The results demonstrated that the muscular effort and smoothness of tracking movements decreased with an increase in the assistant ratio. Compared with LPM, subjects made lower physical efforts and generated smoother movements when using HNM, which implied that a more physiologically appropriate model could enable more natural and human-like human-robot cooperation and has potential value for improvement of human-exoskeleton interaction in future applications.
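The two movement-quality metrics described, RMSE for tracking accuracy and RMSJ for smoothness, can be sketched as follows (an illustrative computation; signal names and sampling interval are assumptions):

```python
import numpy as np

def rmse(actual, target):
    """Root mean square tracking error between actual and target trajectories."""
    actual, target = np.asarray(actual, float), np.asarray(target, float)
    return float(np.sqrt(np.mean((actual - target) ** 2)))

def rmsj(position, dt):
    """Root mean square jerk (third derivative of position); lower is smoother."""
    jerk = np.diff(np.asarray(position, float), n=3) / dt ** 3
    return float(np.sqrt(np.mean(jerk ** 2)))

# A perfectly tracked constant-velocity motion has zero error and zero jerk.
target = np.arange(101) * 0.5
print(rmse(target, target))  # -> 0.0
print(rmsj(target, 0.01))    # -> 0.0
```

In the study's terms, lower RMSE against the on-screen target reflects better tracking, and lower RMSJ reflects smoother, more natural movement.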

  8. Social Robotic Experience and Media Communication Practices: An Exploration on the Emotional and Ritualized Human-technology-relations

    Directory of Open Access Journals (Sweden)

    Christine Linke

    2013-01-01

    Full Text Available This article approaches the subject of social robots by focusing on the emotional relations people establish with media and information and communication technologies (ICTs) in their everyday life. It examines the human-technology relation from a social-studies point of view, seeking to raise questions that connect research on human relationships with the topic of human-technology relations, especially human-humanoid relations. To explore these relations, theoretical ideas of the mediatization of communication and of a ritual interaction order are applied. Ritual theory is particularly used to enable a focus on emotion as a significant dimension in analyzing social technologies. This explorative article refers to empirical findings regarding media communication practices in close relationships. It argues that, following the developed approach regarding mediatized and ritualized relational practices, useful insights for a conceptualization of the human-social robot relation can be achieved. The article concludes with remarks regarding the challenge of an empirical approach to human-social robot relations.

  9. Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback

    Directory of Open Access Journals (Sweden)

    Cetin Mericli

    2011-06-01

    Full Text Available A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder and more tedious and error-prone as the complexity of the task increases. In this paper, we present a new learning-from-demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over control to correct the behavior when the robot selects a wrong action to execute. Corrections are captured as new state-action pairs, and during autonomous execution the default controller output is replaced by the demonstrated corrections when the current state of the robot is determined to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball-dribbling task performed against stationary defender robots in a robot soccer scenario, where physical Aldebaran Nao humanoid robots are used. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
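The correction-reuse logic can be sketched as a nearest-neighbour lookup over the correction database (a hypothetical illustration; the class, threshold, and actions are assumptions, not the authors' implementation):

```python
import numpy as np

class CorrectiveController:
    """Replay demonstrated corrections near previously corrected states."""

    def __init__(self, default_policy, threshold):
        self.default_policy = default_policy   # the hand-coded controller
        self.threshold = threshold             # state-similarity radius
        self.states, self.actions = [], []     # correction database

    def add_correction(self, state, action):
        """Store a state-action pair captured while the teacher took over."""
        self.states.append(np.asarray(state, float))
        self.actions.append(action)

    def act(self, state):
        state = np.asarray(state, float)
        if self.states:
            dists = [np.linalg.norm(state - s) for s in self.states]
            i = int(np.argmin(dists))
            if dists[i] <= self.threshold:     # similar to a corrected state
                return self.actions[i]         # replay the human correction
        return self.default_policy(state)      # otherwise use hand-coded policy

ctrl = CorrectiveController(default_policy=lambda s: "dribble", threshold=0.5)
ctrl.add_correction([1.0, 0.0], "pass")
print(ctrl.act([1.1, 0.0]))  # -> pass    (near a corrected state)
print(ctrl.act([5.0, 5.0]))  # -> dribble (far away: default controller)
```

The similarity test and threshold stand in for whatever state metric the paper actually uses.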

  10. Playful Interaction with Voice Sensing Modular Robots

    DEFF Research Database (Denmark)

    Heesche, Bjarke; MacDonald, Ewen; Fogh, Rune

    2013-01-01

    This paper describes a voice sensor, suitable for modular robotic systems, which estimates the energy and fundamental frequency, F0, of the user’s voice. Through a number of example applications and tests with children, we observe how the voice sensor facilitates playful interaction between children and two different robot configurations. In future work, we will investigate if such a system can motivate children to improve voice control and explore how to extend the sensor to detect emotions in the user’s voice.
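A minimal estimator of the two quantities the sensor reports, frame energy and F0, can be sketched via autocorrelation (an illustrative approach with assumed pitch bounds, not the paper's implementation):

```python
import numpy as np

def estimate_f0(frame, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    frame = np.asarray(frame, float) - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)      # shortest plausible pitch period
    lag_max = int(sample_rate / fmin)      # longest plausible pitch period
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max + 1]))
    return sample_rate / lag

def rms_energy(frame):
    """Short-term energy (RMS amplitude) of the frame."""
    frame = np.asarray(frame, float)
    return float(np.sqrt(np.mean(frame ** 2)))

fs = 16000
t = np.arange(0, 0.1, 1.0 / fs)
tone = np.sin(2 * np.pi * 200.0 * t)       # 200 Hz test tone
print(round(estimate_f0(tone, fs)))        # -> 200
```

Restricting the search to a plausible pitch range keeps the peak-picking from locking onto harmonics or very long lags.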

  11. More About Hazard-Response Robot For Combustible Atmospheres

    Science.gov (United States)

    Stone, Henry W.; Ohm, Timothy R.

    1995-01-01

    Report presents additional information about design and capabilities of mobile hazard-response robot called "Hazbot III." Designed to operate safely in combustible and/or toxic atmospheres. Includes cameras and chemical sensors helping human technicians determine location and nature of hazard so human emergency team can decide how to eliminate hazard without approaching it themselves.

  12. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  13. Social and Affective Robotics Tutorial

    NARCIS (Netherlands)

    Pantic, Maja; Evers, Vanessa; Deisenroth, Marc; Merino, Luis; Schuller, Björn

    2016-01-01

    Social and Affective Robotics is a growing multidisciplinary field encompassing computer science, engineering, psychology, education, and many other disciplines. It explores how social and affective factors influence interactions between humans and robots, and how affect and social signals can be

  14. Handling uncertainty and networked structure in robot control

    CERN Document Server

    Tamás, Levente

    2015-01-01

    This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...

  15. Influences of a Socially Interactive Robot on the Affective Behavior of Young Children with Disabilities. Social Robots Research Reports, Number 3

    Science.gov (United States)

    Dunst, Carl J.; Prior, Jeremy; Hamby, Deborah W.; Trivette, Carol M.

    2013-01-01

    Findings from two studies of 11 young children with autism, Down syndrome, or attention deficit disorders investigating the effects of Popchilla, a socially interactive robot, on the children's affective behavior are reported. The children were observed under two conditions, child-toy interactions and child-robot interactions, and ratings of child…

  16. Team-oriented leadership: the interactive effects of leader group prototypicality, accountability, and team identification.

    Science.gov (United States)

    Giessner, Steffen R; van Knippenberg, Daan; van Ginkel, Wendy; Sleebos, Ed

    2013-07-01

    We examined the interactive effects of leader group prototypicality, accountability, and team identification on team-oriented behavior of leaders, thus extending the social identity perspective on leadership to the study of leader behavior. An experimental study (N = 152) supported our hypothesis that leader accountability relates more strongly to team-oriented behavior for group nonprototypical leaders than for group prototypical leaders. A multisource field study with leaders (N = 64) and their followers (N = 209) indicated that this interactive effect is more pronounced for leaders who identify more strongly with their team. We discuss how these findings further develop the social identity analysis of leadership. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. [Human-robot global Simulink modeling and analysis for an end-effector upper limb rehabilitation robot].

    Science.gov (United States)

    Liu, Yali; Ji, Linhong

    2018-02-01

    Robot rehabilitation has become a primary therapy method for the urgent rehabilitation demands of paralyzed patients after a stroke. The parameters in rehabilitation training, such as the range of the training, which should be adjustable according to each participant's functional ability, are the key factors influencing the effectiveness of rehabilitation therapy. Therapists design rehabilitation projects based on semiquantitative functional assessment scales and their experience, but such experience-based therapies cannot be implemented directly in robot rehabilitation therapy. This paper models the global human-robot system in Simulink in order to analyze the relationship between the parameters in robot rehabilitation therapy and the patients' movement functional abilities. We compared the shoulder and elbow angles calculated by simulation with the angles recorded by a motion capture system while healthy subjects completed the simulated action. Results showed a remarkable correlation between the simulation data and the experimental data, which verified the validity of the global human-robot Simulink model. In addition, the relationship between the circle radius in the drawing tasks in robot rehabilitation training and the active movement degrees of the shoulder and elbow was fitted by a linear function, also with a remarkable fitting coefficient. The fitted linear function can serve as a quantitative reference for setting robot rehabilitation training parameters.

  18. Beaming into the rat world: enabling real-time interaction between rat and human each at their own scale.

    Directory of Open Access Journals (Sweden)

    Jean-Marie Normand

    Full Text Available Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale.

  19. Beaming into the rat world: enabling real-time interaction between rat and human each at their own scale.

    Science.gov (United States)

    Normand, Jean-Marie; Sanchez-Vives, Maria V; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A; Slater, Mel

    2012-01-01

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale.

  20. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Mahdi Khoramshahi

    Full Text Available The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), in which two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user's point of view). 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the frequency patterns of the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic, but also easier to interact with. This work confirms that people can exploit gaze cues to improve their coordination with the avatar.

  1. Measuring acceptance of an assistive social robot: a suggested toolkit

    NARCIS (Netherlands)

    Heerink, M.; Kröse, B.; Evers, V.; Wielinga, B.

    2009-01-01

    The human-robot interaction community is multidisciplinary by nature and has members from social science to engineering backgrounds. In this paper we aim to provide human-robot developers with a straightforward toolkit to evaluate users' acceptance of assistive social robots they are designing or…

  2. 2nd Workshop on Evaluating Child Robot Interaction

    NARCIS (Netherlands)

    Zaga, Cristina; Lohse, M.; Charisi, Vasiliki; Evers, Vanessa; Neerincx, Marc; Kanda, Takayuki; Leite, Iolanda

    Many researchers have started to explore natural interaction scenarios for children. Whether these children are typically developing or have special needs, evaluating Child-Robot Interaction (CRI) is a challenge. Finding methods that work well and provide reliable data is difficult, for example…

  3. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Towards culture-specific robot customization : a study on greeting interaction with Egyptians

    NARCIS (Netherlands)

    Trovato, G.; Zecca, M.; Sessa, S.; Jamone, L.; Ham, J.R.C.; Hashimoto, K.; Takanishi, A.

    2014-01-01

    A complex relationship exists between national cultural background and interaction with robots, and many earlier studies have investigated how people from different cultures perceive the inclusion of robots into society. Conversely, very few studies have investigated how robots, speaking and using…

  5. Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2010-01-01

    This paper presents a trajectory planning algorithm for a robot operating in dynamic human environments such as pedestrian streets, hospital corridors and train stations. We formulate the problem as planning a minimal-cost trajectory through a potential field, defined from… is enhanced to direct the search and account for the kinodynamic robot constraints. Compared to standard RRT, the algorithm proposed here finds the robot control input that will drive the robot towards a newly sampled point in the configuration space. The effect of the input is simulated, to add a reachable…
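
    The kinodynamic extension step sketched in the abstract (simulate candidate control inputs, keep the one that drives the robot closest to the sampled point) can be illustrated as follows. The unicycle motion model, the control set, and the time step are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def simulate(state, control, dt=0.2):
    """One step of an illustrative unicycle motion model.
    state = (x, y, heading), control = (linear vel, angular vel)."""
    x, y, th = state
    v, w = control
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def best_control(state, sample, controls):
    """Kinodynamic RRT extension step: simulate each admissible
    control input and keep the one whose successor state lies
    closest to the sampled configuration."""
    successors = [simulate(state, u) for u in controls]
    dists = [np.hypot(s[0] - sample[0], s[1] - sample[1]) for s in successors]
    i = int(np.argmin(dists))
    return controls[i], successors[i]
```

In a full planner this step replaces the straight-line extension of standard RRT, so every tree edge is reachable under the robot's dynamics.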

  6. Il circolo tecnologico: dall’uomo al robot e ritorno

    Directory of Open Access Journals (Sweden)

    BONITO OLIVA, ROSSELLA

    2017-12-01

    Full Text Available The Technological Circle: from Man to Robot and Return. Robotics has raised new questions in the already complex relationship between technology and ethics. Robots, more than any other machine, come close to the human abilities of acting and interacting. Robots are created by human intelligence; they are perceived, however, through the collective imagery of post-humanistic culture. To reflect on the relation between robot and man means to investigate whether robots are a reflection of mankind, or whether technological ideology has slowly molded the subject: the man of the present is a robot.

  7. Interactions of Team Mental Models and Monitoring Behaviors Predict Team Performance in Simulated Anesthesia Inductions

    Science.gov (United States)

    Burtscher, Michael J.; Kolbe, Michaela; Wacker, Johannes; Manser, Tanja

    2011-01-01

    In the present study, we investigated how two team mental model properties (similarity vs. accuracy) and two forms of monitoring behavior (team vs. systems) interacted to predict team performance in anesthesia. In particular, we were interested in whether the relationship between monitoring behavior and team performance was moderated by team…

  8. New trends in medical and service robots human centered analysis, control and design

    CERN Document Server

    Chevallereau, Christine; Pisla, Doina; Bleuler, Hannes; Rodić, Aleksandar

    2016-01-01

    Medical and service robotics integrates several disciplines and technologies such as mechanisms, mechatronics, biomechanics, humanoid robotics, exoskeletons, and anthropomorphic hands. This book presents the most recent advances in medical and service robotics, with a stress on human aspects. It collects the selected peer-reviewed papers of the Fourth International Workshop on Medical and Service Robots, held in Nantes, France in 2015, covering topics on: exoskeletons, anthropomorphic hands, therapeutic robots and rehabilitation, cognitive robots, humanoid and service robots, assistive robots and elderly assistance, surgical robots, human-robot interfaces, BMI and BCI, haptic devices and design for medical and assistive robotics. This book offers a valuable addition to existing literature.

  9. Dialogues with social robots enablements, analyses, and evaluation

    CERN Document Server

    Wilcock, Graham

    2017-01-01

    This book explores novel aspects of social robotics, spoken dialogue systems, human-robot interaction, spoken language understanding, multimodal communication, and system evaluation. It offers a variety of perspectives on and solutions to the most important questions about advanced techniques for social robots and chat systems. Chapters by leading researchers address key research and development topics in the field of spoken dialogue systems, focusing in particular on three special themes: dialogue state tracking, evaluation of human-robot dialogue in social robotics, and socio-cognitive language processing. The book offers a valuable resource for researchers and practitioners in both academia and industry whose work involves advanced interaction technology and who are seeking an up-to-date overview of the key topics. It also provides supplementary educational material for courses on state-of-the-art dialogue system technologies, social robotics, and related research fields.

  10. Relationships to Social Robots: Towards a Triadic Analysis of Media-oriented Behavior

    Directory of Open Access Journals (Sweden)

    Joachim R. Höflich

    2013-01-01

    Full Text Available People are living in relationships not only to other people but also to media, including things and robots. As the theory of media equation suggests, people treat media as if they were real persons. This theoretical perspective is also relevant to the case of human-robot interaction. A distinctive feature of such interaction is that the relation to social robots also depends on the human likeness as an anthropomorphic perspective underlines. But people seem to prefer a certain imperfection; otherwise they feel uncanny. This paper explores the idea of robots as media and people’s relations to them. The paper argues that robots are seen not only in the context of a relationship with a medium but also as a medium that ‘mediates.’ This means that robots connect between the environment and other people, but can also divide them. The paper concludes with a proposed perspective that widens a dyadic model of human-robot-interaction towards a triadic analysis.

  11. Admittance Control for Robot Assisted Retinal Vein Micro-Cannulation under Human-Robot Collaborative Mode.

    Science.gov (United States)

    Zhang, He; Gonenc, Berk; Iordachita, Iulian

    2017-10-01

    Retinal vein occlusion is one of the most common retinovascular diseases. Retinal vein cannulation is a potentially effective treatment method for this condition that currently lies, however, at the limits of human capabilities. In this work, the aim is to use robotic systems and advanced instrumentation to alleviate these challenges, and to assist the procedure via a human-robot collaborative mode based on our earlier work on the Steady-Hand Eye Robot and force-sensing instruments. An admittance control method is employed to stabilize the cannula relative to the vein and maintain it inside the lumen during the injection process. A pre-stress strategy is used to prevent the tip of the microneedle from slipping out of the vein during prolonged infusions, and the performance is verified through simulations.
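
    The admittance control idea can be illustrated with a minimal sketch: the robot renders a commanded velocity proportional to the force measured at the tool, with a deadband against sensor noise. The gains, the deadband, and the function name are illustrative assumptions, not the controller used on the Steady-Hand Eye Robot:

```python
import numpy as np

def admittance_velocity(force, damping=2.0, deadband=0.05):
    """Map a measured tool-tip force (N) to a commanded velocity (mm/s).

    Generic admittance law v = f / b, with a small deadband so that
    sensor noise does not produce drift. Gains are illustrative only.
    """
    force = np.asarray(force, dtype=float)
    if np.linalg.norm(force) < deadband:
        return np.zeros_like(force)   # ignore sub-threshold forces
    return force / damping
```

In a collaborative mode, the surgeon's hand force at the tool handle is fed through such a law, so the robot yields smoothly to intended motion while filtering out tremor-scale disturbances.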

  12. Humanoid Robot Head Design Based on Uncanny Valley and FACS

    Directory of Open Access Journals (Sweden)

    Jizheng Yan

    2014-01-01

    Full Text Available Emotional robots are a constant focus of artificial intelligence (AI), and intelligent control of robot facial expression is a hot research topic. This paper focuses on the design of a humanoid robot head, which is carried out in three steps. The first step is to address the uncanny valley for humanoid robots, identifying and avoiding the region of discomfort in the relationship between human beings and robots. The second step is to establish the correspondence between the human face and the robot head: comparing human beings with robots, we analyze the similarities and differences and explore the shared basis and mechanisms by analyzing the Facial Action Coding System (FACS), which guides us toward humanoid expressions. On the basis of the previous two steps, the third step is to construct a robot head; through a series of experiments we test the robot head, which can show several humanoid expressions; through human-robot interaction, we find that people are surprised by the robot head's expressions and feel happy.

  13. Molecular robots with sensors and intelligence.

    Science.gov (United States)

    Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi

    2014-06-17

    CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5 year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize on sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. 
The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA…

  14. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    Full Text Available In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
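
    The quaternion feature idea can be sketched as follows. The axis-angle input and the per-joint feature layout are illustrative assumptions; the paper itself derives joint orientations from OpenNI skeleton tracking:

```python
import numpy as np

def axis_angle_to_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    about `axis`. Quaternions give continuous, singularity-free joint
    features compared with raw Euler angles."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = angle / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def skeleton_feature_vector(joints):
    """Stack the per-joint quaternions into a single feature vector,
    which would then be fed to a gesture classifier (here an NN).
    `joints` is a list of (axis, angle) pairs, one per tracked joint."""
    return np.concatenate([axis_angle_to_quaternion(a, th) for a, th in joints])
```

A classifier trained on such vectors sees 4 values per joint regardless of the skeleton's pose parameterization, which is what makes the features robust across persons.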

  15. Toward an Ontology of Simulated Social Interaction

    DEFF Research Database (Denmark)

    Seibt, Johanna

    2017-01-01

    The paper develops a general conceptual framework for the ontological classification of human-robot interaction. After arguing against fictionalist interpretations of human-robot interactions, I present five notions of simulation or partial realization, formally defined in terms of relationships… and asymmetric modes of realizing Á, called the 'simulatory expansion' of interaction type Á. Simulatory expansions of social interactions can be used to map out different kinds and degrees of sociality in human-human and human-robot interaction, relative to current notions of sociality in philosophy, anthropology, and linguistics. The classificatory framework developed (SISI) thus represents the field of possible simulated social interactions. SISI can be used to clarify which conceptual and empirical grounds we can draw on to evaluate capacities and affordances of robots for social interaction…

  16. Quantifying Age-Related Differences in Human Reaching while Interacting with a Rehabilitation Robotic Device

    Directory of Open Access Journals (Sweden)

    Vivek Yadav

    2010-01-01

    Full Text Available New movement assessment and data analysis methods are developed to quantify human arm motion patterns during physical interaction with robotic devices for rehabilitation. These methods provide metrics for future use in diagnosis, assessment and rehabilitation of subjects with affected arm movements. Specifically, the current study uses existing pattern recognition methods to evaluate the effect of age on performance of a specific motion, reaching to a target by moving the end-effector of a robot (an X-Y table. Differences in the arm motion patterns of younger and older subjects are evaluated using two measures: the principal component analysis similarity factor (SPCA to compare path shape and the number of Fourier modes representing 98% of the path ‘energy’ to compare the smoothness of movement, a particularly important variable for assessment of pathologic movement. Both measures are less sensitive to noise than others previously reported in the literature and preserve information that is often lost through other analysis techniques. Data from the SPCA analysis indicate that age is a significant factor affecting the shapes of target reaching paths, followed by reaching movement type (crossing body midline/not crossing and reaching side (left/right; hand dominance and trial repetition are not significant factors. Data from the Fourier-based analysis likewise indicate that age is a significant factor affecting smoothness of movement, and movements become smoother with increasing trial number in both younger and older subjects, although more rapidly so in younger subjects. These results using the proposed data analysis methods confirm current practice that age-matched subjects should be used for comparison to quantify recovery of arm movement during rehabilitation. The results also highlight the advantages that these methods offer relative to other reported measures.
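
    The two measures can be sketched numerically as below. This is a generic reading of the Krzanowski-style PCA similarity factor and of the 98%-energy Fourier mode count, not the authors' exact implementation:

```python
import numpy as np

def spca(X, Y, k=2):
    """PCA similarity factor between two trajectory data sets
    (rows = samples, columns = coordinates): the mean squared cosine
    between the top-k principal directions. 1 means identical subspaces."""
    def top_dirs(A, k):
        A = A - A.mean(axis=0)
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[:k]                 # rows are principal directions
    U, V = top_dirs(X, k), top_dirs(Y, k)
    M = U @ V.T                       # pairwise cosines between directions
    return float(np.sum(M ** 2) / k)

def modes_for_energy(signal, frac=0.98):
    """Smallest number of Fourier modes whose cumulative spectral energy
    reaches `frac` of the total: the smoothness proxy (fewer modes =
    smoother path)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    order = np.argsort(power)[::-1]   # largest modes first
    cum = np.cumsum(power[order]) / power.sum()
    return int(np.searchsorted(cum, frac) + 1)
```

A path identical to itself yields an SPCA of 1, and a pure sinusoid needs a single mode; jerky, pathologic reaching paths would score lower on the first measure and higher on the second.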

  17. iCub-HRI: A Software Framework for Complex Human–Robot Interaction Scenarios on the iCub Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Tobias Fischer

    2018-03-01

    Full Text Available Generating complex, human-like behavior in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, and touch detection), object manipulation (basic and complex motor actions), and social interaction (speech synthesis and joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human–robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behavior and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarizing themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

  18. Introduction of symbiotic human-robot-cooperation in the steel sector: an example of social innovation

    Science.gov (United States)

    Colla, Valentina; Schroeder, Antonius; Buzzelli, Andrea; Abbà, Dario; Faes, Andrea; Romaniello, Lea

    2018-05-01

    The introduction of new technologies that can support and empower human capabilities in a number of professional tasks, while reducing the need for cumbersome operations and the exposure to risk and occupational disease, is nowadays perceived as a must in any industrial field, process industry included. However, despite their relevant potential, new technologies are not always easy to introduce into the professional environment. A design procedure that takes into account workers' acceptance, needs and capabilities, together with continuing education and training of the personnel who must exploit the innovation, is as fundamental as technical reliability for the successful introduction of any new technology in a professional environment. An exemplary case is provided by symbiotic human-robot cooperation. In the steel sector, the difficulties in implementing symbiotic human-robot cooperation are greater than in the manufacturing sector, due to environmental conditions which in some cases are not favorable to robots. On the other hand, the opportunities and potential advantages are also greater, as robots could replace human operators in repetitive, heavy tasks, thereby improving workers' health and safety. The present paper illustrates the potential and opportunities of human-robot interaction and discusses how this approach can be included in a social innovation paradigm. Moreover, an example is provided of an ongoing project funded by the Research Fund for Coal and Steel, "ROBOHARSH", which aims at implementing such an approach in the steel industry on a very delicate task, i.e. the replacement of the refractory components of the ladle sliding gate.

  19. Studying Robots Outside the Lab

    DEFF Research Database (Denmark)

    Blond, Lasse

    As more and more robots enter our social world there is a strong need for further field studies of human-robot interaction. Based on a two-year ethnographic study of the implementation of a South Korean socially assistive robot in Danish elderly care, this paper argues that empirical… and ethnographic studies will enhance understandings of the dynamics of HRI. Furthermore, the paper emphasizes how users and the context of use matter to the integration of robots, as it is shown how roboticists are unable to control how their designs are implemented in practice and that the sociality of social robots is inscribed by its users in social practice. This paper can be seen as a contribution to studies of long-term HRI. It presents the challenges of robot adaptation in practice and discusses the limitations of the present conceptual understanding of human-robot relations. The ethnographic data…

  20. Information theory and robotics meet to study predator-prey interactions

    Science.gov (United States)

    Neri, Daniele; Ruberto, Tommaso; Cord-Cruz, Gabrielle; Porfiri, Maurizio

    2017-07-01

    Transfer entropy holds promise to advance our understanding of animal behavior, by affording the identification of causal relationships that underlie animal interactions. A critical step toward the reliable implementation of this powerful information-theoretic concept entails the design of experiments in which causal relationships could be systematically controlled. Here, we put forward a robotics-based experimental approach to test the validity of transfer entropy in the study of predator-prey interactions. We investigate the behavioral response of zebrafish to a fear-evoking robotic stimulus, designed after the morpho-physiology of the red tiger oscar and actuated along preprogrammed trajectories. From the time series of the positions of the zebrafish and the robotic stimulus, we demonstrate that transfer entropy correctly identifies the influence of the stimulus on the focal subject. Building on this evidence, we apply transfer entropy to study the interactions between zebrafish and a live red tiger oscar. The analysis of transfer entropy reveals a change in the direction of the information flow, suggesting a mutual influence between the predator and the prey, where the predator adapts its strategy as a function of the movement of the prey, which, in turn, adjusts its escape as a function of the predator motion. Through the integration of information theory and robotics, this study posits a new approach to study predator-prey interactions in freshwater fish.
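
    The core quantity can be sketched for discretized position series as below. The history length of one and the plug-in (frequency-count) estimator are simplifying assumptions; the study's actual estimator and symbolization are not specified here:

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Transfer entropy T(source -> target), in bits, for symbolic
    (already discretized) series with history length 1:
        sum over (x1, x, y) of p(x1, x, y) * log2[ p(x1|x, y) / p(x1|x) ]
    where x1 = target's next state, x = target's state, y = source's state."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs_xy = Counter(zip(target[:-1], source[:-1]))
    pairs_x1x = Counter(zip(target[1:], target[:-1]))
    singles_x = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (x1, x, y), c in triples.items():
        p_joint = c / n
        p_x1_given_xy = c / pairs_xy[(x, y)]
        p_x1_given_x = pairs_x1x[(x1, x)] / singles_x[x]
        te += p_joint * log2(p_x1_given_xy / p_x1_given_x)
    return te

# Example: the target copies the source with one step of delay,
# so the source's state fully predicts the target's next state.
src = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
tgt = [0] + src[:-1]   # delayed copy of the source
```

With the delayed-copy pair, information flows from `src` to `tgt` and the estimate is positive, while an uninformative (constant) source yields exactly zero; in the study, asymmetries of this quantity between the fish and stimulus trajectories indicate who is driving whom.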