WorldWideScience

Sample records for human collaborative robot

  1. Communication of Robot Status to Improve Human-Robot Collaboration

    Data.gov (United States)

    National Aeronautics and Space Administration — Future space exploration will require humans and robots to collaborate to perform all the necessary tasks. Current robots mostly operate separately from humans due...

  2. Industrial Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Philipsen, Mark Philip; Rehm, Matthias; Moeslund, Thomas B.

    2018-01-01

    In the future, robots are envisioned to work side by side with humans in dynamic environments, not only in production contexts but increasingly in societal contexts such as health care, education, and commerce. This will require robots to become socially accepted, to analyze human... intentions in meaningful ways, and to become proactive. It is our conviction that this can only be achieved on the basis of a tight combination of multimodal signal processing and AI techniques in real application contexts...

  3. Human-robot collaboration for a shared mission

    OpenAIRE

    Karami , Abir-Beatrice; Jeanpierre , Laurent; Mouaddib , Abdel-Illah

    2010-01-01

    We are interested in collaboration domains between a robot and a human partner in which the partners share a common mission without explicit communication about their plans. The decision process of the robot agent should account for the presence of its human partner, and the robot's planning should remain flexible to human comfort and to any changes in the shared environment. To solve the problem of human-robot collaboration with no communication, we present a model th...

  4. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects the sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, ... output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots...

  5. An Integrated Framework for Human-Robot Collaborative Manipulation.

    Science.gov (United States)

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
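
    The confidence-gated switching between reactive and proactive behavior described in this abstract can be sketched as a simple selector. The function names, the scalar signals, and the 0.7 threshold below are illustrative assumptions, not the authors' implementation:

```python
# Sketch of leader/follower switching: act proactively on predicted human
# motion when the predictor is confident, otherwise react to the table tilt.

def reactive_action(table_tilt):
    """Follower: command opposes measured tilt to keep the table horizontal."""
    return -table_tilt

def proactive_action(predicted_human_motion):
    """Leader: follow the predicted human motion directly."""
    return predicted_human_motion

def select_action(table_tilt, predicted_human_motion, confidence, threshold=0.7):
    """Choose behavior from the motion predictor's confidence measure."""
    if confidence >= threshold:
        return "leader", proactive_action(predicted_human_motion)
    return "follower", reactive_action(table_tilt)
```

    In this sketch the robot switches autonomously during the task simply because the confidence signal changes over time.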

  6. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot can predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888
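
    A minimal sketch of incrementally learned gesture-to-action associations in the spirit of the PIL framework described above: the robot counts which action follows each gesture and acts proactively only when one action clearly dominates. The counting scheme and the 0.8 threshold are assumptions for illustration, not the paper's model:

```python
from collections import defaultdict

class GestureActionModel:
    def __init__(self, proactive_threshold=0.8):
        # counts[gesture][action] = how often this pairing was observed
        self.counts = defaultdict(lambda: defaultdict(int))
        self.threshold = proactive_threshold

    def update(self, gesture, action):
        """Learn on the fly: record that `gesture` was followed by `action`."""
        self.counts[gesture][action] += 1

    def act(self, gesture):
        """Act proactively if one action dominates; otherwise wait (None)."""
        actions = self.counts[gesture]
        total = sum(actions.values())
        if total == 0:
            return None  # unknown gesture: wait for an instruction
        best, n = max(actions.items(), key=lambda kv: kv[1])
        return best if n / total >= self.threshold else None
```

    Returning None models the reactive fallback: the robot waits for an explicit instruction instead of guessing.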

  8. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop a concept for a new robot system, a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with the two essential technologies developed for the system. One is an autonomous navigation program with a human-intervention function, which is indispensable for human and robot collaboration. The other is a position-estimation method that combines a gyroscope with TV images to achieve the higher accuracy needed for safe navigation. Feasibility of the position-estimation method is evaluated by experiment and numerical simulation. (author)

  9. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  10. Human-Robot Teaming: Communication, Coordination, and Collaboration

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I will describe how NASA Ames has been studying how human-robot teams can increase the performance, reduce the cost, and increase the success of a variety of endeavors. The central premise of our work is that humans and robots should support one another in order to compensate for limitations of automation and manual control. This principle has broad applicability to a wide range of domains, environments, and situations. At the same time, however, effective human-robot teaming requires communication, coordination, and collaboration -- all of which present significant research challenges. I will discuss some of the ways that NASA Ames is addressing these challenges and present examples of our work involving planetary rovers, free-flying robots, and self-driving cars.

  11. Digital twins of human robot collaboration in a production setting

    DEFF Research Database (Denmark)

    Malik, Ali Ahmad; Bilberg, Arne

    2018-01-01

    This paper aims to present a digital twin framework to support the design, build and control of human-machine cooperation. In this study, computer simulations are used to develop a digital counterpart of a human-robot collaborative work environment for assembly work. The digital counterpart remains... updated during the life cycle of the production system by continuously mirroring the physical system, enabling quick and safe embedding of continuous improvements. The case of a manufacturing company with human-robot work teams is presented for developing and validating the digital twin framework...

  12. Well done, Robot! : the importance of praise and presence in human-robot collaboration

    NARCIS (Netherlands)

    Reichenbach, J.; Bartneck, C.; Carpenter, J.; Dautenhahn, K.

    2006-01-01

    This study reports on an experiment in which participants had to collaborate with either another human or a robot (partner). The robot would either be present in the room or only be represented on the participants' computer screen (presence). Furthermore, the participants' partner would either make

  13. The Age of Human-Robot Collaboration: Deep Sea Exploration

    KAUST Repository

    Khatib, Oussama

    2018-01-18

    The promise of oceanic discovery has intrigued scientists and explorers for centuries, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. Reaching these depths is imperative since factors such as pollution and deep-sea trawling increasingly threaten ecology and archaeological sites. These needs demand a system deploying human-level expertise at the depths, and yet remotely operated vehicles (ROVs) are inadequate for the task. To meet the challenge of dexterous operation at oceanic depths, in collaboration with KAUST's Red Sea Research Center and MEKA Robotics, Oussama Khatib and the team developed Ocean One, a bimanual humanoid robot that brings immediate and intuitive haptic interaction to oceanic environments. During this lecture, Oussama Khatib will talk about how, teaming with the French Ministry of Culture's Underwater Archaeology Research Department, they deployed Ocean One in an expedition in the Mediterranean to Louis XIV's flagship Lune, lying off the coast of Toulon at ninety-one meters. In the spring of 2016, Ocean One became the first robotic avatar to embody a human's presence at the seabed. Ocean One's journey in the Mediterranean marks a new level of marine exploration: Much as past technological innovations have impacted society, Ocean One's ability to distance humans physically from dangerous and unreachable work spaces while connecting their skills, intuition, and experience to the task promises to fundamentally alter remote work. Robotic avatars will search for and acquire materials, support equipment, build infrastructure, and perform disaster prevention and recovery operations - be it deep in oceans and mines, at mountain tops, or in space.

  14. Trends in control and decision-making for human-robot collaboration systems

    CERN Document Server

    Zhang, Fumin

    2017-01-01

    This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. A measure of human collaboration being necessary for the optimal operation of any robotic system, the contributors exploit a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how such human strengths as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments. The book focuses on quantitative methods and control design for guaranteed robot performance and a balanced human experience. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering: one-human-one-robot collaboration; one-human-multiple-robot collaboration; and human-swarm collaboration. Individual topic ar...

  15. Collaborative Tools for Mixed Teams of Humans and Robots

    National Research Council Canada - National Science Library

    Bruemmer, David J; Walton, Miles C

    2003-01-01

    .... Our approach has been to consider the air vehicles, ground robots and humans as team members with different levels of authority, different communication, processing, power and mobility capabilities...

  16. Admittance Control for Robot Assisted Retinal Vein Micro-Cannulation under Human-Robot Collaborative Mode

    Science.gov (United States)

    Zhang, He; Gonenc, Berk; Iordachita, Iulian

    2017-01-01

    Retinal vein occlusion is one of the most common retinovascular diseases. Retinal vein cannulation is a potentially effective treatment method for this condition that currently lies, however, at the limits of human capabilities. In this work, the aim is to use robotic systems and advanced instrumentation to alleviate these challenges, and assist the procedure via a human-robot collaborative mode based on our earlier work on the Steady-Hand Eye Robot and force-sensing instruments. An admittance control method is employed to stabilize the cannula relative to the vein and maintain it inside the lumen during the injection process. A pre-stress strategy is used to prevent the tip of the microneedle from slipping out of the vein during prolonged infusions, and the performance is verified through simulations. PMID:29607442
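
    The admittance behavior this record describes can be sketched in a few lines: the tool moves with a velocity proportional to the force the operator applies, so the robot follows intent while filtering hand tremor. The gain and time step below are illustrative assumptions, not values from the paper:

```python
# One discrete step of a scalar admittance law: v = gain * f.
def admittance_step(position, applied_force, gain=0.5, dt=0.01):
    """Return the new tool position after following the applied force."""
    velocity = gain * applied_force   # admittance: force maps to velocity
    return position + velocity * dt   # integrate over one control period
```

    With zero applied force the tool stays put, which is what stabilizes the cannula inside the lumen between the operator's deliberate motions.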

  18. Recognition and Prediction of Human Actions for Safe Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Bøgh, Simon; Ceballos, Iker

    Collaborative industrial robots are creating new opportunities for collaboration between humans and robots in shared workspaces. In order for such collaboration to be efficient, robots - as well as humans - need to have an understanding of the other's intentions and current ongoing action... In this work, we propose a method for learning, classifying, and predicting actions taken by a human. Our proposed method is based on the human skeleton model from a Kinect. For demonstration of our approach we chose a typical pick-and-place scenario; therefore, only the arms and upper body are considered... 15 joints in total. Based on the trajectories of these joints, different classes of motion are separated using partitioning around medoids (PAM). Subsequently, an SVM is trained on these classes to form a library of human motions. The approach allows run-time detection of when a new motion has been initiated...
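
    The clustering-then-classification pipeline sketched in this record can be illustrated with a tiny pure-Python version: trajectory feature vectors are grouped by partitioning around medoids (PAM), and new motions are labeled by their nearest medoid. A nearest-medoid classifier stands in here for the SVM stage of the paper; the greedy PAM variant, distances, and data are all illustrative assumptions:

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pam(points, k, iters=20):
    """Greedy PAM: choose k medoids minimizing total in-cluster distance."""
    medoids = list(points[:k])  # simple initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest medoid
            i = min(range(k), key=lambda j: dist(p, medoids[j]))
            clusters[i].append(p)
        new_medoids = []
        for c, m in zip(clusters, medoids):
            cand = c or [m]  # keep old medoid if a cluster emptied
            new_medoids.append(min(cand, key=lambda q: sum(dist(q, r) for r in cand)))
        if new_medoids == medoids:
            break  # converged
        medoids = new_medoids
    return medoids

def classify(point, medoids):
    """Label a new motion by the index of its nearest medoid."""
    return min(range(len(medoids)), key=lambda j: dist(point, medoids[j]))
```

    In the paper's setting the points would be 15-joint trajectory features rather than the toy 2-D vectors used here.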

  19. An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Peidong Liang

    2016-01-01

    Full Text Available Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve human and robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach to make a robot understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach that maps the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. Human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, while human motion is captured using a geometric vector approach based on the Kinect. A biomimetic controller in discrete time is employed to make the Baxter robot arm imitate human arm behaviors based on Baxter robot dynamics. An object-moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, and may have various applications in human-robot collaboration scenarios.
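
    Biomimetic controllers of this kind build on a discrete-time impedance law: the commanded force follows a spring-damper toward the human-demonstrated reference. The gains below are illustrative, not the paper's identified stiffness values:

```python
def impedance_force(x, v, x_ref, v_ref, k=100.0, d=20.0):
    """F = K (x_ref - x) + D (v_ref - v); K would be adapted from the human."""
    return k * (x_ref - x) + d * (v_ref - v)
```

    Adapting k online from the estimated human postural stiffness is the step that makes such a controller "biomimetic" rather than a fixed impedance.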

  20. Effective Human-Robot Collaborative Work for Critical Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to improve human-robot interaction (HRI) in order to enhance the capability of NASA critical missions. This research will focus two...

  1. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real-world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.

  3. Designing human-robot collaborations in industry 4.0: explorative case studies

    DEFF Research Database (Denmark)

    Kadir, Bzhwen A; Broberg, Ole; Souza da Conceição, Carolina

    2018-01-01

    We are experiencing an increase in human-robot interactions and the use of collaborative robots (cobots) in industrial work systems. To make full use of cobots, it is essential to understand emerging challenges and opportunities. In this paper, we analyse three successful industrial case studies... of cobots' implementation. We highlight the top three challenges and opportunities from the empirical evidence, relate them to the current literature on the topic, and use them to identify key design factors to consider when designing industrial work systems with human-robot collaboration...

  4. Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.

    Science.gov (United States)

    Lasota, Przemyslaw A; Shah, Julie A

    2015-02-01

    The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
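
    Fluency metrics of the kind this study reports (concurrent motion, human idle time, robot idle time) can be computed from per-agent activity intervals. This sketch samples the timeline at a fixed step; the interval representation and all names are illustrative, not the authors' instrumentation:

```python
def busy_at(intervals, t):
    """True if time t falls inside any (start, end) busy interval."""
    return any(s <= t < e for s, e in intervals)

def fluency_metrics(human, robot, task_end, step=0.1):
    """Sample the task timeline and return fluency fractions in [0, 1]."""
    n = int(task_end / step)
    concurrent = human_idle = robot_idle = 0
    for i in range(n):
        t = i * step
        h, r = busy_at(human, t), busy_at(robot, t)
        concurrent += h and r      # both agents moving at once
        human_idle += not h        # human waiting
        robot_idle += not r        # robot waiting
    return {
        "concurrent_motion": concurrent / n,
        "human_idle": human_idle / n,
        "robot_idle": robot_idle / n,
    }
```

    Comparing these fractions between the adaptive and baseline conditions is how percentage differences like those above would be obtained.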

  5. Sharing skills: using augmented reality for human-robot collaboration

    Science.gov (United States)

    Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger

    2004-05-01

    Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: Which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct and enhance them interactively. After a satisfying result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.

  6. Semi-manual mastoidectomy assisted by human-robot collaborative control - A temporal bone replica study.

    Science.gov (United States)

    Lim, Hoon; Matsumoto, Nozomu; Cho, Byunghyun; Hong, Jaesung; Yamashita, Makoto; Hashizume, Makoto; Yi, Byung-Ju

    2016-04-01

    The objective was to develop an otological robot that can protect important organs from being injured. We developed a five-degree-of-freedom robot for otological surgery. Unlike other robots reported previously, our robot does not replace the surgeon's procedures but instead uses human-robot collaborative control. The robot normally releases all of its actuators so that the surgeon can manipulate the drill within the robot's working area with minimal restriction. When the drill reaches a forbidden area, the surgeon feels as if the drill has hit a wall. When an engineer performed a mastoidectomy using the robot for assistance, the facial nerve in the segmented region was always protected by a margin of more than 2.5 mm, which was almost the same as the pre-set safety margin of 3 mm. Semi-manual drilling using human-robot collaborative control was feasible, and may hold a realistic prospect of clinical use in the near future.
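
    The "virtual wall" behavior this record describes can be sketched geometrically: the robot is passive inside the working area and projects any commanded drill position out of a forbidden region, here modeled as a sphere around a critical structure with a safety margin. The geometry and margin values are illustrative assumptions:

```python
def clamp_to_safe(drill, hazard_center, margin):
    """Project a commanded drill position out of the forbidden sphere."""
    d = [p - c for p, c in zip(drill, hazard_center)]
    norm = sum(x * x for x in d) ** 0.5
    if norm >= margin:
        return drill  # free motion: actuators stay released
    if norm == 0:
        raise ValueError("direction undefined at hazard center")
    scale = margin / norm  # push radially out to the margin surface
    return tuple(c + x * scale for c, x in zip(hazard_center, d))
```

    The "wall" the surgeon feels is exactly this projection: commands inside the margin are held on its boundary.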

  7. Human-robot collaborative navigation for autonomous maintenance management of nuclear installation

    International Nuclear Information System (INIS)

    Nugroho, Djoko Hari

    2002-01-01

    Development of human-robot collaborative navigation for autonomous maintenance management of nuclear installations has been conducted. The human-robot collaborative system is operated by switching between autonomous navigation and manual navigation that incorporates human intervention. Autonomous navigation paths are generated using a novel algorithm, the MLG method, based on Lozano-Pérez's visibility graph; the MLG optimizes for the shortest distance subject to safety constraints. Manual navigation is performed using robot teleoperation tools. Experiments on the MLG autonomous navigation system were conducted six times with varied 3-D starting-point and destination coordinates, and showed good performance in maneuvering the robot around obstacles without collision. Switching between navigation modes is handled by open/close commands over RS-232C, implemented in LabVIEW. (author)
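
    Visibility-graph planners of the kind this record describes reduce path planning to shortest-path search over a graph whose nodes are the start, the goal, and obstacle vertices. This sketch shows only that search step, using Dijkstra's algorithm; the toy graph and names are an illustrative stand-in, not the MLG method itself:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}; returns (cost, path)."""
    pq = [(0.0, start, [start])]  # priority queue ordered by path cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []  # goal unreachable
```

    In a planner like the MLG, edge costs would additionally encode safety constraints rather than pure Euclidean distance.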

  8. Social cobots: Anticipatory decision-making for collaborative robots incorporating unexpected human behaviors

    CSIR Research Space (South Africa)

    Can Görür, O

    2018-03-01

    Full Text Available We propose an architecture as a robot’s decision-making mechanism to anticipate a human’s state of mind, and so plan accordingly during a human-robot collaboration task. At the core of the architecture lies a novel stochastic decision...

  10. Novel trends in the assembly process as the results of human – the industrial robot collaboration

    Directory of Open Access Journals (Sweden)

    Holubek Radovan

    2017-01-01

    Full Text Available The contribution focuses on the creation of a concept proposal and simulation of an assembly system in which a human cooperates with an industrial robot. The aim of the research is to verify the feasibility of this human–robot cooperation in the assembly process on the basis of the created simulation. An important step in the design of this collaboration is the determination of rules for, and the safety of, the cooperation. The paper also presents the method of working with the selected software, its functionalities, and the sequence of steps in creating the simulation. The objective of the research is the evaluation of the concept proposal for the collaborative assembly system on the basis of the created simulation. The analysis and evaluation of the simulation confirm the feasibility and safety of human–robot cooperation and also verify that the layout and dimensions of the proposed workplace permit assembly by human and robot together.

  11. Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci.

    Science.gov (United States)

    Nyholm, Sven

    2017-07-18

    Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

  12. Human-Robot Collaboration Dynamic Impact Testing and Calibration Instrument for Disposable Robot Safety Artifacts.

    Science.gov (United States)

    Dagalakis, Nicholas G; Yoo, Jae Myung; Oeste, Thomas

    2016-01-01

    The Dynamic Impact Testing and Calibration Instrument (DITCI) is a simple instrument with a significant data collection and analysis capability that is used for the testing and calibration of biosimulant human tissue artifacts. These artifacts may be used to measure the severity of injuries caused in the case of a robot impact with a human. In this paper we describe the DITCI adjustable impact and flexible foundation mechanism, which allows the selection of a variety of impact force levels and foundation stiffness. The instrument can accommodate arrays of a variety of sensors and impact tools, simulating both real manufacturing tools and the testing requirements of standards setting organizations. A computer data acquisition system may collect a variety of impact motion, force, and torque data, which are used to develop a variety of mathematical model representations of the artifacts. Finally, we describe the fabrication and testing of human abdomen soft tissue artifacts, used to display the magnitude of impact tissue deformation. Impact tests were performed at various maximum impact force and average pressure levels.

  13. Developing Principles for Effective Human Collaboration with Free-Flying Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerial robots hold great promise in supporting human activities during space missions and terrestrial operations. For example, free-flying robots may automate...

  14. Modeling and Control of Collaborative Robot System using Haptic Feedback

    Directory of Open Access Journals (Sweden)

    Vivekananda Shanmuganatha

    2017-08-01

    Full Text Available When two robot systems can share understanding using any agreed knowledge, within the constraints of the system’s communication protocol, the approach may lead to a common improvement. This has prompted many new research questions in human-robot collaboration. We have built a framework capable of autonomous tracking and of performing table-top object manipulation with humans, and we have implemented two different activity models to trigger robot actions. The idea here is to explore collaborative systems and to develop a scheme for them to work in a collaborative environment, which has many benefits over a single, more complex system. In the paper, two robots that cooperate with each other are constructed. The cooperation linking the two robotic arms, the torque required, and the relevant parameters are analyzed. The purpose of this paper is thus to demonstrate a modular robot system which can serve as a base for aspects of collaborative robotics using haptics.

  15. A cadaver study of mastoidectomy using an image-guided human-robot collaborative control system.

    Science.gov (United States)

    Yoo, Myung Hoon; Lee, Hwan Seo; Yang, Chan Joo; Lee, Seung Hwan; Lim, Hoon; Lee, Seongpung; Yi, Byung-Ju; Chung, Jong Woo

    2017-10-01

    Surgical precision would be better achieved with the development of an anatomical monitoring and controlling robot system than by traditional surgery techniques alone. We evaluated the feasibility of robot-assisted mastoidectomy in terms of duration, precision, and safety. Human cadaveric study. We developed a multi-degree-of-freedom robot system for a surgical drill with a balancing arm. The drill system is manipulated by the surgeon, the motion of the drill burr is monitored by the image-guided system, and the brake is controlled by the robotic system. The system also includes an alarm as well as the brake to help avoid unexpected damage to vital structures. Experimental mastoidectomy was performed in 11 temporal bones of six cadavers. Parameters including duration and safety were assessed, as well as intraoperative damage, which was judged via pre- and post-operative computed tomography. The duration of mastoidectomy in our study was comparable with that required for chronic otitis media patients. Although minor damage, such as dura exposure without tearing, was noted, no critical damage to the facial nerve or other important structures was observed. When the brake system was set to 1 mm from the facial nerve, the postoperative average bone thickness over the facial nerve was 1.39, 1.41, 1.22, 1.41, and 1.55 mm in the lateral, posterior pyramidal and anterior, lateral, and posterior mastoid portions, respectively. Mastoidectomy can be successfully performed using our robot-assisted system while maintaining a pre-set limit of 1 mm in most cases. This system may thus be useful for more inexperienced surgeons. NA.

  16. Robotic and Human-Tended Collaborative Drilling Automation for Subsurface Exploration

    Science.gov (United States)

    Glass, Brian; Cannon, Howard; Stoker, Carol; Davis, Kiel

    2005-01-01

    Future in-situ lunar/martian resource utilization and characterization, as well as the scientific search for life on Mars, will require access to the subsurface and hence drilling. Drilling on Earth is hard - an art form more than an engineering discipline. Human operators listen and feel drill string vibrations coming from kilometers underground. Abundant mass and energy make it possible for terrestrial drilling to employ brute-force approaches to failure recovery and system performance issues. Space drilling will require intelligent and autonomous systems for robotic exploration and to support human exploration. Eventual in-situ resource utilization will require deep drilling with probable human-tended operation of large-bore drills, but initial lunar subsurface exploration and near-term ISRU will be accomplished with lightweight, rover-deployable or standalone drills capable of penetrating a few tens of meters in depth. These lightweight exploration drills have a direct counterpart in terrestrial prospecting and ore-body location, and will be designed to operate either human-tended or automated. NASA and industry now are acquiring experience in developing and building low-mass automated planetary prototype drills to design and build a pre-flight lunar prototype targeted for 2011-12 flight opportunities. A successful system will include development of drilling hardware, and automated control software to operate it safely and effectively. This includes control of the drilling hardware, state estimation of the hardware, the lithology being drilled, and the state of the hole, and potentially planning and scheduling software suitable for uncertain situations such as drilling. Given that humans on the Moon or Mars are unlikely to be able to spend protracted EVA periods at a drill site, both human-tended and robotic access to planetary subsurfaces will require some degree of standalone, autonomous drilling capability. Human-robotic coordination will be important

  17. Model-based systems engineering to design collaborative robotics applications

    NARCIS (Netherlands)

    Hernandez Corbato, Carlos; Fernandez-Sanchez, Jose Luis; Rassa, Bob; Carbone, Paolo

    2017-01-01

    Novel robot technologies are becoming available to automate more complex tasks, more flexibly, and collaborating with humans. Methods and tools are needed in the automation and robotics industry to develop and integrate this new breed of robotic systems. In this paper, the ISE&PPOOA

  18. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  19. Easy Reconfiguration of Modular Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper

    2016-01-01

    the production staff collaborating to perform common tasks. This change of environment imposes a much more dynamic lifecycle for the robot, which consequently requires new ways of interacting. This thesis investigates how the changeover to a new task on a collaborative robot can be performed by the shop floor...... operators already working alongside the robot. To effectively perform this changeover, the operator must both reconfigure the hardware of the robot and reprogram the robot to match the new task. To enable shop floor operators to quickly and intuitively program the robot, this thesis proposes the use...... of parametric, task-related robot skills with a manual parameterization method. Reconfiguring the hardware entails adding, removing, or modifying some of the robot’s components. This thesis investigates how software configurator tools can aid the operator in selecting appropriate hardware modules, and how agent...

  20. Distributed, Collaborative Human-Robotic Networks for Outdoor Experiments in Search, Identify and Track

    Science.gov (United States)

    2011-01-11

    and its variance σ²(Û_i) are determined: Û_i = û_i + P^{u,EN} (P^{EN})^{-1} [(E_jc, N_jc)^T − (ê_i, n̂_i)^T] (15), and σ²(Û_i) = P^u_i − P^{u,EN}_i (P^{EN}_i)^{-1} P^{EN,u}_i (16), where... screen; the operator can click a robot’s camera view to select it as the Focus Robot. The Focus Robot’s camera stream is enlarged and displayed in the

  1. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  2. Human-Robot Interaction: Status and Challenges.

    Science.gov (United States)

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and what are the challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  3. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  4. Human-Robot Planetary Exploration Teams

    Science.gov (United States)

    Tyree, Kimberly

    2004-01-01

    areas of our research are safety and crew time efficiency. For safety, our work involves enabling humans to reliably communicate with a robot while moving in the same workspace, and enabling robots to monitor and advise humans of potential problems. Voice, gesture, remote computer control, and enhanced robot intelligence are methods we are studying. For crew time efficiency, we are investigating the effects of assigning different roles to humans and robots in collaborative exploration scenarios.

  5. Physical Human Robot Interaction for a Wall Mounting Robot - External Force Estimation

    DEFF Research Database (Denmark)

    Alonso García, Alejandro; Villarmarzo Arruñada, Noelia; Pedersen, Rasmus

    2018-01-01

    The use of collaborative robots enhances human capabilities, leading to better working conditions and increased productivity. In building construction, such robots are needed, among other tasks, to install large glass panels, where the robot takes care of the heavy lifting part of the job while...

  6. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells

    Science.gov (United States)

    Marvel, Jeremy A.; Norcross, Rick

    2016-01-01

    We provide an overview and guidance for the Speed and Separation Monitoring methodology as presented in the International Organization for Standardization's technical specification 15066 on collaborative robot safety. Such functionality is provided by external, intelligent observer systems integrated into a robotic workcell. The SSM minimum protective distance function equation is discussed in detail, with consideration for the input values, implementation specifications, and performance expectations. We provide analyses and test results of the current equation, discuss considerations for implementing SSM in human-occupied environments, and provide directions for technological advancements toward standardization. PMID:27885312
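For context, the TS 15066 SSM protective separation distance takes the general form S_p = S_h + S_r + S_s + C + Z_d + Z_r: human travel during the robot's reaction and stopping time, robot travel during its reaction time, robot stopping distance, intrusion distance, and measurement uncertainties. A minimal sketch of this form, with illustrative assumed parameter values rather than the specification's normative ones:

```python
def protective_distance(v_h, v_r, s_stop, t_reaction, t_stop,
                        intrusion_c=0.2, z_d=0.05, z_r=0.02):
    """Sketch of the SSM minimum protective separation distance (metres).

    v_h: human approach speed (m/s); v_r: robot speed toward the human (m/s)
    s_stop: robot stopping distance (m); t_reaction / t_stop: robot reaction
    and stopping times (s). intrusion_c, z_d, z_r are illustrative values for
    the intrusion distance and position-measurement uncertainties.
    """
    s_h = v_h * (t_reaction + t_stop)  # human travel while robot reacts and stops
    s_r = v_r * t_reaction             # robot travel before braking begins
    return s_h + s_r + s_stop + intrusion_c + z_d + z_r

# e.g. a walking human (1.6 m/s) approaching a robot moving at 0.5 m/s
d = protective_distance(1.6, 0.5, s_stop=0.3, t_reaction=0.1, t_stop=0.3)
```

In an SSM workcell, if the measured human-robot separation falls below this value, the robot must reduce speed or execute a protective stop.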

  7. Human - Robot Proximity

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    The media and political/managerial levels focus on the opportunities to re-perform Denmark through digitization. Feeding assistive robotics is a welfare technology, relevant to citizens with low or no function in their arms. Despite national dissemination strategies, it proves difficult to recruit...... the study that took place as multi-sited ethnography at different locations in Denmark and Sweden. Based on desk research, observation of meals and interviews I examine socio-technological imaginaries and their practical implications. Human - robotics interaction demands engagement and understanding...

  8. Humans and Robots. Educational Brief.

    Science.gov (United States)

    National Aeronautics and Space Administration, Washington, DC.

    This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)

  9. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    In many applications multiple robots in Mobile Ad-hoc Networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size...

  10. A Plug and Produce Framework for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2017-01-01

    Collaborative robots are today ever more interesting in response to the increasing need for agile manufacturing equipment. Contrary to traditional industrial robots, collaborative robots are intended for working in dynamic environments alongside the production staff. To cope with the dynamic...... environment and workflow, new configuration and control methods are needed compared to those of traditional industrial robots. The new methods should enable shop floor operators to reconfigure the robot. This article presents a plug and produce framework for industrial collaborative robots. The article...... focuses on the control framework enabling quick and easy exchange of hardware modules as an approach to achieving plug and produce. To solve this, an agent-based system is proposed building on top of the robot operating system. The framework enables robot operating system packages to be adapted...

  11. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach for collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of the robots in implementing their tasks, especially when the robots navigate in crowded and dynamic environments, which include humans. Traditional collision avoidance methods deal with the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot, and this causes the people and the robot to become confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction. The person notifies the robot about the avoidance direction via interaction, and the robot searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects the navigation path autonomously, choosing the path closest to the goal location. The humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify these gestures: the first model is a Back-Propagation Neural Network (BPNN), and the second is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding the obstacles is implemented and integrated with the HRI system. The system is tested on the H20 robot from DrRobot Company (Canada), and a set of experiments was implemented to report the performance of the system in interacting with the human and avoiding
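The record does not give the paper's gesture features or network architecture; purely as an illustrative sketch, a minimal pure-Python back-propagation network (BPNN) classifying made-up two-dimensional gesture features might look like:

```python
import math
import random

def train_bpnn(samples, labels, hidden=4, epochs=3000, lr=0.5, seed=1):
    """Train a tiny one-hidden-layer network by back-propagation.

    samples: list of feature vectors; labels: 0/1 class labels.
    Returns a predict(x) -> 0/1 function. Illustrative only: the paper's
    actual features, architecture, and hyperparameters are assumptions.
    """
    random.seed(seed)
    n_in = len(samples[0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
                 for ws, b in zip(w1, b1)]
            o = sig(sum(w * hi for w, hi in zip(w2, h)) + b2)
            # backward pass (squared-error loss, sigmoid derivatives)
            d_o = (o - y) * o * (1 - o)
            for j in range(hidden):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])  # uses pre-update w2[j]
                w2[j] -= lr * d_o * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_o

    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(w1, b1)]
        return 1 if sig(sum(w * hi for w, hi in zip(w2, h)) + b2) > 0.5 else 0
    return predict
```

An SVM would replace the network with a maximum-margin separator over the same feature vectors; in practice both are typically taken from a library rather than hand-coded as here.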

  12. Humanoid Robots and Human Society

    OpenAIRE

    Bahishti, Adam A

    2017-01-01

    Almost every aspect of modern human life starting from the smartphone to the smart houses you live in has been influenced by science and technology. The field of science and technology has advanced throughout the last few decades. Among those advancements, robots have become significant by managing most of our day-to-day tasks and trying to get close to human lives. As robotics and autonomous systems flourish, human-robot relationships are becoming increasingly important. Recently humanoid ro...

  13. The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments

    NARCIS (Netherlands)

    Marconi, L.; Melchiorri, C.; Beetz, M.; Pangercic, D.; Siegwart, R.; Leutenegger, S.; Carloni, Raffaella; Stramigioli, Stefano; Bruyninckx, H.; Doherty, P.; Kleiner, A.; Lippiello, V.; Finzi, A.; Siciliano, B.; Sala, A.; Tomatis, N.

    2012-01-01

    The goal of the paper is to present the foreseen research activity of the European project “SHERPA”, whose activities will start officially on February 1st, 2013. The goal of SHERPA is to develop a mixed ground and aerial robotic platform to support search and rescue activities in a real-world hostile

  14. Human-machine Interface for Presentation Robot

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Ondroušek, V.

    2012-01-01

    Roč. 6, č. 2 (2012), s. 17-21 ISSN 1897-8649 Institutional research plan: CEZ:AV0Z20760514 Keywords : human-robot interface * mobile robot * presentation robot Subject RIV: JD - Computer Applications, Robotics

  15. Collaborative Robots and Knowledge Management - A Short Review

    Science.gov (United States)

    Mușat, Flaviu-Constantin; Mihu, Florin-Constantin

    2017-12-01

    Because customer requirements regarding quality, quantity, and delivery times at the lowest possible cost are ever higher, industry has had to develop automated solutions to meet them. Starting from the automated lines developed by Ford and Toyota, we now have automated and self-sustained working lines, which nowadays are possible using collaborative robots. By using a knowledge management system we can improve the development of the future of this area of research. This paper shows the benefits and the smart use of robots that perform manipulation activities, improving workplace ergonomics and the human-machine interaction in order to assist with parallel tasks and lower the physical effort of the human.

  16. Centralised versus Decentralised Control Reconfiguration for Collaborating Underwater Robots

    DEFF Research Database (Denmark)

    Furno, Lidia; Nielsen, Mikkel Cornelius; Blanke, Mogens

    2015-01-01

    The present paper introduces an approach to fault-tolerant reconfiguration for collaborating underwater robots. Fault-tolerant reconfiguration is obtained using the virtual actuator approach, Steen (2005). The paper investigates properties of a centralised versus a decentralised implementation an...... an underwater drill needs to be transported and positioned by three collaborating robots as part of an underwater autonomous operation....

  17. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is, that this transitioning should be done by the shop floor operators, not highly specialized...

  18. Skill Based Instruction of Collaborative Robots in Industrial Settings

    DEFF Research Database (Denmark)

    Schou, Casper; Andersen, Rasmus Skovgaard; Chrysostomou, Dimitrios

    2018-01-01

    During the past decades increasing need for more flexible and agile manufacturing equipment has spawned a growing interest in collaborative robots. Contrary to traditional industrial robots, collaborative robots are intended for operating alongside the production personnel in dynamic or semi...... several user studies, the usability of SBS and the task level programming approach has been demonstrated. SBS has been utilized in several international research projects where SBS has been deployed and tested in three real manufacturing settings. Collectively, the industrial exploitations have...

  19. Mentoring console improves collaboration and teaching in surgical robotics.

    Science.gov (United States)

    Hanly, Eric J; Miller, Brian E; Kumar, Rajesh; Hasser, Christopher J; Coste-Maniere, Eve; Talamini, Mark A; Aurora, Alexander A; Schenkman, Noah S; Marohn, Michael R

    2006-10-01

    One of the most significant limitations of surgical robots has been their inability to allow multiple surgeons and surgeons-in-training to engage in collaborative control of robotic surgical instruments. We report the initial experience with a novel two-headed da Vinci surgical robot that has two collaborative modes: the "swap" mode allows two surgeons to simultaneously operate and actively swap control of the robot's four arms, and the "nudge" mode allows them to share control of two of the robot's arms. The utility of the mentoring console operating in its two collaborative modes was evaluated through a combination of dry laboratory exercises and animal laboratory surgery. The results from surgeon-resident collaborative performance of complex three-handed surgical tasks were compared to results from single-surgeon and single-resident performance. Statistical significance was determined using Student's t-test. Collaborative surgeon-resident swap control reduced the time to completion of complex three-handed surgical tasks by 25% compared to single-surgeon operation of a four-armed da Vinci (P nudge mode was particularly useful for guiding a resident's hands during crucially precise steps of an operation (such as proper placement of stitches). The da Vinci mentoring console greatly facilitates surgeon collaboration during robotic surgery and improves the performance of complex surgical tasks. The mentoring console has the potential to improve resident participation in surgical robotics cases, enhance resident education in surgical training programs engaged in surgical robotics, and improve patient safety during robotic surgery.

  20. Velocity-curvature patterns limit human-robot physical interaction.

    Science.gov (United States)

    Maurice, Pauline; Huber, Meghan E; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration.
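The two-thirds power law referenced above relates tangential speed to path curvature as v = g·κ^(−1/3) (equivalently, angular speed A = g·C^(2/3)). A small sketch computing such a biological speed profile along an ellipse; the ellipse dimensions and gain are illustrative assumptions, not the study's apparatus values:

```python
import math

def ellipse_curvature(theta, a, b):
    """Curvature of the ellipse x = a*cos(theta), y = b*sin(theta)."""
    return a * b / (a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2) ** 1.5

def two_thirds_speed(theta, a=0.3, b=0.15, gain=1.0):
    """Speed prescribed by the two-thirds power law: v = gain * kappa**(-1/3)."""
    return gain * ellipse_curvature(theta, a, b) ** (-1.0 / 3.0)

# The profile slows in high-curvature regions (ends of the major axis, theta = 0)
# and speeds up along the flatter sides (theta = pi/2), as natural movement does.
```

A "non-biological" condition like the paper's can be obtained by replacing this profile with, for example, a constant-speed traversal of the same ellipse.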

  1. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated auditory perception capabilities and human-robot collaboration in sound event recognition.

  2. Human futures amongst robot teachers?

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Bhroin, Niamh Ni; Ess, Charles Melvin

    2017-01-01

    In 2009 the world’s first robot teacher, Saya, was introduced into a classroom. Saya could express six basic emotions and shout orders like 'be quiet'. Since 2009, instructional robot technologies have emerged around the world and it is estimated that robot teachers may become a regular...... technological feature in the classroom and even 'take over' from human teachers within the next ten to fifteen years.   The paper sets out to examine some of the possible ethical implications for human futures in relation to the imminent rise of robot teachers. This is done through combining perspectives...... on technology coming from design, science and technology, education, and philosophy (McCarthy & Wright, 2004; Jasanoff, 2016; Selwyn 2016; Verbeek, 2011). The framework calls attention to how particular robot teachers institute certain educational, experiential and existential terrains within which human...

  3. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specifically, the emphasis is on safe and natural motion and navigation issues. The first part of the work focuses on developing a robotic system, which estimates human interest in interacting......, lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our everyday environments, they need to become more intelligent, and behave safely and naturally to the humans in the environment. This thesis deals with making...... as being able to navigate safely around one person, the robots must also be able to navigate in environments with more people. This can be environments such as pedestrian streets, hospital corridors, train stations or airports. The developed human-aware navigation strategy is enhanced to formulate...

  4. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  5. Additive Manufacturing Cloud via Peer-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Yuan Yao

    2016-05-01

    Full Text Available When building a 3D printing cloud manufacturing platform, self-sensing and collaboration on manufacturing resources present challenging problems. This paper proposes a peer-robot collaboration framework to deal with these issues. Each robot combines heterogeneous additive manufacturing hardware and software, acting as an intelligent agent. Through collaboration with other robots, it forms a dynamic and scalable integrated manufacturing system. The entire distributed system is managed by rules that employ an internal rule engine, which supports rule conversion and conflict resolution. Two additive manufacturing service scenarios are designed to analyse the efficiency and scalability of the framework. Experiments show that the presented method performs well in tasks requiring large-scale access to resources and collaboration.
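    The rule-based management described in this record can be illustrated with a minimal forward-chaining rule engine. The rule names, priorities, and priority-based conflict resolution below are hypothetical, not the paper's actual engine.

```python
class Rule:
    """A named condition/action pair with a priority for conflict resolution."""
    def __init__(self, name, condition, action, priority=0):
        self.name, self.condition = name, condition
        self.action, self.priority = action, priority

class RuleEngine:
    """Tiny forward-chaining engine; conflicts resolved by priority order."""
    def __init__(self):
        self.rules = []

    def add(self, rule):
        self.rules.append(rule)

    def run(self, facts):
        fired = []
        # Conflict resolution: evaluate rules from highest to lowest priority.
        for rule in sorted(self.rules, key=lambda r: -r.priority):
            if rule.condition(facts):
                rule.action(facts)
                fired.append(rule.name)
        return fired

engine = RuleEngine()
engine.add(Rule("assign-idle", lambda f: f["printer_idle"],
                lambda f: f.update(job="assigned"), priority=1))
engine.add(Rule("queue-busy", lambda f: not f["printer_idle"],
                lambda f: f.update(job="queued"), priority=1))

facts = {"printer_idle": True}
print(engine.run(facts), facts["job"])  # ['assign-idle'] assigned
```

    In a peer-robot setting, each agent would run such an engine over its local view of shared manufacturing resources.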

  6. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services...... are provided to the application running on the robot. The services are generic because they can be used by many different applications, independent of the task to be solved. Likewise, specific services are requested from the underlying Virtual Machine, such as broadcast, multicast, and reliable unicast....... A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group...

  7. Robots and humans: synergy in planetary exploration

    Science.gov (United States)

    Landis, Geoffrey A.

    2004-01-01

    How will humans and robots cooperate in future planetary exploration? Are humans and robots fundamentally separate modes of exploration, or can humans and robots work together to synergistically explore the solar system? It is proposed that humans and robots can work together in exploring the planets by use of telerobotic operation to expand the function and usefulness of human explorers, and to extend the range of human exploration to hostile environments. Published by Elsevier Ltd.

  8. 1st AAU Workshop on Human-Centered Robotics

    DEFF Research Database (Denmark)

    The 2012 AAU Workshop on Human-Centered Robotics took place on 15 Nov. 2012, at Aalborg University, Aalborg. The workshop provides a platform for robotics researchers, including professors, PhD and Master students, to exchange their ideas and latest results. The objective is to foster closer...... interaction among researchers from multiple relevant disciplines in human-centered robotics and, consequently, to promote collaborations across departments of all faculties towards making our center a center of excellence in robotics. The workshop was a great success, with 13 presentations, attracting...... more than 45 participants from AAU, SDU, DTI and industrial companies as well. The proceedings contain 7 full papers selected from the full papers submitted afterwards on the basis of workshop abstracts. The papers represent major research development of robotics at AAU, including medical robots...

  9. Robot learning from human teachers

    CERN Document Server

    Chernova, Sonia

    2014-01-01

    Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn f

  10. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  11. The Creation of a Multi-Human, Multi-Robot Interactive Jam Session

    OpenAIRE

    Weinberg, Gil; Blosser, Brian; Mallikarjuna, Trishul; Raman, Aparna

    2009-01-01

    This paper presents an interactive and improvisational jam session, including human players and two robotic musicians. The project was developed in an effort to create novel and inspiring music through human-robot collaboration. The jam session incorporates Shimon, a newly-developed socially-interactive robotic marimba player, and Haile, a perceptual robotic percussionist developed in previous work. The paper gives an overview of the musical perception modules, adaptive improvisation modes an...

  12. Safe human-robot cooperation in an industrial environment

    OpenAIRE

    Pedrocchi N.; Vicentini F.; Matteo M.; Tosatti L.M.

    2013-01-01

    The standard EN ISO10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human‐robot hybrid systems could have a big economic benefit in small and medium sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among various requirements for collaborative workspaces, safety‐assurance involves two different application layers; ...

  13. Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle [Study of the acceptability conditions of human-robot collaboration using virtual reality]

    OpenAIRE

    Weistroffer , Vincent

    2014-01-01

    Either in the context of the industry or of everyday life, robots are becoming more and more present in our environment and are nowadays able to interact with humans. In industrial environments, robots now assist operators on the assembly lines for difficult and dangerous tasks. Robots and operators then need to share the same physical space (copresence) and to manage common tasks (collaboration). On the one hand, the safety of humans working near robots has to be guaranteed at all times....

  14. Safe Human-Robot Cooperation in an Industrial Environment

    Directory of Open Access Journals (Sweden)

    Nicola Pedrocchi

    2013-01-01

    Full Text Available The standard EN ISO10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human-robot hybrid systems could have a big economic benefit in small and medium sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among various requirements for collaborative workspaces, safety assurance involves two different application layers: the algorithms enabling safe space-sharing between humans and robots, and the enabling technologies for acquiring sensor-fusion data and analysing environmental data. This paper addresses both problems: a collision avoidance strategy allowing on-line re-planning of robot motion, and a safe network of unsafe devices as a suggested infrastructure for functional safety achievement.
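    The paper's collision-avoidance strategy is not reproduced in this record. As a minimal sketch in the spirit of collaborative-workspace safety, robot speed can be scaled with the measured human-robot separation; all thresholds below are hypothetical (real systems derive them from the separation formulas of ISO/TS 15066).

```python
def scaled_speed(distance_m, v_max=1.0, stop_dist=0.3, slow_dist=1.2):
    """Scale commanded robot speed with human-robot separation distance.

    Hypothetical thresholds: full stop inside stop_dist, linear ramp up
    to v_max at slow_dist, and full speed beyond (illustrative only).
    """
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= slow_dist:
        return v_max
    return v_max * (distance_m - stop_dist) / (slow_dist - stop_dist)

for d in (0.2, 0.5, 0.75, 1.5):
    print(f"separation {d:.2f} m -> commanded speed {scaled_speed(d):.2f} m/s")
```

    An on-line re-planner would invoke such a function in every control cycle, using the latest sensor-fusion estimate of the minimum separation.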

  15. Emotion based human-robot interaction

    Directory of Open Access Journals (Sweden)

    Berns Karsten

    2018-01-01

    Full Text Available Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm and body gestures or facial expressions can improve understanding of the robot's intention. On the other hand, by perceiving such mechanisms of a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception system of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, is presented in the context of human-robot interaction.

  16. Human centric object perception for service robots

    NARCIS (Netherlands)

    Alargarsamy Balasubramanian, A.C.

    2016-01-01

    The research interests and applicability of robotics have diversified and seen tremendous growth in recent years. There has been a shift from industrial robots operating in constrained settings to consumer robots working in dynamic environments closely associated with everyday human

  17. The ethics of human-robot relationships

    NARCIS (Netherlands)

    de Graaf, M.M.A.

    2015-01-01

    Currently, human-robot interactions are constructed according to the rules of human-human interactions inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this

  18. Human-Robot Interaction and Human Self-Realization

    DEFF Research Database (Denmark)

    Nørskov, Marco

    2014-01-01

    is to test the basis for this type of discrimination when it comes to human-robot interaction. Furthermore, the paper will take Heidegger's warning concerning technology as a vantage point and explore the possibility of human-robot interaction forming a praxis that might help humans to be with robots beyond...

  19. Towards the Verification of Human-Robot Teams

    Science.gov (United States)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices also fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  20. Designing, developing, and deploying systems to support human-robot teams in disaster response

    NARCIS (Netherlands)

    Kruijff, G.J.M.; Kruijff-Korbayová, I.; Keshavdas, S.; Larochelle, B.; Janíček, M.; Colas, F.; Liu, M.; Pomerleau, F.; Siegwart, R.; Neerincx, M.A.; Looije, R.; Smets, N.J.J.M.; Mioch, T.; Diggelen, J. van; Pirri, F.; Gianni, M.; Ferri, F.; Menna, M.; Worst, R.; Linder, T.; Tretyakov, V.; Surmann, H.; Svoboda, T.; Reinštein, M.; Zimmermann, K.; Petříček, T.; Hlaváč, V.

    2014-01-01

    This paper describes our experience in designing, developing and deploying systems for supporting human-robot teams during disaster response. It is based on R&D performed in the EU-funded project NIFTi. NIFTi aimed at building intelligent, collaborative robots that could work together with humans in

  1. A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Directory of Open Access Journals (Sweden)

    Andrea Cirillo

    2017-01-01

    Full Text Available Safety of human-robot physical interaction is enabled not only by suitable robot control strategies but also by suitable sensing technologies. For example, if distributed tactile sensors were available on the robot, they could be used not only to detect unintentional collisions, but also as a human-machine interface by enabling a new mode of social interaction with the machine. Building on their previous work, the authors developed a conformable distributed tactile sensor that can be easily conformed to the different parts of the robot body. Its ability to estimate contact force components and to provide a tactile map with an accurate spatial resolution enables the robot to handle both unintentional collisions in safe human-robot collaboration tasks and intentional touches where the sensor is used as a human-machine interface. In this paper, the authors present the characterization of the proposed tactile sensor and show how it can also be exploited to recognize haptic tactile gestures, by tailoring recognition algorithms, well known in the image processing field, to the case of tactile images. In particular, a set of haptic gestures has been defined to test three recognition algorithms on a group of 20 users. The paper demonstrates how the same sensor originally designed to manage unintentional collisions can be successfully used also as a human-machine interface.
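    Treating the tactile map as a low-resolution image, a gesture classifier can use image-style features such as contact area and centroid motion. The sketch below is illustrative only; the gesture set and thresholds are assumptions, not the recognition algorithms evaluated in the paper.

```python
import numpy as np

def classify_touch(frames, area_thresh=3, move_thresh=2.0):
    """Rough gesture classifier over a sequence of tactile 'images'.

    frames: list of 2D arrays of contact pressure. Hypothetical logic:
    a small stationary contact is a 'poke', a moving centroid is a
    'swipe', and a large static contact area is a 'press'.
    """
    centroids, areas = [], []
    for f in frames:
        ys, xs = np.nonzero(f > 0)  # active taxels
        if len(xs) == 0:
            continue
        centroids.append((xs.mean(), ys.mean()))
        areas.append(len(xs))
    if not centroids:
        return "none"
    travel = np.hypot(centroids[-1][0] - centroids[0][0],
                      centroids[-1][1] - centroids[0][1])
    if travel > move_thresh:
        return "swipe"
    return "press" if max(areas) > area_thresh else "poke"

# A contact sliding across an 8x8 taxel grid reads as a swipe.
frames = []
for step in range(5):
    f = np.zeros((8, 8))
    f[3, step + 1] = 1.0
    frames.append(f)
print(classify_touch(frames))  # swipe
```

    Richer recognizers would run standard image-processing pipelines (filtering, blob tracking, template matching) on the same tactile frames.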

  2. Human-Robot Teams for Unknown and Uncertain Environments

    Science.gov (United States)

    Fong, Terry

    2015-01-01

    Human-robot interaction, often referred to as HRI, is the study of interactions between humans and robots. It is a multidisciplinary field with contributions from human-computer interaction and artificial intelligence.

  3. Framework to Implement Collaborative Robots in Manual Assembly: A Lean Automation Approach

    DEFF Research Database (Denmark)

    Malik, Ali Ahmad; Bilberg, Arne

    The recent proliferation of smart manufacturing technologies has given rise to the concept of hybrid automation for assembly systems, utilizing the best of humans and robots in combination. Based on the ability to work alongside human workers, the next generation of industrial robots (or robotics 2...... of virtual simulations is discussed for validation and optimization of the human-robot work environment....

  4. Social robots from a human perspective

    CERN Document Server

    Taipale, Sakari; Sapio, Bartolomeo; Lugano, Giuseppe; Fortunati, Leopoldina

    2015-01-01

    Addressing several issues that explore the human side of social robots, this book asks from a social and human scientific perspective what a social robot is and how we might come to think about social robots in the different areas of everyday life. Organized around three sections that deal with Perceptions and Attitudes to Social Robots, Human Interaction with Social Robots, and Social Robots in Everyday Life, the book explores the idea that even if technical problems related to robot technologies can be continuously solved from a machine perspective, what kind of machine do we want to have and use in our daily lives? Experiences from previously widely adopted technologies, such as smartphones, hint that robot technologies could potentially be absorbed into the everyday lives of humans in such a way that it is the human that determines the human-machine interaction. In a similar way to how today’s information and communication technologies were first designed for professional/industrial use, but which soon wer...

  5. Automation and robotics human performance

    Science.gov (United States)

    Mah, Robert W.

    1990-01-01

    The scope of this report is limited to the following: (1) assessing the feasibility of the assumptions for crew productivity during the intra-vehicular activities and extra-vehicular activities; (2) estimating the appropriate level of automation and robotics to accomplish balanced man-machine, cost-effective operations in space; (3) identifying areas where conceptually different approaches to the use of people and machines can leverage the benefits of the scenarios; and (4) recommending modifications to scenarios or developing new scenarios that will improve the expected benefits. The FY89 special assessments are grouped into the five categories shown in the report. The high level system analyses for Automation & Robotics (A&R) and Human Performance (HP) were performed under the Case Studies Technology Assessment category, whereas the detailed analyses for the critical systems and high leverage development areas were performed under the appropriate operations categories (In-Space Vehicle Operations or Planetary Surface Operations). The analysis activities planned for the Science Operations technology areas were deferred to FY90 studies. The remaining activities such as analytic tool development, graphics/video demonstrations and intelligent communicating systems software architecture were performed under the Simulation & Validations category.

  6. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  7. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

    Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration in which the operator delegates tasks to the drone for exploration and visual recognition and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy to detect anomalies on the surface.
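    The delegation pattern described here, where the drone requests operator assistance under uncertainty, can be sketched as a confidence-thresholded loop. The function names and stubs below are hypothetical illustrations, not part of Aerostack.

```python
def inspect(regions, classifier, confidence_thresh=0.8, ask_operator=None):
    """Semi-autonomous inspection loop (hypothetical interface).

    classifier(region) -> (label, confidence). When confidence falls
    below the threshold, the decision is delegated to the operator.
    """
    results = {}
    for region in regions:
        label, conf = classifier(region)
        if conf < confidence_thresh and ask_operator is not None:
            label = ask_operator(region, label, conf)  # delegate to human
        results[region] = label
    return results

# Stubbed classifier/operator to demonstrate the delegation pattern.
def fake_classifier(region):
    return ("crack", 0.95) if region == "panel-A" else ("unknown", 0.40)

def fake_operator(region, label, conf):
    return "corrosion"  # operator overrides the low-confidence guess

print(inspect(["panel-A", "panel-B"], fake_classifier,
              ask_operator=fake_operator))
```

    The design keeps the robot autonomous on confident detections while routing only ambiguous cases to the human, which is what raises the effective degree of autonomy.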

  8. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    BIROUAS Flaviu Ionut

    2016-11-01

    Full Text Available This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  9. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    OpenAIRE

    BIROUAS Flaviu Ionut; NILGESZ Arnold

    2016-01-01

    This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  10. From robot to human grasping simulation

    CERN Document Server

    León, Beatriz; Sancho-Bru, Joaquin

    2013-01-01

    The human hand and its dexterity in grasping and manipulating objects are some of the hallmarks of the human species. For years, anatomic and biomechanical studies have deepened the understanding of the human hand’s functioning and, in parallel, the robotics community has been working on the design of robotic hands capable of manipulating objects with a performance similar to that of the human hand. However, although many researchers have partially studied various aspects, to date there has been no comprehensive characterization of the human hand’s function for grasping and manipulation of

  11. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

    Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. 
In the proposed

  12. User localization during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, F; Gorostiza, Javi F; Malfaz, María; Salichs, Miguel A

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requirements for natural interaction, both human-human and human-robot, is an adequate spatial arrangement between the interlocutors, that is, being oriented and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable placement of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study was conducted with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented.
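    One common way to fuse visual and acoustic bearing estimates in such multimodal localization systems is inverse-variance weighting. The sensor model below is an assumption for illustration, not Maggie's actual fusion method.

```python
import math

def fuse_bearings(visual_deg, visual_var, audio_deg, audio_var):
    """Inverse-variance weighted fusion of two bearing estimates (degrees).

    Hypothetical sensor model: each modality reports an angle and a
    variance; the fused estimate weights the more certain modality more.
    Angles are averaged via unit vectors to handle wrap-around at 360.
    """
    wv, wa = 1.0 / visual_var, 1.0 / audio_var

    def unit(deg):
        r = math.radians(deg)
        return math.cos(r), math.sin(r)

    xv, yv = unit(visual_deg)
    xa, ya = unit(audio_deg)
    x = (wv * xv + wa * xa) / (wv + wa)
    y = (wv * yv + wa * ya) / (wv + wa)
    return math.degrees(math.atan2(y, x))

# Vision (lower variance here) dominates the fused user bearing.
print(round(fuse_bearings(10.0, 2.0, 30.0, 10.0), 1))
```

    The fused bearing then feeds the proxemics logic that positions the robot at a socially appropriate distance and orientation.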

  13. A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    Science.gov (United States)

    Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert

    2006-01-01

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

  14. Humanlike Robots - Synthetically Mimicking Humans

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2012-01-01

    Nature has inspired many inventions, and the field of technology based on mimicking or drawing inspiration from nature, widely known as biomimetics, is increasingly leading to many new capabilities. There are numerous examples of biomimetic successes, including the copying of fins for swimming and flight inspired by insects and birds. More and more commercial implementations of biomimetics are appearing and behaving lifelike, and applications are emerging that are important to our daily life. Making humanlike robots is the ultimate challenge to biomimetics; for many years it was considered science fiction, but such robots are becoming an engineering reality. Advances in producing such robots allow them to perform impressive functions and tasks. The development of such robots involves addressing many challenges and raises concerns related to fear of their application implications and potential ethical issues. In this paper, the state of the art of humanlike robots, potential applications, and challenges are reviewed.

  15. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    Directory of Open Access Journals (Sweden)

    Fahed Awad

    2018-01-01

    Full Text Available Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  16. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    Science.gov (United States)

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  17. Sex Robots: Between Human and Artificial

    OpenAIRE

    Richardson, Kathleen

    2017-01-01

    Despite a surplus of human beings in the world (new estimates total seven and a half billion), we appear to be at the start of an attachment crisis - a crisis in how human beings make intimate relationships. Enter the sex robots, built out of the bodies of sex dolls to help humans, particularly males, escape their inability to connect. What does the rise of sex robots tell us about the way that women and girls are imagined: are they persons or property? And to what extent is porn, prostitution and ...

  18. Generating human-like movements on an anthropomorphic robot using an interior point method

    Science.gov (United States)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model to address the generation of more complex movement sequences posed by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated into our MATLAB simulator of an anthropomorphic robot.
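As kinematic background on what makes a reaching movement "human-like", the classic minimum-jerk profile of Flash and Hogan is often used as a benchmark. It is sketched below for reference only; it is not the posture-based model or the IPOPT formulation used in the paper.

```python
def minimum_jerk(x0, x1, T, t):
    """Minimum-jerk position profile (Flash & Hogan, 1985), a standard
    kinematic benchmark for human-like point-to-point reaching.
    Background illustration only, not the paper's planning model."""
    s = t / T  # normalised time in [0, 1]
    return x0 + (x1 - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# A 1-second reach from 0.0 m to 0.3 m: the profile starts and ends at
# rest, with the characteristic bell-shaped velocity peak at mid-movement.
trajectory = [minimum_jerk(0.0, 0.3, 1.0, t / 100) for t in range(101)]
```

The polynomial's boundary conditions (zero velocity and acceleration at both ends) are what give the smooth, human-looking profile that optimization-based planners are often evaluated against.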

  19. Interactions between Humans and Robots

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2013-01-01

    … and explains the relationships and dependencies that exist between them. The four main factors that define the properties of a robot, and therefore the interaction, are distributed in two dimensions: (1) Intelligence (Control - Autonomy), and (2) Perspective (Tool - Medium). Based on these factors, we…

  20. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
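For context on the differential drive steering mentioned above, a dead-reckoning pose update from standard unicycle kinematics can be sketched as follows. This is generic textbook kinematics with an Euler integration step, not code from the HFTR study; the wheel base and time step values are illustrative.

```python
import math

def diff_drive_step(pose, v_left, v_right, wheel_base, dt):
    """One dead-reckoning update for a differential-drive robot.

    pose = (x, y, theta); wheel speeds in m/s; wheel_base in metres.
    Standard unicycle kinematics integrated with a simple Euler step.
    """
    x, y, theta = pose
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # turn rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight for 1 s at 0.5 m/s: equal wheel speeds, 100 steps of 10 ms.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = diff_drive_step(pose, 0.5, 0.5, 0.3, 0.01)
```

In a ROS-based system this odometry estimate is what SLAM then corrects against sensor observations such as the Kinect depth data.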

  1. The future African workplace: The use of collaborative robots in manufacturing

    Directory of Open Access Journals (Sweden)

    Andre P. Calitz

    2017-07-01

    Full Text Available Orientation: Industry 4.0 promotes technological innovations and human–robot collaboration (HRC). Human–robot interaction (HRI) and HRC on the manufacturing assembly line have been implemented in numerous advanced production environments worldwide. Collaborative robots (Cobots) are increasingly being used as collaborators with humans in factory production and assembly environments. Research purpose: The purpose of the research is to investigate the current use and future implementation of Cobots worldwide and their specific impact on the African workforce. Motivation for the study: Exploring the gap that exists between the international implementation of Cobots and the potential implementation and impact on the African manufacturing and assembly environment, and specifically on the African workforce. Research design, approach and method: The study features a qualitative research design. An open-ended question survey was conducted amongst leading manufacturing companies in South Africa in order to determine the status and future implementation of Cobot practices. Thematic analysis and content analysis were conducted using AtlasTi. Main findings: The findings indicate that the African businesses were aware of the international business trends regarding Cobot implementation and the possible impact of Cobots on the African workforce. Factors specifically highlighted in this study are fear of retrenchment, human–Cobot trust and the African culture. Practical implications and value-add: This study provides valuable background on the international status of Cobot implementation and the possible impact on the African workforce. The study highlights the importance of building employee trust, providing the relevant training and addressing the fear of retrenchment amongst employees.

  2. Effect of cognitive biases on human-robot interaction: a case study of robot's misattribution

    OpenAIRE

    Biswas, Mriganka; Murray, John

    2014-01-01

    This paper presents a model for developing long-term human-robot interactions and social relationships based on the principle of 'human' cognitive biases applied to a robot. The aim of this work is to study how a robot influenced with human ‘misattribution’ helps to build better human-robot interactions than unbiased robots. The results presented in this paper suggest that it is important to know the effect of cognitive biases in human characteristics and interactions in order to better u...

  3. Towards collaboration between professional caregivers and robots - A preliminary study

    OpenAIRE

    Malaisé , Adrien; Nertomb , Sophie; Charpillet , François; Ivaldi , Serena

    2016-01-01

    International audience; In this paper, we address the question of which potential uses of a robot in a health-care environment are imagined by people who are not experts in robotics, and how these people imagine teaching new movements to a robot. We report on the preliminary results of our investigation, in which we conducted 40 interviews with non-experts in robotics and a focus group with professional caregivers.

  4. Exploring child-robot engagement in a collaborative task

    NARCIS (Netherlands)

    Zaga, Cristina; Truong, Khiet Phuong; Lohse, M.; Evers, Vanessa

    Imagine a room with toys scattered on the floor and a robot that is motivating a small group of children to tidy up. This scenario poses real-world challenges for the robot, e.g., the robot needs to navigate autonomously in a cluttered environment, it needs to classify and grasp objects, and it

  5. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects type of symbology (CG and SG) has on operator tasks performance and attention allocation during teleoperation of a robot arm. 
The second study expanded on the first study by evaluating the effects of the type of

  6. Human-Robot Teamwork in USAR Environments: The TRADR Project

    NARCIS (Netherlands)

    Greeff, J. de; Hindriks, K.; Neerincx, M.A.; Kruijff-Korbayova, I.

    2015-01-01

    The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search and rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled,

  7. Human Factors and Robotics: Current Status and Future Prospects.

    Science.gov (United States)

    Parsons, H. McIlvaine; Kearsley, Greg P.

    The principal human factors engineering issue in robotics is the division of labor between automation (robots) and human beings. This issue reflects a prime human factors engineering consideration in systems design--what equipment should do and what operators and maintainers should do. Understanding of capabilities and limitations of robots and…

  8. Socially Impaired Robots: Human Social Disorders and Robots' Socio-Emotional Intelligence

    OpenAIRE

    Vitale, Jonathan; Williams, Mary-Anne; Johnston, Benjamin

    2016-01-01

    Social robots need intelligence in order to safely coexist and interact with humans. Robots without functional abilities in understanding others and unable to empathise might be a societal risk and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting...

  9. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.; Demircan, E.; De Sapio, V.; Sentis, L.; Besier, T.; Delp, S.

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  10. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.

    2009-05-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  11. Next generation light robotic

    DEFF Research Database (Denmark)

    Villangca, Mark Jayson; Palima, Darwin; Banas, Andrew Rafael

    2017-01-01

    Conventional robotics provides machines and robots that can replace and surpass human performance in repetitive, difficult, and even dangerous tasks at industrial assembly lines, hazardous environments, or even at remote planets. A new class of robotic systems no longer aims to replace humans with so-called automatons but, rather, to create robots that can work alongside human operators. These new robots are intended to collaborate with humans—extending their abilities—from assisting workers on the factory floor to rehabilitating patients in their homes. In medical robotics, robot-assisted surgery imbibes surgeons with superhuman abilities and gives the expression “surgical precision” a whole new meaning. Still in its infancy, much remains to be done to improve human-robot collaboration both in realizing robots that can operate safely with humans and in training personnel that can work…

  12. From Human-Computer Interaction to Human-Robot Social Interaction

    OpenAIRE

    Toumi, Tarek; Zidani, Abdelmadjid

    2014-01-01

    Human-Robot Social Interaction has become one of the most active research fields, in which researchers from different areas propose solutions and directives leading robots to improve their interactions with humans. In this paper we propose to introduce works in both human-robot interaction and human-computer interaction and to build a bridge between them, i.e., to integrate emotion and capability concepts of the robot into the human-computer model so that it becomes adequate for human-robot interaction, and to discuss chall…

  13. Remote radiation mapping and preliminary intervention using collaborating (European and Russian) mobile robots

    International Nuclear Information System (INIS)

    Piotrowski, L.; Trouville, B.; Halbach, M.; Sidorkin, N.

    1996-12-01

    The primary objective of the IMPACT project is to develop a light-weight and inexpensive mobile robot that can be used for rapid inspection missions within nuclear power plants. These interventions are to cover normal, incident and accident situations and aim at primary reconnaissance (or 'data collecting') missions. The IMPACT robot was demonstrated (April 1996) in a realistic mission at the Russian nuclear plant SMOLENSK. The demonstration, composed of 2 independent but consecutive missions, was held in a radioactive zone near turbine no. 4 of Unit 2: remote radiation mapping with localisation of radioactive sources by the IMPACT robot equipped with a (Russian) gamma-radiation sensor; deployment of a Russian intervention robot for the construction of a protective lead shield around one of the identified sources and verification that the ambient radiation level had been reduced. This mission was executed remotely by 2 mobile robots working in collaboration: a NIKIMT robot equipped with a manipulator arm and carrying lead bricks, and the IMPACT robot of mission I (radiation measurements and 'side-observer'). This manuscript describes (a) the technical characteristics of the IMPACT reconnaissance robot (3-segmented, caterpillar-tracked body; 6 video cameras placed around the mobile platform with simultaneous presentation of up to 4 video images at the control post; ability to remotely detach one of the robot's segments, i.e. the robot can divide itself into 2 separate mobile robots) and (b) the SMOLENSK demonstration. (author)

  14. Robotic hip arthroscopy in human anatomy.

    Science.gov (United States)

    Kather, Jens; Hagen, Monika E; Morel, Philippe; Fasel, Jean; Markar, Sheraz; Schueler, Michael

    2010-09-01

    Robotic technology offers technical advantages that might offer new solutions for hip arthroscopy. Two hip arthroscopies were performed in human cadavers using the da Vinci surgical system. During both surgeries, a robotic camera and 5 or 8 mm da Vinci trocars with instruments were inserted into the hip joint for manipulation. Introduction of cameras and working instruments, docking of the robotic system and instrument manipulation was successful in both cases. The long articulating area of 5 mm instruments limited movements inside the joint; an 8 mm instrument with a shorter area of articulation offered an improved range of motion. Hip arthroscopy using the da Vinci standard system appears a feasible alternative to standard arthroscopy. Instruments and method of application must be modified and improved before routine clinical application but further research in this area seems justified, considering the clinical value of such an approach. Copyright 2010 John Wiley & Sons, Ltd.

  15. Human-Robot Teams Informed by Human Performance Moderator Functions

    Science.gov (United States)

    2012-08-29

    …performance factors that affect the ability of a human to drive at night, which include the eyesight of the driver and the fatigue level of the driver… human factors are factors that affect the performance of an individual… factors affecting trust in human-robot interaction… development of NASA-TLX (Task…

  16. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  17. The Architecture of Children's Use of Language and Tools When Problem Solving Collaboratively with Robotics

    Science.gov (United States)

    Mills, Kathy A.; Chandra, Vinesh; Park, Ji Yong

    2013-01-01

    This paper demonstrates, following Vygotsky, that language and tool use has a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children's collaborative problem solving with robotics programming…

  18. Group Tasks, Activities, Dynamics, and Interactions in Collaborative Robotics Projects with Elementary and Middle School Children

    Science.gov (United States)

    Yuen, Timothy T.; Boecking, Melanie; Stone, Jennifer; Tiger, Erin Price; Gomez, Alvaro; Guillen, Adrienne; Arreguin, Analisa

    2014-01-01

    Robotics provide the opportunity for students to bring their individual interests, perspectives and areas of expertise together in order to work collaboratively on real-world science, technology, engineering and mathematics (STEM) problems. This paper examines the nature of collaboration that manifests in groups of elementary and middle school…

  19. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

    This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.

  20. Accelerating Robot Development through Integral Analysis of Human-Robot Interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2007-01-01

    The development of interactive robots is a complicated process, involving a plethora of psychological, technical, and contextual influences. To design a robot capable of operating "intelligently" in everyday situations, one needs a profound understanding of human-robot interaction (HRI). We propose

  1. Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.

    Science.gov (United States)

    Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos

    2017-06-05

    Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required both for safety and for increased control performance of the device, with a more comfortable experience for the user. In order to provide efficient interaction feedback between the user and the rehabilitation device, high performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated to measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest and also has movable parts in order to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.
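Crosstalk limitation of the kind described above is typically finished in software by inverting a calibration matrix identified from known loads. The two-axis sketch below illustrates that generic decoupling step with made-up numbers; it is not the sensor's actual calibration procedure or data.

```python
def invert2(m):
    """Invert a 2x2 matrix given as nested tuples."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

# Assumed calibration matrix mapping true loads (Fx, Fy) to raw channel
# readings, with 5 % crosstalk between channels (illustrative numbers).
C = ((1.00, 0.05),
     (0.05, 1.00))
C_inv = invert2(C)

def decouple(raw):
    """Recover (Fx, Fy) from raw channel readings via the inverse
    calibration matrix, compensating inter-axis crosstalk."""
    rx, ry = raw
    return (C_inv[0][0] * rx + C_inv[0][1] * ry,
            C_inv[1][0] * rx + C_inv[1][1] * ry)

# A pure 10 N Fx load leaks into the y channel before compensation.
raw = (C[0][0] * 10.0, C[1][0] * 10.0)   # raw readings (10.0, 0.5)
fx, fy = decouple(raw)
```

For a real multi-axis sensor the matrix is larger and identified by least squares over many known loads, but the decoupling step has the same shape.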

  2. Movement coordination in applied human-human and human-robot interaction

    DEFF Research Database (Denmark)

    Schubö, Anna; Vesper, Cordula; Wiesbeck, Mathey

    2007-01-01

    The present paper describes a scenario for examining mechanisms of movement coordination in humans and robots. It is assumed that coordination can best be achieved when behavioral rules that shape movement execution in humans are also considered for human-robot interaction. Investigating and describing human-human interaction in terms of goal-oriented movement coordination is considered an important and necessary step for designing and describing human-robot interaction. In the present scenario, trajectories of hand and finger movements were recorded while two human participants performed … coordination were affected. Implications for human-robot interaction are discussed.

  3. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  4. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high-resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  5. Multiagent Modeling and Simulation in Human-Robot Mission Operations Work System Design

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; Sims, Michael H.; Shafto, Michael (Technical Monitor)

    2001-01-01

    This paper describes a collaborative multiagent modeling and simulation approach for designing work systems. The Brahms environment is used to model mission operations for a semi-autonomous robot mission to the Moon at the work practice level. It shows the impact of human-decision making on the activities and energy consumption of a robot. A collaborative work systems design methodology is described that allows informal models, created with users and stakeholders, to be used as input to the development of formal computational models.

  6. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  7. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  8. Collaborative Research in the Digital Humanities

    CERN Document Server

    Deegan, Marilyn

    2012-01-01

    Collaboration within digital humanities is both a pertinent and a pressing topic as the traditional mode of the humanist, working alone in his or her study, is supplemented by explicitly co-operative, interdependent and collaborative research. This is particularly true where computational methods are employed in large-scale digital humanities projects. This book, which celebrates the contributions of Harold Short to this field, presents fourteen essays by leading authors in the digital humanities. It addresses several issues of collaboration, from the multiple perspectives of institutions, pro

  9. Next Generation Simulation Framework for Robotic and Human Space Missions

    Science.gov (United States)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  10. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    Abstract—This paper introduces a new method to determine a person’s pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot...’s pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments...
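The person-centered adaptive potential idea can be illustrated with a toy sketch. Everything below (the quadratic attraction toward an approach point in front of the person, the Gaussian repulsion over personal space, and all gains) is an assumption for illustration, not the authors' formulation:

```python
import numpy as np

def social_potential(robot_xy, person_xy, person_theta,
                     k_attract=1.0, k_social=2.0, sigma=0.8):
    """Toy person-centered potential: quadratic attraction toward an
    approach point 1.2 m in front of the person, plus a Gaussian
    repulsion over the person's personal space. All parameters are
    illustrative assumptions."""
    approach = person_xy + 1.2 * np.array([np.cos(person_theta),
                                           np.sin(person_theta)])
    d_goal = np.linalg.norm(robot_xy - approach)
    d_person = np.linalg.norm(robot_xy - person_xy)
    return k_attract * d_goal**2 + k_social * np.exp(-d_person**2 / (2 * sigma**2))

def gradient_step(robot_xy, person_xy, person_theta, step=0.05, eps=1e-4):
    """One numerical gradient-descent step on the potential."""
    grad = np.zeros(2)
    for i in range(2):
        dx = np.zeros(2)
        dx[i] = eps
        grad[i] = (social_potential(robot_xy + dx, person_xy, person_theta)
                   - social_potential(robot_xy - dx, person_xy, person_theta)) / (2 * eps)
    return robot_xy - step * grad
```

Iterating `gradient_step` drives the robot toward a frontal approach position while the repulsive term keeps it out of the person's immediate personal space.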

  11. Robot, human and communication; Robotto/ningen/comyunikeshon

    Energy Technology Data Exchange (ETDEWEB)

    Suehiro, T.

    1996-04-10

    Recently, interest has been growing in robots that work alongside humans and share their living environment. Such robots require greater adaptability to their environment and greater system robustness than conventional robots. Above all, communication between humans and robots during cooperation is emerging as a new problem. For industrial robots, human-robot cooperation has so far been limited to programming: this suits the repetition of identical motions, so the applicable work was restricted to comparatively simple factory tasks, and it was difficult to partially change the task content or to transfer the robot to other work. By contrast, remote-controlled intelligent work robots, typified by robots for critical work, allow human-robot cooperation through operation from a remote location. This paper examines communication for robots that live with human beings. 17 refs., 1 fig.

  12. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  13. Companies' human capital required for collaboration

    DEFF Research Database (Denmark)

    Albats, Ekaterina; Bogers, Marcel; Podmetina, Daria

    Universities are widely acknowledged as an important source of knowledge for corporate innovation, and collaboration with universities plays an important role in companies' open innovation strategy. However, little is known about the human capital components required for collaboration with universities. Analysing the results of a survey among over 500 company managers, we define the universal employee skills required for a company's successful collaborations with external stakeholders. Then, through analysing qualitative interview data, we distinguish between these skills and capabilities... building, relationship building, IPR management and negotiation in the context of collaboration with universities. Our research has revealed the importance of expectation-management skills in the university-industry collaboration (UIC) context. We found that human capital for UIC is to be continuously...

  14. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    Science.gov (United States)

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
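The outer loop's LQR formulation has a standard model-based solution via discrete-time Riccati iteration; the point of the paper's integral reinforcement learning is to recover the same gain without knowing the plant model. As a reference point, here is a minimal model-based sketch (the double-integrator plant, the weights, and the function name are illustrative assumptions, not the paper's HRI model):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via Riccati fixed-point iteration.
    Model-based baseline: the paper's integral reinforcement learning
    reaches an equivalent gain without access to the plant matrices."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Illustrative plant: a double integrator discretized at dt = 0.1
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)                 # state weight (human-effort proxy)
R = np.array([[0.1]])         # control weight
K, P = dlqr(A, B, Q, R)
```

A reinforcement-learning variant would estimate the value function from measured state trajectories instead of using `A` and `B` directly.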

  15. Human motion behavior while interacting with an industrial robot.

    Science.gov (United States)

    Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus

    2012-01-01

    Human workers and industrial robots each have specific strengths in industrial production, and they complement each other well, which motivates the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace, however, may lead to collisions, and avoiding them is a central safety requirement. This can be realized with various sensor systems, all of which decelerate the robot as the distance to the human decreases alarmingly and apply an emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers because the robot accrues long idle times; optimized path-planning algorithms have to be developed to avoid this. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities are prompted under three robot speed levels, and a motion-tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when an encounter occurs. Approach to the workbenches was influenced by the robot in 10 of 15 cases. Furthermore, participants' walking velocity increased with higher robot velocities.

  16. Robots as Imagined in the Television Series Humans.

    Science.gov (United States)

    Wicclair, Mark R

    2018-07-01

    Humans is a science fiction television series set in what appears to be present-day London. What makes it science fiction is that in London and worldwide, there are robots that look like humans and can mimic human behavior. The series raises several important ethical and philosophical questions about artificial intelligence and robotics, which should be of interest to bioethicists.

  17. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Aimi S. Ghazali

    2018-06-01

    The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study in which SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while they played a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and evokes less psychological reactance than one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). The gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest that robots intended to influence human behavior should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure that they will also trust the robot.

  18. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    Science.gov (United States)

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots should balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.

  19. Modeling Leadership Styles in Human-Robot Team Dynamics

    Science.gov (United States)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to which situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in tele-operation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  20. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  1. Human Robotic Systems (HRS): Robotic ISRU Acquisition Element

    Data.gov (United States)

    National Aeronautics and Space Administration — During 2014, the Robotic ISRU Resource Acquisition project element will develop two technologies: Exploration Ground Data Systems (xGDS) and Sample Acquisition on...

  2. Using Empathy to Improve Human-Robot Relationships

    Science.gov (United States)

    Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana

    For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, being empathic toward one of them and neutral toward the other. The results of a pilot study suggest that users to whom the robot was empathic perceived the robot more as a friend.

  3. Self-Organization and Human Robots

    Directory of Open Access Journals (Sweden)

    Chris Lucas

    2008-11-01

    Humans are rather funny things; we often tend to imagine that we are so "special", so divorced by our supposed "intelligence" from the influences of the "natural world" and so unique in our "abstracting" abilities. We have this persistent delusion, evident since ancient Greek times, that we are "rational", that we can behave as "disinterested observers" of our world, which manifests in AI thought today in a belief that, in like manner, we can "design", God-like, from afar, our replacements: those "super-robots" that will do everything that we can imagine doing, but in much "better" ways than we can achieve, and yet can avoid doing anything "nasty", i.e. can overcome our many human failings, obeying, I suppose, in the process, Asimov's three "laws of robotics". Such human naiveté proves, in fact, to be quite amusing, at least to those of us "schooled" in AI history. When we look at the aspirations and the expectations of our early "pioneers", and compare them to the actual reality of today, then we must, it seems, re-discover the meaning of the word "humility". Enthusiasm, good as it may be, needs to be moderated with a touch of "common sense", and if our current ways of doing things in our AI world don't really work as we had hoped, then perhaps it is time to try something different (Lucas, C., 1999).

  4. Forming Human-Robot Teams Across Time and Space

    Science.gov (United States)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system also runs when humans are in direct contact with the robot.
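The predictive control mode described above (drive a local simulation with no delay while the real robot executes the same commands a round trip later) can be sketched in a few lines. This is a toy illustration of the concept only; the class name and the integer-tick delay model are assumptions, not NASA's software.

```python
from collections import deque

class PredictiveTeleop:
    """Toy predictive-display loop: the supervisor's commands move a local
    simulation immediately, while the remote robot receives each command
    only after a fixed round-trip delay."""
    def __init__(self, delay_ticks):
        self.delay = delay_ticks
        self.pipe = deque()      # commands in flight to the remote robot
        self.sim_pos = 0.0       # instantaneous local prediction
        self.robot_pos = 0.0     # delayed ground truth

    def tick(self, command=0.0):
        self.sim_pos += command  # simulation responds with no delay
        self.pipe.append(command)
        if len(self.pipe) > self.delay:
            # the robot acts on each command `delay` ticks later
            self.robot_pos += self.pipe.popleft()

teleop = PredictiveTeleop(delay_ticks=3)
for _ in range(5):
    teleop.tick(command=1.0)     # drive forward
# sim has applied 5 commands; the robot has applied only 5 - 3 = 2
```

In a real system the pipe would carry full command packets and the simulation state would be periodically reconciled against the delayed telemetry stream.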

  5. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

    We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. Our aim was to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors, in which we identified typical error situations and the resulting social signals of our participants during social human-robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the user study reported here we purposefully elicited robot errors to further explore the social signals of human interaction partners following a robot error. Our participants interacted with a human-like NAO, and the robot either performed faultily or error-free. First, the robot asked the participants a set of predefined questions, and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence, and we interviewed them about their opinion of the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot, as well as the answers they provided to the robot. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior.

  6. New trends in medical and service robots human centered analysis, control and design

    CERN Document Server

    Chevallereau, Christine; Pisla, Doina; Bleuler, Hannes; Rodić, Aleksandar

    2016-01-01

    Medical and service robotics integrates several disciplines and technologies such as mechanisms, mechatronics, biomechanics, humanoid robotics, exoskeletons, and anthropomorphic hands. This book presents the most recent advances in medical and service robotics, with a stress on human aspects. It collects the selected peer-reviewed papers of the Fourth International Workshop on Medical and Service Robots, held in Nantes, France in 2015, covering topics on: exoskeletons, anthropomorphic hands, therapeutic robots and rehabilitation, cognitive robots, humanoid and service robots, assistive robots and elderly assistance, surgical robots, human-robot interfaces, BMI and BCI, haptic devices and design for medical and assistive robotics. This book offers a valuable addition to existing literature.

  7. Safe physical human robot interaction- past, present and future

    International Nuclear Information System (INIS)

    Pervez, Aslam; Ryu, Jeha

    2008-01-01

    When a robot physically interacts with a human user, the requirements should be drastically changed. The most important requirement is the safety of the human user in the sense that robot should not harm the human in any situation. During the last few years, research has been focused on various aspects of safe physical human robot interaction. This paper provides a review of the work on safe physical interaction of robotic systems sharing their workspace with human users (especially elderly people). Three distinct areas of research are identified: interaction safety assessment, interaction safety through design, and interaction safety through planning and control. The paper then highlights the current challenges and available technologies and points out future research directions for realization of a safe and dependable robotic system for human users

  8. Robots and Humans in Planetary Exploration: Working Together?

    Science.gov (United States)

    Landis, Geoffrey A.; Lyons, Valerie (Technical Monitor)

    2002-01-01

    Today's approach to human-robotic cooperation in planetary exploration focuses on using robotic probes as precursors to human exploration. A large portion of current NASA planetary surface exploration is focused on Mars, and robotic probes are seen as precursors to human exploration in: learning about operation and mobility on Mars; learning about the environment of Mars; mapping the planet and selecting landing sites for human missions; demonstrating critical technology; and manufacturing fuel and emplacing elements of human-support infrastructure before humans are present.

  9. Learning Human Aspects of Collaborative Software Development

    Science.gov (United States)

    Hadar, Irit; Sherman, Sofia; Hazzan, Orit

    2008-01-01

    Collaboration has become increasingly widespread in the software industry as systems have become larger and more complex, adding human complexity to the technological complexity already involved in developing software systems. To deal with this complexity, human-centric software development methods, such as Extreme Programming and other agile…

  10. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

    “The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state of the art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.

  11. ROBOT LEARNING OF OBJECT MANIPULATION TASK ACTIONS FROM HUMAN DEMONSTRATIONS

    Directory of Open Access Journals (Sweden)

    Maria Kyrarini

    2017-08-01

    Robot learning from demonstration is a method that enables robots to learn in a way similar to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. The multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), the model of the demonstrated trajectories is learned, and the learned trajectory is generated by Gaussian Mixture Regression (GMR) from the learned model. In the online working phase, the sequence of actions is identified, and experimental results show that the robot performs the learned task successfully.
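The trajectory-comparison step named above (DTW) is easy to sketch. Below is a minimal, generic dynamic-time-warping distance for 1-D trajectories; it illustrates the standard algorithm, not the authors' implementation, and the GMM/GMR stages are omitted:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D trajectories,
    via the standard O(n*m) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Pairwise DTW distances like this can be used to select the mutually most similar demonstrations before fitting the GMM.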

  12. Robotics Collaborative Technology Alliance (RCTA) 2011 Baseline Assessment Experimental Strategy

    Science.gov (United States)

    2011-09-01

    Traversal over different surfaces—climbing stairs, negotiating ditches, etc.—is an inherent mobility concern. The platform capability of moving over... 2 design, not run in a factorial manner. a. Goal: To establish a baseline for the height and type of stairs the SUGV is willing to climb. b... bricks to adjust heights of platforms)? (GDRS will handle this.) • Do we want to add something like bags of mulch as steps for the robot to climb or...

  13. Robots, Disability, and Good Human Life

    Directory of Open Access Journals (Sweden)

    Antonio Carnevale

    2015-02-01

    In this paper, I want to show the role that emerging robotic technologies could play in the future in the daily life of disabled people. When I talk about disability, I mean any temporary or permanent limitation due to a chronic disease or deficit, as well as socially disadvantaged conditions, which imply functional and emotional restrictions experienced at any age. All these limitations can be characterized by a specific mental or physical impairment or, more often, by a cluster of medical impairments and social barriers. In this regard, the academic literature has generally differentiated between two disability models: 'medical' versus 'social'. The main aim of this paper is to show how the development of robotic technologies, particularly in the assistive and healthcare fields, could allow us to go beyond this outdated dichotomy, contributing to creating new philosophical premises to rethink the universality of the human condition, that is, the sense of what we mean by a 'good human life'.

  14. Human exploration and settlement of Mars - The roles of humans and robots

    Science.gov (United States)

    Duke, Michael B.

    1991-01-01

    The scientific objectives and strategies for human settlement on Mars are examined in the context of the Space Exploration Initiative (SEI). An integrated strategy for humans and robots in the exploration and settlement of Mars is examined. Such an effort would feature robotic, telerobotic, and human-supervised robotic phases.

  15. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision, developed in past years by our own university and many other national and international research institutions, that addresses how increasingly human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily-life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  16. HUMAN FOLLOWING ON ROS FRAMEWORK A MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    Gigih Priyandoko

    2018-06-01

    Service mobile robots are playing an increasingly critical role in today's society as more people, such as disabled persons or the elderly, are in need of mobile robot assistance. An autonomous person-following ability is of great importance to the overall role of a service mobile robot in assisting humans. This paper focuses on developing a robot that follows a person. The robot is equipped with the necessary sensors, such as a Microsoft Kinect sensor and a Hokuyo laser sensor. Four suitable tracking methods are introduced, implemented, and tested in the person-following algorithm: face detection, leg detection, color detection, and person blob detection. All of the algorithm implementations in this project are performed using the Robot Operating System (ROS). The results showed that the mobile robot could track and follow the target person based on the person's movement.

  17. Ethorobotics: A New Approach to Human-Robot Relationship

    Directory of Open Access Journals (Sweden)

    Ádám Miklósi

    2017-06-01

    Full Text Available Here we aim to lay the theoretical foundations of human-robot relationship drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so-called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with an appropriate form of social competence then humans will eventually interact with them independent of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications.

  18. Ethorobotics: A New Approach to Human-Robot Relationship

    Science.gov (United States)

    Miklósi, Ádám; Korondi, Péter; Matellán, Vicente; Gácsi, Márta

    2017-01-01

    Here we aim to lay the theoretical foundations of human-robot relationship drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so-called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with an appropriate form of social competence then humans will eventually interact with them independent of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications. PMID:28649213

  19. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    Science.gov (United States)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of semiautonomous ground and aerial robots for source-localization tasks. The system combines augmented-reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path-planning algorithms to ensure that obstacles are avoided and that operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human-factors evaluation of this system in which several interface conditions are tested for source-detection tasks. Results show that the novel augmented-reality multi-robot control modes (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target-detection missions. Usability tests and an operator workload analysis are also reported.
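
    The abstract states that each robot can automatically generate a collision-free path to any user-selected target. The paper's planner is not specified in this record; as an illustrative stand-in, a breadth-first search over an occupancy grid yields such a path:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free path on a
    4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

    In an interface like the one described, the goal cell would come from the operator's point-and-go selection in the AR view.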

  20. Human-like Compliance for Dexterous Robot Hands

    Science.gov (United States)

    Jau, Bruno M.

    1995-01-01

    This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the secondary function of human muscle, which is to control joint stiffness: AEC is implemented by servo-controlling the stiffness of the joint drive train. The control strategy for compliant joints in teleoperation is described. It enables automatic hybrid position and force control by utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable to autonomous robot control as well. Active compliance enables dual-arm manipulation and human-like soft grasping by the robot hand, and opens the way to many new robotics applications.

  1. Visual and tactile interfaces for bi-directional human robot communication

    Science.gov (United States)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer-vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, to measure the classification accuracy of visual-signal interfaces, and provides an integration example including two robotic platforms.

  2. Anthropomorphism in Human-Robot Co-evolution.

    Science.gov (United States)

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

    Social robotics entertains a particular relationship with anthropomorphism, which it sees neither as a cognitive error nor as a sign of immaturity. Rather, it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents: social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots "social presence" and "social behaviors" that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of 'applied anthropomorphism' as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a "cheating" technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive science, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn "anthropomorphism-based" social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, "synthetic ethics," which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

  3. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

    Humanoid robotics has made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book. ...

  4. Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.

    Science.gov (United States)

    Hongbo Wang; Kosuge, K

    2012-01-01

    Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. Using feedback from a force sensor and laser range finders, the robot estimates the human leader's state with an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted-pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained. The use of the latter controller is validated by the experimental results.
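
    The paper estimates the leader's state with an extended Kalman filter over an inverted-pendulum model. As a simplified, hypothetical stand-in, the sketch below shows one predict/update cycle of a linear 1-D constant-velocity Kalman filter; the noise values and time step are illustrative, not the paper's.

```python
def kalman_step(x, v, P, z, dt=0.02, q=1e-3, r=0.01):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x, v : current position/velocity estimate of the leader
    P    : 2x2 covariance as [[p00, p01], [p10, p11]]
    z    : position measurement (e.g., from a laser range finder)
    """
    # Predict: x' = x + v*dt, v' = v; propagate covariance and add noise q.
    x = x + v * dt
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with measurement z (measurement matrix H = [1, 0]).
    s = p00 + r                     # innovation covariance
    k0, k1 = p00 / s, p10 / s      # Kalman gain
    y = z - x                      # innovation
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * p00, (1 - k0) * p01],
         [p10 - k1 * p00, p11 - k1 * p01]]
    return x, v, P
```

    The actual EKF additionally linearizes the nonlinear pendulum dynamics and measurement model at each step; the predict/update structure is the same.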

  5. Human-Robot Teaming: From Space Robotics to Self-Driving Cars

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I describe how NASA Ames has been developing and testing robots for space exploration. In our research, we have focused on studying how human-robot teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of manual control and autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars, enabling humans to support autonomous vehicles operating in unpredictable and difficult situations.

  6. Semiotics and Human-Robot Interaction

    OpenAIRE

    Sequeira, Joao; Ribeiro, M.Isabel

    2007-01-01

    The social barriers that still constrain the use of robots in modern societies will tend to vanish as interaction strategies grow more sophisticated. Communication and interaction between people and robots occurring in a friendly manner, and being accessible to everyone independent of their skills in robotics, will certainly foster the breaking of barriers. Socializing behaviors, such as following people, are relatively easy to obtain with the current state of the art in robotics. Ho...

  7. Human-Robot Interaction in High Vulnerability Domains

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

    Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. These complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission-critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots, ranging from direct manual control, to shared human-robot control, to no active human control (i.e., human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of these technologies on system performance and on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.

  8. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Science.gov (United States)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensation for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  9. Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback

    Directory of Open Access Journals (Sweden)

    Cetin Mericli

    2011-06-01

    Full Text Available A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder, and more tedious and error-prone, as the complexity of the task increases. In this paper, we present a new learning-from-demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over control to correct the behavior when the robot selects a wrong action to execute. Corrections are captured as new state-action pairs, and during autonomous execution the default controller output is replaced by the demonstrated corrections whenever the current state of the robot is determined to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball-dribbling task performed against stationary defender robots in a robot soccer scenario, using physical Aldebaran Nao humanoid robots. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
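
    The correction mechanism described above, replacing the hand-coded controller's output whenever the current state is close to a state in the correction database, can be sketched as a nearest-neighbor lookup. The similarity measure, threshold, and action names below are illustrative assumptions, not the paper's actual choices:

```python
def select_action(state, hand_coded_policy, corrections, threshold=0.5):
    """Replay the closest demonstrated correction if the current state is
    similar enough, otherwise fall back to the hand-coded controller.

    corrections: list of (state, action) pairs captured while the human
    teacher overrode the robot. States are feature tuples; similarity is
    Euclidean distance here (an illustrative choice).
    """
    best_action, best_dist = None, float("inf")
    for s, a in corrections:
        d = sum((si - sj) ** 2 for si, sj in zip(state, s)) ** 0.5
        if d < best_dist:
            best_action, best_dist = a, d
    if best_dist <= threshold:
        return best_action          # demonstrated correction wins
    return hand_coded_policy(state)  # default controller elsewhere
```

    New corrections simply append to the database during teaching, so the demonstrated behavior gradually overrides the default policy in the corrected regions of the state space.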

  10. Cognitive Neuroscience Robotics A: Synthetic Approaches to Human Understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume A describes how human cognitive functions can be replicated in artificial systems such as robots, and investigates how artificial systems could acquire intelligent behaviors through interaction with others and their environment.

  11. Applications of artificial intelligence in safe human-robot interactions.

    Science.gov (United States)

    Najmaei, Nima; Kermani, Mehrdad R

    2011-04-01

    The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme that modifies a robot's operations to accommodate the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely self-organizing maps (SOMs), is employed to obtain a superquadric-based model of the human. The SOM network receives information about the human's footprints from the sensory system and infers the data necessary for rendering the human model. The model is then used to assess the danger of the robot's operations based on measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interference between human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented to validate the effectiveness of the proposed method.
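
    A minimal illustration of the SOM idea, mapping 2-D observations (such as footprint positions on the workshop floor) onto a small chain of units, is sketched below; the dimensions, parameters, and neighborhood function are toy choices, not those of the cited system:

```python
import math
import random

def train_som(data, n_units=5, epochs=30, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D self-organizing map on 2-D points.
    Returns the list of unit weight vectors after training."""
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)
        for x, y in data:
            # Best-matching unit = unit with the closest weight vector.
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x) ** 2
                                    + (units[i][1] - y) ** 2)
            for i, w in enumerate(units):
                # Neighborhood shrinks with grid distance from the BMU.
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                w[0] += lr * h * (x - w[0])
                w[1] += lr * h * (y - w[1])
    return units
```

    After training, the unit weights summarize where the human has been observed; the cited system feeds such a representation into a superquadric human model rather than using it directly.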

  12. Transferring human impedance regulation skills to robots

    CERN Document Server

    Ajoudani, Arash

    2016-01-01

    This book introduces novel thinking and techniques to the control of robotic manipulation. In particular, the concept of teleimpedance control as an alternative method to bilateral force-reflecting teleoperation control for robotic manipulation is introduced. In teleimpedance control, a compound reference command is sent to the slave robot including both the desired motion trajectory and impedance profile, which are then realized by the remote controller. This concept forms a basis for the development of the controllers for a robotic arm, a dual-arm setup, a synergy-driven robotic hand, and a compliant exoskeleton for improved interaction performance.

  13. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    Science.gov (United States)

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor-synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive-flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.

  14. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e., expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  15. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Joachim de Greeff

    Full Text Available Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e., expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  16. Toward a framework for levels of robot autonomy in human-robot interaction.

    Science.gov (United States)

    Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A

    2014-07-01

    A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence, and are influenced by, robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.

  17. Measuring empathy for human and robot hand pain using electroencephalography.

    Science.gov (United States)

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not painful robot stimuli, although the ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to human others. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

  18. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (e.g., in terms of risk of human-robot collision), and the differences in the lexicons are analyzed.

  19. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior

    OpenAIRE

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relatio...

  20. Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2010-01-01

    This paper presents a trajectory planning algorithm for a robot operating in dynamic human environments, such as pedestrian streets, hospital corridors, and train stations. We formulate the problem as planning a minimal-cost trajectory through a potential field, defined from...... is enhanced to direct the search and account for the kinodynamic robot constraints. Compared to standard RRT, the algorithm proposed here finds the robot control input that will drive the robot towards a new sampled point in the configuration space. The effect of the input is simulated, to add a reachable...
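
    The extension step described, finding a control input that drives the robot toward a sampled point by simulating its effect, can be sketched for a unicycle model as follows; the model, control set, and time step are illustrative assumptions, not the paper's:

```python
import math

def simulate(state, v, w, dt=0.5):
    """Forward-simulate a unicycle model (x, y, heading) under control (v, w)."""
    x, y, th = state
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def rrt_extend(tree, sample, controls):
    """One kinodynamic RRT extension: from the tree node nearest the sampled
    point, try each admissible control, simulate it, and keep the resulting
    state that ends closest to the sample. Returns the new state."""
    nearest = min(tree,
                  key=lambda s: (s[0] - sample[0]) ** 2
                                + (s[1] - sample[1]) ** 2)
    best = min((simulate(nearest, v, w) for v, w in controls),
               key=lambda s: (s[0] - sample[0]) ** 2
                             + (s[1] - sample[1]) ** 2)
    tree.append(best)
    return best
```

    In the full algorithm, the sampled point is drawn with a bias from the human-derived potential field, and candidate states are also scored against that field and checked against the robot's kinodynamic limits.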

  1. Simplified Human-Robot Interaction: Modeling and Evaluation

    Directory of Open Access Journals (Sweden)

    Balazs Daniel

    2013-10-01

    Full Text Available In this paper a novel concept of human-robot interaction (HRI) modeling is proposed. Incorporating factors such as trust in automation, situational awareness, expertise, and expectations, a new user-experience framework is formed for industrial robots. Service Oriented Robot Operation, proposed in a previous paper, creates an abstraction level in HRI and is also included in the framework. This concept is evaluated with exhaustive tests. Results prove that significant improvement in task execution may be achieved and that the new system is more usable for operators with less experience in robotics, the personnel typical of small and medium-sized enterprises (SMEs).

  2. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  3. Depth camera driven mobile robot for human localization and following

    DEFF Research Database (Denmark)

    Skordilis, Nikolaos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2014-01-01

    This paper describes the design and development of a mobile robot able to locate and then follow a human target. Both the integration of the required mechatronic components and the development of appropriate software are covered. The main sensor of the developed mobile robot is an RGB-...

  4. Socially intelligent robots that understand and respond to human touch

    NARCIS (Netherlands)

    Jung, Merel Madeleine

    Touch is an important nonverbal form of interpersonal interaction which is used to communicate emotions and other social messages. As interactions with social robots are likely to become more common in the near future these robots should also be able to engage in tactile interaction with humans.

  5. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    Science.gov (United States)

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants ( N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior - (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  6. Human-Robot Site Survey and Sampling for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Bualat, Maria; Edwards, Laurence; Flueckiger, Lorenzo; Kunz, Clayton; Lee, Susan Y.; Park, Eric; To, Vinh; Utz, Hans; Ackner, Nir

    2006-01-01

    NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.

  7. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 7) Features of the Projected Analysis Approach; and 8) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  8. Human-robot interaction assessment using dynamic engagement profiles

    DEFF Research Database (Denmark)

    Drimus, Alin; Poltorak, Nicole

    2017-01-01

    This paper addresses the use of convolutional neural networks for image analysis, resulting in an engagement metric that can be used to assess the quality of human-robot interactions. We propose a method based on a pretrained convolutional network able to map emotions onto a continuous [0-1] interval, where 0 represents disengaged and 1 fully engaged. The network shows good accuracy at recognizing the engagement state of humans given positive emotions. A time-based analysis of interaction experiments between small humanoid robots and humans provides time series of engagement estimates, which...... and is applicable to humanoid robotics as well as other related contexts.
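    A hedged sketch of how per-frame emotion probabilities might be mapped to a continuous [0-1] engagement estimate and smoothed into a time series. The emotion labels, weights and window size below are illustrative assumptions, not values from the paper, which learns the mapping with a convolutional network:

```python
def engagement(frame_probs, weights=None):
    """Map per-frame emotion probabilities to an engagement score in [0, 1].
    `weights` assigns each emotion a contribution (positive emotions -> engaged)."""
    weights = weights or {"happy": 1.0, "surprise": 0.8, "neutral": 0.4, "sad": 0.1}
    s = sum(frame_probs.get(e, 0.0) * w for e, w in weights.items())
    return max(0.0, min(1.0, s))

def smooth(series, k=3):
    """Moving average over a time series of per-frame engagement estimates."""
    return [sum(series[max(0, i - k + 1):i + 1]) / len(series[max(0, i - k + 1):i + 1])
            for i in range(len(series))]
```

Smoothing matters because single-frame emotion estimates are noisy; the interaction-level profile is read from the smoothed series.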

  9. Cognitive Tools for Humanoid Robots in Space

    National Research Council Canada - National Science Library

    Sofge, Donald; Perzanowski, Dennis; Skubic, Marjorie; Bugajska, Magdalena; Trafton, J. G; Cassimatis, Nicholas; Brock, Derek; Adams, William; Schultz, Alan

    2004-01-01

    ...) to collaborate with a human. The capabilities required of the robot include voice recognition, natural language understanding, gesture recognition, spatial reasoning, and cognitive modeling with perspective-taking...

  10. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  11. Human-Automation Allocations for Current Robotic Space Operations

    Science.gov (United States)

    Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.

    2018-01-01

    Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance due to ineffective user interfaces, system designs and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI relates to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocations determine human-system performance, as they identify the functions and performance levels required of the automation/robotic system and, in turn, the work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account the capabilities and limitations of the human, automation, and robotic systems. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will significantly change in future exploration missions, particularly as crew becomes more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocations, we must first benchmark the allocations and allocation methods that have been used.
We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) To

  12. Robot Tracking of Human Subjects in Field Environments

    Science.gov (United States)

    Graham, Jeffrey; Shillcutt, Kimberly

    2003-01-01

    Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.

  13. Collaborative Assistive Robot for Mobility Enhancement (CARMEN): The bare necessities: assisted wheelchair navigation and beyond

    CERN Document Server

    Urdiales, Cristina

    2012-01-01

    In today's aging society, many people require mobility assistance. Sometimes, assistive devices need a certain degree of autonomy when users' disabilities make manual control difficult. However, clinicians report that excessive assistance may lead to loss of residual skills and to frustration. Shared control focuses on deciding when users need help and providing it. Collaborative control aims at giving just the right amount of help in a transparent, seamless way. This book presents the collaborative control paradigm. User performance may be indicative of physical/cognitive condition, so it is used to decide how much help is needed. Besides, collaborative control integrates machine and user commands so that people contribute to self-motion at all times. Collaborative control was extensively tested for 3 years using a robotized wheelchair at a rehabilitation hospital in Rome with volunteer inpatients presenting different disabilities, ranging from mild to severe. We also present a taxonomy of common metrics for wheelc...

  14. Human Robotic Systems (HRS): Robonaut 2 Technologies Element

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the Robonaut 2 (R2) Technology Project Element within Human Robotic Systems (HRS) is to developed advanced technologies for infusion into the Robonaut 2...

  15. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  16. Cognitive neuroscience robotics B: Analytic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.

  17. Affect in Human-Robot Interaction

    Science.gov (United States)

    2014-01-01

    Werry, I., Rae, J., Dickerson, P., Stribling, P., & Ogden, B. (2002). Robotic Playmates: Analysing Interactive Competencies of Children with Autism ...WE-4RII. IEEE International Conference on Intelligent Robots and Systems, Edmonton, Canada. 35. Moravec, H. (1988). Mind Children: The Future of...and if so when and where? • What approaches, theories, representations, and experimental methods inform affective HRI research? Report Documentation

  18. Moral Appearances: Emotions, Robots, and Human Morality.

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots

  19. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  20. An Integrated Human System Interaction (HSI) Framework for Human-Agent Team Collaboration, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As space missions become more complex and as mission demands increase, robots, human-robot mixed initiative teams and software autonomy applications are needed to...

  1. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems, and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children’s social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a “mental model” of the robot, tailoring the tutoring to the robot’s performance rather than teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot’s bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance. PMID:26422143

  2. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Directory of Open Access Journals (Sweden)

    Juan A. Corrales

    2011-10-01

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that addresses this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated; an inertial motion-capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace; and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each sensor during the two phases of the robot task: a first phase in which the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. The paper also presents the successful results obtained in several experimental setups, which verify the validity of the proposed approach.

  3. Presence of Life-Like Robot Expressions Influences Children’s Enjoyment of Human-Robot Interactions in the Field

    NARCIS (Netherlands)

    Cameron, David; Fernando, Samuel; Collins, Emily; Millings, Abigail; Moore, Roger; Sharkey, Amanda; Evers, Vanessa; Prescott, Tony

    Emotions, and emotional expression, have a broad influence on the interactions we have with others and are thus a key factor to consider in developing social robots. As part of a collaborative EU project, this study examined the impact of lifelike affective facial expressions, in the humanoid robot

  4. Human-Like Room Segmentation for Domestic Cleaning Robots

    Directory of Open Access Journals (Sweden)

    David Fleer

    2017-11-01

    Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room-border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
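    The two-stage pipeline the abstract describes (classify room-border edges, then cluster what remains) can be sketched as follows. The `is_border` callback stands in for the sensor-trained classifier, and taking connected components after cutting border edges is an illustrative simplification of the paper's graph clustering:

```python
def segment_rooms(nodes, edges, is_border):
    """Split a topo-metric map graph into rooms: cut edges the classifier marks
    as room borders, then take connected components of the remaining graph."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        if not is_border(u, v):  # keep only intra-room edges
            adj[u].append(v)
            adj[v].append(u)
    rooms, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:  # iterative depth-first search over the pruned graph
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        rooms.append(comp)
    return rooms
```

A misclassified border edge merges or splits rooms, which is why the paper trains the classifier on user-annotated maps.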

  5. An Exoskeleton Robot for Human Forearm and Wrist Motion Assist

    Science.gov (United States)

    Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo

    The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human-amplifier, haptic interface, rehabilitation device and assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as the elderly or slightly disabled, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for forearm pronation/supination motion, wrist flexion/extension motion and ulnar/radial deviation. The paper describes the wrist anatomy underlying the development of the exoskeleton robot, the hardware design of the exoskeleton robot and the EMG-based control method. The skin-surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin-surface EMG signals as main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assist. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.
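    As a rough illustration of EMG-driven fuzzy assist control of the kind this abstract describes, the sketch below fuzzifies a normalized muscle-activation level with triangular membership functions and defuzzifies by weighted average. The membership shapes, rule outputs and torque scale are illustrative assumptions, not the W-EXOS design:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assist_torque(emg_level, max_torque=2.0):
    """Fuzzy rules: LOW activation -> little assist, HIGH -> strong assist.
    `emg_level` is a normalized [0, 1] EMG activation (a hypothetical input)."""
    low = tri(emg_level, -0.5, 0.0, 0.5)
    med = tri(emg_level, 0.0, 0.5, 1.0)
    high = tri(emg_level, 0.5, 1.0, 1.5)
    w = low + med + high
    # Weighted-average defuzzification of the rule outputs (0.1, 0.5, 1.0).
    return max_torque * (0.1 * low + 0.5 * med + 1.0 * high) / w if w else 0.0
```

The appeal of fuzzy control here is that assist strength varies smoothly with muscle activation instead of switching between discrete assist levels.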

  6. Robot assistant versus human or another robot assistant in patients undergoing laparoscopic cholecystectomy.

    Science.gov (United States)

    Gurusamy, Kurinchi Selvan; Samraj, Kumarakrishnan; Fusai, Giuseppe; Davidson, Brian R

    2012-09-12

    The role of a robotic assistant in laparoscopic cholecystectomy is controversial. While some trials have shown distinct advantages of a robotic assistant over a human assistant, others have not, and it is unclear which robotic assistant is best. The aims of this review are to assess the benefits and harms of a robot assistant versus a human assistant or versus another robot assistant in laparoscopic cholecystectomy, and to assess whether the robot can substitute for the human assistant. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and Science Citation Index Expanded (until February 2012) to identify randomised clinical trials. Only randomised clinical trials (irrespective of language, blinding, or publication status) comparing robot assistants versus human assistants in laparoscopic cholecystectomy were considered for the review. Randomised clinical trials comparing different types of robot assistants were also considered. Two authors independently identified the trials for inclusion and independently extracted the data. We calculated the risk ratio (RR) or mean difference (MD) with 95% confidence interval (CI) using the fixed-effect and the random-effects models based on intention-to-treat analysis, when possible, using Review Manager 5. We included six trials with 560 patients. One trial involving 129 patients did not state the number of patients randomised to the two groups. In the remaining five trials 431 patients were randomised, 212 to the robot assistant group and 219 to the human assistant group. All the trials were at high risk of bias. Mortality and morbidity were reported in only one trial with 40 patients. There was no mortality or morbidity in either group. Mortality and morbidity were not reported in the remaining trials. Quality of life or the proportion of patients who were discharged as day-patient laparoscopic cholecystectomy patients were not reported in any
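    The review's effect measure, a risk ratio with a 95% confidence interval, is conventionally computed on the log scale. A minimal sketch of the standard Wald interval follows (not the Review Manager 5 implementation, and assuming all four cell counts are non-zero):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs. group B with a 95% CI on the log scale.
    Assumes non-zero event counts; meta-analysis software applies corrections."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a single 2x2 table.
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

Pooling across trials under the fixed-effect model then weights each trial's log(RR) by the inverse of its variance.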

  7. Collaborative gaming and competition for CS-STEM education using SPHERES Zero Robotics

    Science.gov (United States)

    Nag, Sreeja; Katz, Jacob G.; Saenz-Otero, Alvar

    2013-02-01

    There is widespread investment of resources in the fields of Computer Science, Science, Technology, Engineering, Mathematics (CS-STEM) education to improve STEM interests and skills. This paper addresses the goal of revolutionizing student education using collaborative gaming and competition, both in virtual simulation environments and on real hardware in space. The concept is demonstrated using the SPHERES Zero Robotics (ZR) Program which is a robotics programming competition. The robots are miniature satellites called SPHERES—an experimental test bed developed by the MIT SSL on the International Space Station (ISS) to test navigation, formation flight and control algorithms in microgravity. The participants compete to win a technically challenging game by programming their strategies into the SPHERES satellites, completely from a web browser. The programs are demonstrated in simulation, on ground hardware and then in a final competition when an astronaut runs the student software aboard the ISS. ZR had a pilot event in 2009 with 10 High School (HS) students, a nationwide pilot tournament in 2010 with over 200 HS students from 19 US states, a summer tournament in 2010 with ~150 middle school students and an open-registration tournament in 2011 with over 1000 HS students from USA and Europe. The influence of collaboration was investigated by (1) building new web infrastructure and an Integrated Development Environment where intensive inter-participant collaboration is possible, (2) designing and programming a game to solve a relevant formation flight problem, collaborative in nature—and (3) structuring a tournament such that inter-team collaboration is mandated. This paper introduces the ZR web tools, assesses the educational value delivered by the program using space and games and evaluates the utility of collaborative gaming within this framework. There were three types of collaborations as variables—within matches (to achieve game objectives), inter

  8. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    Directory of Open Access Journals (Sweden)

    Felipe Cid

    2014-04-01

    This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.

  9. Mobile app for human-interaction with sitter robots

    Science.gov (United States)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feeds and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that

  10. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task carried out in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectories of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns in these data streams and to identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns with reasonable accuracy.
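
    The chain-code/HMM pipeline described above can be sketched in a few lines. The abstract gives no implementation details, so this is a minimal illustration under stated assumptions: gestures are discretized into 8-direction chain-code symbols, one small HMM is assumed per gesture class, and the forward algorithm scores a symbol sequence under each model; the highest-likelihood model wins. The gesture names and model parameters are invented for the toy.

```python
import numpy as np

def hmm_log_likelihood(obs, start_p, trans_p, emit_p):
    """Score a discrete observation sequence with the HMM forward algorithm.

    obs:     sequence of symbols (e.g. 8-direction chain codes, 0-7)
    start_p: (n_states,) initial state probabilities
    trans_p: (n_states, n_states) transition matrix
    emit_p:  (n_states, n_symbols) emission matrix
    """
    alpha = start_p * emit_p[:, obs[0]]            # forward variable at t = 0
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
    return float(np.log(alpha.sum()))

# Hypothetical setup: one 2-state HMM per gesture class over an 8-symbol
# chain-code alphabet; a gesture is labeled by the best-scoring model.
rng = np.random.default_rng(0)
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
models = {
    "reach": (start, trans, rng.dirichlet(np.ones(8), size=2)),
    "screw": (start, trans, rng.dirichlet(np.ones(8), size=2)),
}
gesture = [0, 1, 1, 2, 3, 3, 2]                    # a chain-coded trajectory
label = max(models, key=lambda g: hmm_log_likelihood(gesture, *models[g]))
print(label)
```

    In a real system the emission and transition matrices would be trained (e.g. with Baum-Welch) from segmented demonstration data rather than drawn at random.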

  11. Abstract robots with an attitude : applying interpersonal relation models to human-robot interaction

    NARCIS (Netherlands)

    Hiah, J.L.; Beursgens, L.; Haex, R.; Perez Romero, L.M.; Teh, Y.; Bhomer, ten M.; Berkel, van R.E.A.; Barakova, E.I.

    2013-01-01

    This paper explores new possibilities for social interaction between a human user and a robot with an abstract shape. The social interaction takes place by simulating behaviors such as submissiveness and dominance and analyzing the corresponding human reactions. We used an object that has no

  12. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Full Text Available Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and by enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze-based approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and a binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of the objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than others (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The “gaze object sequence” was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that
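
    The recognizer sketched in this abstract — nearest characteristic gaze object sequence under a dynamic time warping Euclidean metric — can be illustrated compactly. The one-hot object encoding, the object ids, and the template sequences below are assumptions made for a runnable toy, not the paper's data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of vectors,
    with a Euclidean local cost (as in the abstract's recognizer)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def one_hot(seq, n_objects):
    """Encode a sequence of gazed-object ids as one-hot vectors, so the
    Euclidean cost compares which object is fixated at each step."""
    return np.eye(n_objects)[seq]

# Hypothetical characteristic "gaze object sequences", one per subtask.
templates = {
    "pour": one_hot([0, 0, 1, 1, 1, 2], 3),
    "stir": one_hot([2, 2, 2, 1, 1, 0], 3),
}
observed = one_hot([0, 1, 1, 2], 3)
label = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
print(label)
```

    Because DTW lets one sample align with several samples of the other sequence, the observed sequence matches the "pour" template exactly (distance 0) despite the differing lengths.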

  13. A human-oriented framework for developing assistive service robots.

    Science.gov (United States)

    McGinn, Conor; Cullinan, Michael F; Culleton, Mark; Kelly, Kevin

    2018-04-01

    Multipurpose robots that can perform a range of useful tasks have the potential to increase the quality of life for many people living with disabilities. Owing to factors such as high system complexity, as-yet unresolved research questions and current technology limitations, there is a need for effective strategies to coordinate the development process. Integrating established methodologies based on human-centred design and universal design, a framework was formulated to coordinate the robot design process over successive iterations of prototype development. An account is given of how the framework was practically applied to the problem of developing a personal service robot. Application of the framework led to the formation of several design goals which addressed a wide range of identified user needs. The resultant prototype solution, which consisted of several component elements, succeeded in demonstrating the performance stipulated by all of the proposed metrics. Application of the framework resulted in the development of a complex prototype that addressed many aspects of the functional and usability requirements of a personal service robot. Following the process led to several important insights which directly benefit the development of subsequent prototypes. Implications for Rehabilitation This research shows how universal design might be used to formulate usability requirements for assistive service robots. A framework is presented that guides the process of designing service robots in a human-centred way. Through practical application of the framework, a prototype robot system that addressed a range of identified user needs was developed.

  14. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  15. Extending NGOMSL Model for Human-Humanoid Robot Interaction in the Soccer Robotics Domain

    Directory of Open Access Journals (Sweden)

    Rajesh Elara Mohan

    2008-01-01

    Full Text Available In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have exhibited a growing presence in commercial markets, yet very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s and to detect a fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior humanoid robots in the RoboCup 2007 humanoid soccer league.

  16. Human guidance of mobile robots in complex 3D environments using smart glasses

    Science.gov (United States)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky, and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that reduces interaction time and distractions during interaction with the robot.

  17. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  18. Preliminary Framework for Human-Automation Collaboration

    International Nuclear Information System (INIS)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Spielman, Zachary Alexander

    2015-01-01

    The Department of Energy's Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet, as well as utilizing automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator's use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two-phased development of a preliminary HAC framework. The framework developed in the first phase was used as

  19. Preliminary Framework for Human-Automation Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The Department of Energy’s Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet, as well as utilizing automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator’s use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two-phased development of a preliminary HAC framework. The framework developed in the first phase was used as the

  20. The Relationship between Robot's Nonverbal Behaviour and Human's Likability Based on Human's Personality.

    Science.gov (United States)

    Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro

    2018-05-30

    Although robotics technology has developed immensely, people's uncertainty about engaging fully in human-robot interaction is still growing. Many recent studies have therefore considered human factors that might influence a human's likability of a robot, such as personality, and have found that compatibility between the human's and the robot's personality (expressions of personality characteristics) can enhance likability. However, it is still unclear whether specific means and strategies of robot nonverbal behaviour enhance likability in humans with different personality traits, and whether there is a relationship between a robot's nonverbal behaviours and a human's likability based on the human's personality. In this study, we investigated interaction via gaze and head-nodding behaviours (mutual gaze convergence and head nodding synchrony) between introvert/extravert participants and a robot under two communication strategies (backchanneling and turn-taking). Our findings reveal that the introvert participants are positively affected by backchanneling in the robot's head-nodding behaviour, which results in substantial head nodding synchrony, whereas the extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between a robot's nonverbal behaviour and a human's likability based on the human's personality.

  1. A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.

    Science.gov (United States)

    Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash

    2017-07-01

    This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can enhance physical human-robot interaction performance and deliver an appropriate level of assistance to the human operator.

  2. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  3. A Social Cognitive Neuroscience Stance on Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Chaminade Thierry

    2011-12-01

    Full Text Available Robotic devices, thanks to the controlled variations in their appearance and behaviors, provide useful tools to test hypotheses pertaining to social interactions. These agents were used to investigate one theoretical framework, resonance, which is defined, at the behavioral and neural levels, as an overlap between first- and third-person representations of mental states such as motor intentions or emotions. Behaviorally, we found a reduced, but significant, resonance towards a humanoid robot displaying biological motion, compared to a human. Using neuroimaging, we reported that while perceptual processes in the human occipital and temporal lobes are more strongly engaged when perceiving a humanoid robot's action than a human's, activity in areas involved in motor resonance depends on attentional modulation more strongly for artificial agents than for human agents. Altogether, these studies using artificial agents offer valuable insights into the interaction of bottom-up and top-down processes in the perception of artificial agents.

  4. Human-like robots as platforms for electroactive polymers (EAP)

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2008-03-01

    Human-like robots, which have been science fiction for many years, are increasingly becoming an engineering reality thanks to many technology advances in recent years. Humans have always sought to imitate human appearance, functions, and intelligence, and as the capability progresses such robots may become our household appliances or even companions. Biomimetic technologies are increasingly becoming common tools to support the development of such robots. As artificial muscles, electroactive polymers (EAP) offer important actuation capability for making such machines lifelike. The current limitations of EAP hamper what can be achieved in such robots, but progress is continually being made. As opposed to other human-made machines and devices, this technology raises various questions and concerns that need to be addressed. These include the need to prevent accidents, deliberate harm, or their use in crimes. In this paper the state of the art and the challenges are reviewed.

  5. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
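
    The context-dependent reward-weighted learning described above is not spelled out in this abstract, so the following is only a minimal sketch of the general idea under assumed details: perturb the policy parameters, score each perturbation with a relevance-weighted sum of candidate objective functions, and take a softmax-weighted parameter step. The objectives, relevance weights, and hyperparameters below are all invented for illustration.

```python
import numpy as np

def reward_weighted_update(theta, objectives, relevance, n_samples=50,
                           sigma=0.1, beta=5.0, seed=0):
    """One reward-weighted refinement step on policy parameters theta.

    objectives: candidate objective functions, each mapping theta -> scalar
    relevance:  weight of each candidate for the current task phase
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(n_samples, theta.size))
    # Score each perturbed parameter vector with the weighted objectives.
    returns = np.array([
        sum(w * f(theta + e) for w, f in zip(relevance, objectives))
        for e in eps
    ])
    # Softmax weights over returns: better perturbations pull harder.
    weights = np.exp(beta * (returns - returns.max()))
    weights /= weights.sum()
    return theta + weights @ eps

# Toy check: two quadratic candidate objectives with different optima; the
# relevance weighting pulls the parameters toward their weighted optimum.
f1 = lambda th: -np.sum((th - 1.0) ** 2)   # prefers theta near +1
f2 = lambda th: -np.sum((th + 1.0) ** 2)   # prefers theta near -1
theta = np.zeros(2)
for _ in range(100):
    theta = reward_weighted_update(theta, [f1, f2], relevance=[0.9, 0.1])
print(theta)   # drifts toward the weighted optimum, near [0.8, 0.8]
```

    In the paper's setting the "relevance" weights are themselves learned per task phase, which is what makes the reward weighting context-dependent.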

  6. Integration of a Skill-based Collaborative Mobile Robot in a Smart Cyber-Physical Environment

    DEFF Research Database (Denmark)

    Andersen, Rasmus Eckholdt; Hansen, Emil Blixt; Cerny, David

    2017-01-01

    The goal of this paper is to investigate the benefits of integrating collaborative robotic manipulators with autonomous mobile platforms for flexible part feeding processes in an Industry 4.0 production facility. The paper presents Little Helper 6 (LH6), consisting of a MiR100, a UR5, a Robotiq 3-Finger Gripper and a task-level software framework, called Skill Based System (SBS). The preliminary experiments performed with LH6 demonstrate that the capabilities of skill-based programming, 3D QR-based calibration, part feeding, mapping and dynamic collision avoidance are successfully executed

  7. A Taxonomy of Human-Agent Team Collaborations

    NARCIS (Netherlands)

    Neef, R.M.

    2006-01-01

    Future command teams will be heavily supported by artificial actors. This paper introduces a taxonomy of collaboration types in human-agent teams. Using two classifying dimensions, coordination type and collaboration type, eight different classes of human-agent collaborations transpire. These

  8. Advancing the Strategic Messages Affecting Robot Trust Effect: The Dynamic of User- and Robot-Generated Content on Human-Robot Trust and Interaction Outcomes.

    Science.gov (United States)

    Liang, Yuhua Jake; Lee, Seungcheol Austin

    2016-09-01

    Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, the trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people received before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by affording the persuasive influences from user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.

  9. Physical human-robot interaction of an active pelvis orthosis: toward ergonomic assessment of wearable robots.

    Science.gov (United States)

    d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola

    2017-04-14

    In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the experimented conditions; and (ii) small relative displacements between the orthotic shells and the corresponding body segments, supporting the ergonomics assessment of wearable robots.

  10. Turn-taking cue delays in human-robot communication

    NARCIS (Netherlands)

    Cuijpers, R. H.; Van Den Goor, V. J.P.

    2017-01-01

    Fluent communication between a human and a robot relies on the use of effective turn-taking cues. In human speech, staying silent after a sequence of utterances is usually accompanied by an explicit turn-yielding cue to signal the end of a turn. Here we study the effect of the timing of four

  11. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state, while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research of considering how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  12. Robotic situational awareness of actions in human teaming

    Science.gov (United States)

    Tahmoush, Dave

    2015-06-01

    When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. The twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
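
    The pipeline above — skeletonize point clouds, correlate joint velocities, classify with nearest neighbors — can be sketched as follows. This is a crude stand-in for the abstract's physics-based velocity correlation scheme, not its actual feature set; the synthetic motions, joint count, and class names are invented for the toy.

```python
import numpy as np

def velocity_features(joints):
    """joints: (T, J, 3) array of 3D joint positions over T frames.

    Correlate the per-joint speed profiles and keep the upper triangle of
    the correlation matrix as a fixed-length motion feature."""
    vel = np.diff(joints, axis=0)            # frame-to-frame velocities
    speed = np.linalg.norm(vel, axis=2)      # (T-1, J) joint speeds
    corr = np.corrcoef(speed.T)              # (J, J) joint correlations
    return corr[np.triu_indices_from(corr, k=1)]

def classify(sample, train_feats, train_labels):
    """Nearest-neighbor classification over the correlation features."""
    f = velocity_features(sample)
    dists = [np.linalg.norm(f - g) for g in train_feats]
    return train_labels[int(np.argmin(dists))]

# Synthetic motions: three joints tracing sinusoids whose per-joint phase
# offset differs between two made-up action classes.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 60)

def motion(phase):
    cols = [np.stack([np.sin(t + phase * j), 0 * t, 0 * t], axis=1)
            for j in range(3)]
    return np.stack(cols, axis=1) + rng.normal(0, 0.01, (60, 3, 3))

feats = [velocity_features(motion(0.0)), velocity_features(motion(1.5))]
labels = ["wave", "reach"]
print(classify(motion(0.0), feats, labels))
```

    Correlation features of this kind are invariant to where the person stands, which is convenient when the skeleton comes from a moving ladar sensor.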

  13. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  14. Utilization of Human-Like Pelvic Rotation for Running Robot

    Directory of Open Access Journals (Sweden)

    Takuya eOtani

    2015-07-01

    Full Text Available The spring loaded inverted pendulum (SLIP) is used to model human running. It is based on a characteristic feature of human running, in which the linear-spring-like motion of the standing leg is produced by the joint stiffness of the knee and ankle. Although this model is widely used in robotics, it does not include human-like pelvic motion. In this study, we show that the pelvis actually contributes to the increase in jumping force and absorption of landing impact. On the basis of this finding, we propose a new model, SLIP2 (spring loaded inverted pendulum with pelvis), to improve running in humanoid robots. The model is composed of a body mass, a pelvis, and leg springs, and it can control its springs while running by use of pelvic movement in the frontal plane. To achieve running motions, we developed a running control system that includes a pelvic oscillation controller to attain control over jumping power and a landing placement controller to adjust the running speed. We also developed a new running robot by using the SLIP2 model and performed hopping and running experiments to evaluate the model. The developed robot could accomplish hopping motions only by pelvic movement. The results also established that the difference between the pelvic rotational phase and the oscillation phase of the vertical mass displacement affects the jumping force. In addition, the robot demonstrated the ability to run with a foot placement controller depending on the reference running speed.
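As a rough illustration of the spring-mass principle behind SLIP (not the paper's pelvis-augmented model, and with made-up parameters), a one-dimensional point mass on a linear leg spring can be integrated through stance and flight:

```python
# 1-D sketch of spring-mass hopping: below the leg rest length L0 the
# leg spring pushes the mass up (stance); above it the mass is ballistic
# (flight). Mass, stiffness, and time step are illustrative only.
M, K, L0, G, DT = 70.0, 20000.0, 1.0, 9.81, 1e-4

def hop(y, vy, t_end=1.0):
    """Semi-implicit Euler integration; returns the rebound peak height."""
    peak = y
    t = 0.0
    while t < t_end:
        force = K * (L0 - y) if y < L0 else 0.0  # stance: compressed spring
        vy += (force / M - G) * DT
        y += vy * DT
        peak = max(peak, y)
        t += DT
    return peak

# Touch down at the leg rest length with 1 m/s downward velocity; the
# undamped spring rebounds the mass into a short flight phase.
print(round(hop(1.0, -1.0), 3))
```

The SLIP2 idea in the record adds an actuated pelvis so that the effective spring can be modulated during stance, which a fixed linear spring like this cannot do.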

  15. Robotic Billiards: Understanding Humans in Order to Counter Them.

    Science.gov (United States)

    Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra

    2016-08-01

    Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.
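The record's planning idea, an MDP whose transition probabilities already fold in a model of the opponent's likely response, can be sketched with a toy state space. All states, probabilities, and rewards below are invented for illustration; the actual pool model is far richer.

```python
# transitions[state][action] -> list of (prob, next_state, reward),
# with probabilities pre-weighted by the modeled opponent behavior.
transitions = {
    "easy_layout": {"safe": [(1.0, "easy_layout", 1.0)],
                    "risky": [(0.6, "win", 5.0), (0.4, "opponent_turn", -2.0)]},
    "opponent_turn": {"wait": [(0.5, "easy_layout", 0.0), (0.5, "lose", -5.0)]},
    "win": {}, "lose": {},   # terminal states
}

def value_iteration(gamma=0.9, iters=100):
    """Standard value iteration over the opponent-weighted transitions."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s, acts in transitions.items():
            if acts:
                V[s] = max(sum(p * (r + gamma * V[ns]) for p, ns, r in outs)
                           for outs in acts.values())
    return V

V = value_iteration()
print(round(V["easy_layout"], 2))
```

Because the opponent model is baked into the transition distribution, a stroke that leaves the table hard for this particular opponent simply looks like a higher-value transition to the planner.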

  16. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis of recent as well as future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-lookin...

  17. Investigation of the Impedance Characteristic of Human Arm for Development of Robots to Cooperate with Humans

    Science.gov (United States)

    Rahman, Md. Mozasser; Ikeura, Ryojun; Mizutani, Kazuki

    In the near future many aspects of our lives will be encompassed by tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, etc. will be indispensable. As a result, robots need to be made human-friendly and to execute tasks in cooperation with humans. Control systems for such robots should be designed to imitate human characteristics. In this study, we have tried to achieve these goals by controlling a simple one-degree-of-freedom cooperative robot. Firstly, the impedance characteristic of the human arm in a cooperative task is investigated. Then, this characteristic is implemented to control a robot so as to perform cooperative tasks with humans. A human followed the motion of an object, which was moved through desired trajectories actuated by the linear motor of the one-degree-of-freedom robot system. The trajectories used in the experiments were minimum-jerk trajectories (jerk being the rate of change of acceleration), which have been observed in human-human cooperative tasks and are optimal for muscle movement. As muscle is mechanically analogous to a spring-damper system, a simple second-order equation considering mass, stiffness, and damping is used as a model of the arm dynamics. The impedance parameters are calculated from the position and force data obtained in the experiments, based on parametric model estimation. The investigated impedance characteristic of the human arm is then implemented to control a robot performing a cooperative task with a human. It is observed that the proposed control methodology gives the robot human-like movements when cooperating with a human.
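The minimum-jerk profile mentioned in this record has a standard closed form, x(t) = x0 + (xf - x0)(10τ³ - 15τ⁴ + 6τ⁵) with τ = t/T; it starts and ends with zero velocity and acceleration. A sketch (endpoints, amplitude, and duration here are arbitrary):

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position profile from x0 to xf over duration T."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Endpoints are reached smoothly; the midpoint is exactly halfway.
print(minimum_jerk(0.0, 0.3, 2.0, 0.0))  # 0.0
print(minimum_jerk(0.0, 0.3, 2.0, 2.0))  # 0.3
print(minimum_jerk(0.0, 0.3, 2.0, 1.0))  # 0.15
```

Feeding such a profile to the linear motor gives the human partner the smooth, human-like motion that the study exploits when estimating arm impedance.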

  18. Everyday robotic action: Lessons from human action control

    Directory of Open Access Journals (Sweden)

    Roy eDe Kleijn

    2014-03-01

    Full Text Available Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally-extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations.

  19. Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.

    Science.gov (United States)

    Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J

    2017-07-01

    Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between human and robots. Most of the current work in studying this interaction has been from a haptic perspective. Through this paper, we present metrics that can be used to identify if a physical interaction occurred between two people using kinematics. We present a simple Activity of Daily Living (ADL) task which involves a simple interaction. We show that we can use these metrics to successfully identify interactions.

  20. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    Full Text Available This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  1. Human-Robot Interaction: Intention Recognition and Mutual Entrainment

    Science.gov (United States)

    2012-08-18

    bility, but only the human arm is modeled, with linear, low-pass-filter type transfer functions [16]. The coupled dynamics in pHRI has been intensively ...be located inside or on the edge of the polygon. IV. DISCUSSION A. Issues in Implementing LIPM on a Mobile Robot Instead of focusing on kinesiology

  2. Durable Tactile Glove for Human or Robot Hand

    Science.gov (United States)

    Butzer, Melissa; Diftler, Myron A.; Huber, Eric

    2010-01-01

    A glove containing force sensors has been built as a prototype of tactile sensor arrays to be worn on human hands and anthropomorphic robot hands. The force sensors of this glove are mounted inside, in protective pockets; as a result of this and other design features, the present glove is more durable than earlier models.

  3. Designing a Social Environment for Human-Robot Cooperation.

    Science.gov (United States)

    Amram, Fred M.

    Noting that work is partly a social activity, and that workers' psychological and emotional needs influence their productivity, this paper explores avenues for improving human-robot cooperation and for enhancing worker satisfaction in the environment of flexible automation. The first section of the paper offers a brief overview of the…

  4. Peer-to-Peer Human-Robot Interaction for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2004-01-01

    NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.

  5. Sampling Based Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    2010-01-01

    Open-ended human environments, such as pedestrian streets, hospital corridors, train stations etc., are places where robots start to emerge. Hence, being able to plan safe and natural trajectories in these dynamic environments is an important skill for future generations of robots. In this work...... the problem is formulated as planning a minimal cost trajectory through a potential field, defined from the perceived position and motion of persons in the environment. A modified Rapidly-exploring Random Tree (RRT) algorithm is proposed as a solution to the planning problem. The algorithm implements a new...... for the uncertainty in the dynamic environment. The planning algorithm is demonstrated in a simulated pedestrian street environment....
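The core RRT loop behind this record's planner can be sketched in a few lines: sample a point (with a goal bias), extend the nearest tree node a fixed step toward it, and stop once the tree reaches the goal. The person-aware cost field and the thesis's modifications are abstracted away; workspace size, step length, and goal bias are illustrative.

```python
import math, random

random.seed(1)
STEP = 0.5

def rrt(start, goal, n_iter=2000):
    """Grow a tree from start; return a waypoint path once near the goal."""
    tree = {start: None}  # node -> parent
    for _ in range(n_iter):
        # 10% goal bias, otherwise a uniform sample in a 10 x 10 workspace.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda q: math.dist(q, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d)
        tree[new] = near
        if math.dist(new, goal) < STEP:
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (8.0, 8.0))
print(len(path), "waypoints")
```

In the thesis's setting, each candidate extension would additionally be scored against the potential field induced by nearby persons, so the tree preferentially grows through socially acceptable space.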

  6. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  7. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  8. Dynamic perceptions of human-likeness while interacting with a social robot

    NARCIS (Netherlands)

    Ruijten, P.A.M.; Cuijpers, R.H.

    2017-01-01

    In human-robot interaction research, much attention is given to the development of socially assistive robots that can have natural interactions with their users. One crucial aspect of such natural interactions is that the robot is perceived as human-like. Much research already exists that

  9. Artificial companions: empathy and vulnerability mirroring in human-robot relations

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends (among other things) on the robot’s capacity to be a recipient of human empathy, and

  10. Interaction debugging : an integral approach to analyze human-robot interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2006-01-01

    Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction for evaluation purposes fosters the development of improved systems and the

  11. Human Factors Consideration for the Design of Collaborative Machine Assistants

    Science.gov (United States)

    Park, Sung; Fisk, Arthur D.; Rogers, Wendy A.

    Recent improvements in technology have facilitated the use of robots and virtual humans not only in entertainment and engineering but also in the military (Hill et al., 2003), healthcare (Pollack et al., 2002), and education domains (Johnson, Rickel, & Lester, 2000). As active partners of humans, such machine assistants can take the form of a robot or a graphical representation and serve the role of a financial assistant, a health manager, or even a social partner. As a result, interactive technologies are becoming an integral component of people's everyday lives.

  12. Human-automation collaboration in manufacturing: identifying key implementation factors

    OpenAIRE

    Charalambous, George; Fletcher, Sarah; Webb, Philip

    2013-01-01

    Human-automation collaboration refers to the concept of human operators and intelligent automation working together interactively within the same workspace without conventional physical separation. This concept has commanded significant attention in manufacturing because of the potential applications, such as the installation of large sub-assemblies. However, the key human factors relevant to human-automation collaboration have not yet been fully investigated. To maximise effective implement...

  13. Robot sex and consent : Is consent to sex between a robot and a human conceivable, possible, and desirable?

    NARCIS (Netherlands)

    Frank, L.; Nyholm, S.

    2017-01-01

    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper

  14. Acquisition of Human Operation Characteristics for Kite-based Tethered Flying Robot using Human Operation Data

    OpenAIRE

    Todoroki, Chiaki; Takahashi, Yasutake; Nakamura, Takayuki

    2015-01-01

    This paper shows human skill acquisition systems to control the kite-based tethered flying robot. The kite-based tethered flying robot has been proposed as a flying observation system with long-term activity capability [1]. It is a relatively new system and aimed to complement other information gathering systems using a balloon or an air vehicle. This paper shows some approaches of human operation characteristics acquisition based on a fuzzy learning controller, k-nearest neighbor algorithm, and ...

  15. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

    Manuel eGiuliani; Nicole eMirnig; Gerald eStollnberger; Susanne eStadler; Roland eBuchner; Manfred eTscheligi

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows tha...

  16. Robot Control Overview: An Industrial Perspective

    Directory of Open Access Journals (Sweden)

    T. Brogårdh

    2009-07-01

    Full Text Available One key competence for robot manufacturers is robot control, defined as all the technologies needed to control the electromechanical system of an industrial robot. By means of modeling, identification, optimization, and model-based control it is possible to reduce robot cost, increase robot performance, and solve requirements from new automation concepts and new application processes. Model-based control, including kinematics error compensation, optimal servo reference- and feed-forward generation, and servo design, tuning, and scheduling, has meant a breakthrough for the use of robots in industry. Relying on this breakthrough, new automation concepts such as high performance multi robot collaboration and human robot collaboration can be introduced. Robot manufacturers can build robots with more compliant components and mechanical structures without losing performance and robots can be used also in applications with very high performance requirements, e.g., in assembly, machining, and laser cutting. In the future it is expected that the importance of sensor control will increase, both with respect to sensors in the robot structure to increase the control performance of the robot itself and sensors outside the robot related to the applications and the automation systems. In this connection sensor fusion and learning functionalities will be needed together with the robot control for easy and intuitive installation, programming, and maintenance of industrial robots.

  17. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.

    Science.gov (United States)

    Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi

    2018-03-26

    For a natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instructions from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent gets intrinsic motivation-based rewards through an action-conditional predictive model. By using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also took more human-like decisions, on a test dataset, than a robot which received direct rewards for task achievement. Copyright © 2018 Elsevier Ltd. All rights reserved.
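The intrinsic-motivation idea in this record can be caricatured in a few lines: reward the agent with the prediction error of an action-conditional model, so transitions the model already predicts well stop being rewarding. This is a schematic sketch, not the paper's architecture (which uses a learned predictive network over rich sensory input); the tabular model and scalar "observation" here are invented for illustration.

```python
model = {}  # (state, action) -> predicted scalar "next observation"

def intrinsic_reward(state, action, observed, lr=0.5):
    """Reward = prediction error; the model then moves toward the observation."""
    pred = model.get((state, action), 0.0)
    error = abs(observed - pred)
    model[(state, action)] = pred + lr * (observed - pred)
    return error

# Repeating the same transition: the model learns it and the reward decays,
# pushing the agent toward interactions it cannot yet predict.
rewards = [intrinsic_reward("greet", "wave", 1.0) for _ in range(4)]
print([round(r, 3) for r in rewards])  # [1.0, 0.5, 0.25, 0.125]
```

The decaying reward is exactly what drives exploration of unfamiliar social situations without any task-specific reward signal from people.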

  18. Automation and Robotics for Human Mars Exploration (AROMA)

    Science.gov (United States)

    Hofmann, Peter; von Richter, Andreas

    2003-01-01

    Automation and Robotics (A&R) systems are a key technology for Mars exploration. All over the world initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. From December 2000 to February 2002 Kayser-Threde GmbH, Munich, Germany led a study called AROMA (Automation and Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals of this effort is to initiate new developments and to maintain the competitiveness of European industry within this field. © 2003 Published by Elsevier Science Ltd.

  19. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  20. A Dual Launch Robotic and Human Lunar Mission Architecture

    Science.gov (United States)

    Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David

    2010-01-01

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. 

  1. Quantifying and Maximizing Performance of a Human-Centric Robot under Precision, Safety, and Robot Specification Constraints

    Data.gov (United States)

    National Aeronautics and Space Administration — The research project is an effort towards achieving 99.99% safety of mobile robots working alongside humans while matching the precision performance of industrial...

  2. Cognitive Emotional Regulation Model in Human-Robot Interaction

    OpenAIRE

    Liu, Xin; Xie, Lun; Liu, Anqi; Li, Dan

    2015-01-01

    This paper integrated the Gross cognitive process into the HMM (hidden Markov model) emotional regulation method and implemented human-robot emotional interaction with facial expressions and behaviors. Here, energy was the psychological driving force of emotional transition in the cognitive emotional model. The input facial expression was translated into external energy by expression-emotion mapping. The robot's next emotional state was determined by the cognitive energy (the stimulus after cognition...

  3. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    Science.gov (United States)

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  4. Singularity now: using the ventricular assist device as a model for future human-robotic physiology.

    Science.gov (United States)

    Martin, Archer K

    2016-04-01

    In our 21st-century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices (VADs).

  5. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed

    Directory of Open Access Journals (Sweden)

    Anne Reuten

    2018-05-01

    Full Text Available Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and of comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.

  6. Goal inferences about robot behavior : goal inferences and human response behaviors

    NARCIS (Netherlands)

    Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.

    2014-01-01

    This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.

  7. Ontological Reasoning for Human-Robot Teaming in Search and Rescue Missions

    NARCIS (Netherlands)

    Bagosi, T.; Hindriks, k.V.; Neerincx, M.A.

    2016-01-01

    In search and rescue missions robots are used to help rescue workers in exploring the disaster site. Our research focuses on how multiple robots and rescuers act as a team, and build up situation awareness. We propose a multi-agent system where each agent supports one member, either human or robot.

  8. Preparing for Humans at Mars, MPPG Updates to Strategic Knowledge Gaps and Collaboration with Science Missions

    Science.gov (United States)

    Baker, John; Wargo, Michael J.; Beaty, David

    2013-01-01

    The Mars Program Planning Group (MPPG) was an agency wide effort, chartered in March 2012 by the NASA Associate Administrator for Science, in collaboration with NASA's Associate Administrator for Human Exploration and Operations, the Chief Scientist, and the Chief Technologist. NASA tasked the MPPG to develop foundations for a program-level architecture for robotic exploration of Mars that is consistent with the President's challenge of sending humans to the Mars system in the decade of the 2030s and responsive to the primary scientific goals of the 2011 NRC Decadal Survey for Planetary Science. The Mars Exploration Program Analysis Group (MEPAG) also sponsored a Precursor measurement Strategy Analysis Group (P-SAG) to revisit prior assessments of required precursor measurements for the human exploration of Mars. This paper will discuss the key results of the MPPG and P-SAG efforts to update and refine our understanding of the Strategic Knowledge Gaps (SKGs) required to successfully conduct human Mars missions.

  9. The relation between people's attitudes and anxiety towards robots in human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2013-01-01

    This paper examines the relation between an interaction with a robot and people's attitudes and emotions towards robots. In our study, participants had an acquaintance talk with a social robot, and both their general attitude and anxiety towards social robots were measured before and after the

  10. Collaborative Assembly Operation between Two Modular Robots Based on the Optical Position Feedback

    Directory of Open Access Journals (Sweden)

    Liying Su

    2009-01-01

    Full Text Available This paper studies the cooperation between two master-slave modular robots. A cooperative robot system is set up with two modular robots and a dynamic optical measurement system, the Optotrak. With the Optotrak, the positions of the end effectors are measured as optical position feedback, which is used to adjust the robots' end positions. A tri-layered motion controller is designed for the two cooperative robots. The resolved motion rate control (RMRC) method is adopted to drive the master robot to the desired position. Subject to the kinematic constraints of the two robots, including position and pose, joint velocity, and acceleration constraints, the two robots can cooperate well. A bolt-and-nut assembly experiment is executed to verify the methods.
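    The RMRC step this record describes, mapping a Cartesian position error measured by the optical feedback into joint velocities, can be sketched as follows. This is a minimal illustration for a planar two-link arm with assumed link lengths and gains, not the paper's actual controller:

    ```python
    import numpy as np

    L1, L2 = 0.4, 0.3  # link lengths (m), assumed values for illustration

    def forward_kinematics(q):
        """End-effector position for joint angles q = [q1, q2]."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def jacobian(q):
        """Analytic Jacobian of the planar 2-link arm."""
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    def rmrc_step(q, target, measured, dt=0.01, k=2.0):
        """One RMRC step: `measured` stands in for the optical position
        feedback; the proportional gain k shapes the error correction."""
        v = k * (target - measured)            # desired Cartesian velocity
        dq = np.linalg.pinv(jacobian(q)) @ v   # joint rates via pseudoinverse
        return q + dq * dt

    # Converge from an initial pose to a target point.
    q = np.array([0.3, 0.5])
    target = np.array([0.5, 0.2])
    for _ in range(2000):
        q = rmrc_step(q, target, forward_kinematics(q))
    print(np.round(forward_kinematics(q), 3))
    ```

    The Jacobian pseudoinverse turns the Cartesian correction into joint-rate commands; in the paper's setup the `measured` argument would come from the Optotrak rather than from forward kinematics.
    
    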

  11. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

    Science.gov (United States)

    2011-10-01

    directly affects the willingness of people to accept robot-produced information, follow robots' suggestions, and thus benefit from the advantages inherent...perceived complexity of operation). Consequently, if the perceived risk of using the robot exceeds its perceived benefit, practical operators almost...necessary presence of a human caregiver (Graf, Hans, & Schraft, 2004). Other robotic devices, such as wheelchairs (Yanco, 2001) and exoskeletons (e.g

  12. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
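    The gaze-contingent coupling described above, where the robot returns the user's face looks and mutual gaze accumulates, can be illustrated with a toy model. The frame timeline, response latency, and counting rule are assumptions for illustration, not the study's implementation:

    ```python
    def mutual_gaze_frames(user_gaze, latency=1):
        """Count frames of mutual gaze when the robot mirrors the user's
        face-directed gaze after a fixed response latency (in frames)."""
        robot_gaze = [False] * latency + user_gaze[:-latency]
        return sum(u and r for u, r in zip(user_gaze, robot_gaze))

    # Hypothetical per-frame record of whether the user looks at the robot's face.
    timeline = [False, True, True, True, False, True, True, False]
    print(mutual_gaze_frames(timeline))  # 3 overlapping frames
    ```

    Even this toy version shows the study's qualitative point: the more promptly the robot mirrors face looks (smaller `latency`), the more mutual-gaze frames the pair accumulates.
    
    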

  13. Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Lorino, P; Altwegg, J M

    1985-05-01

    This article, which is aimed at the general reader, examines the latest developments in, and the role of, modern robotics. The 7 main sections are sub-divided into 27 papers presented by 30 authors. The sections are as follows: 1) The role of robotics, 2) Robotics in the business world and what it can offer, 3) Study and development, 4) Utilisation, 5) Wages, 6) Conditions for success, and 7) Technological dynamics.

  14. Advanced Technologies for Robotic Exploration Leading to Human Exploration: Results from the SpaceOps 2015 Workshop

    Science.gov (United States)

    Lupisella, Mark L.; Mueller, Thomas

    2016-01-01

    This paper will provide a summary and analysis of the SpaceOps 2015 Workshop all-day session on "Advanced Technologies for Robotic Exploration, Leading to Human Exploration", held at Fucino Space Center, Italy on June 12th, 2015. The session was primarily intended to explore how robotic missions and robotics technologies more generally can help lead to human exploration missions. The session included a wide range of presentations that were roughly grouped into (1) broader background, conceptual, and high-level operations concepts presentations such as the International Space Exploration Coordination Group Roadmap, followed by (2) more detailed narrower presentations such as rover autonomy and communications. The broader presentations helped to provide context and specific technical hooks, and helped lay a foundation for the narrower presentations on more specific challenges and technologies, as well as for the discussion that followed. The discussion that followed the presentations touched on key questions, themes, actions and potential international collaboration opportunities. Some of the themes that were touched on were (1) multi-agent systems, (2) decentralized command and control, (3) autonomy, (4) low-latency teleoperations, (5) science operations, (6) communications, (7) technology pull vs. technology push, and (8) the roles and challenges of operations in early human architecture and mission concept formulation. A number of potential action items resulted from the workshop session, including: (1) using CCSDS as a further collaboration mechanism for human mission operations, (2) making further contact with subject matter experts, (3) initiating informal collaborative efforts to allow for rapid and efficient implementation, and (4) exploring how SpaceOps can support collaboration and information exchange with human exploration efforts. This paper will summarize the session and provide an overview of the above subjects as they emerged from the SpaceOps 2015

  15. Human-Robot Interaction Literature Review

    Science.gov (United States)

    2012-03-01

    electrical and electronics engineering, telecommunications and other technical fields. (IEEE, 2007) NTIS – National Technical Information Service NTIS...of a task is automated. However, performance recovery is much lower if automation fails and the human is excluded from the implementation of the task...Vincenzi, D.A. (2009). Effects of Systems Automation Management Strategies and Multi-mission Operator-to-vehicle Ratio on Operator Performance in

  16. The Age of Human-Robot Collaboration: Deep Sea Exploration

    KAUST Repository

    Khatib, Oussama

    2018-01-01

    The promise of oceanic discovery has intrigued scientists and explorers for centuries, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. Reaching

  17. Motor contagion during human-human and human-robot interaction.

    Science.gov (United States)

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners, except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  18. Motor contagion during human-human and human-robot interaction.

    Directory of Open Access Journals (Sweden)

    Ambra Bisio

    Full Text Available Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners, except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  19. [Human-robot global Simulink modeling and analysis for an end-effector upper limb rehabilitation robot].

    Science.gov (United States)

    Liu, Yali; Ji, Linhong

    2018-02-01

    Robot rehabilitation has been a primary therapy method for the urgent rehabilitation demands of paralyzed patients after a stroke. The parameters in rehabilitation training, such as the range of the training, which should be adjustable according to each participant's functional ability, are the key factors influencing the effectiveness of rehabilitation therapy. Therapists design rehabilitation projects based on semiquantitative functional assessment scales and their experience, but such experience-based therapies cannot be directly implemented in robot rehabilitation therapy. This paper modeled the global human-robot system in Simulink in order to analyze the relationship between the parameters in robot rehabilitation therapy and the patients' movement functional abilities. We compared the shoulder and elbow angles calculated by simulation with the angles recorded by a motion capture system while healthy subjects completed the simulated action. Results showed a remarkable correlation between the simulation data and the experiment data, which verified the validity of the global human-robot Simulink model. Besides, the relationship between the circle radius in the drawing tasks in robot rehabilitation training and the active movement degrees of the shoulder as well as the elbow was fitted by a linear function, which also had a remarkable fitting coefficient. The fitted linear function can serve as a quantitative reference for robot rehabilitation training parameters.

  20. Region of eye contact of humanoid Nao robot is similar to that of a human

    NARCIS (Netherlands)

    Cuijpers, R.H.; Pol, van der D.; Herrmann, G.; Pearson, M.J.; Lenz, A.; Bremner, P.; Spiers, A.; Leonards, U.

    2013-01-01

    Eye contact is an important social cue in human-human interaction, but it is unclear how easily it carries over to humanoid robots. In this study we investigated whether the tolerance of making eye contact is similar for the Nao robot as compared to human lookers. We measured the region of eye

  1. Modeling Mixed Groups of Humans and Robots with Reflexive Game Theory

    Science.gov (United States)

    Tarasenko, Sergey

    The Reflexive Game Theory is based on decision-making principles similar to the ones used by humans. This theory considers groups of subjects and allows one to predict which action from the set each subject in the group will choose. It is possible to influence a subject's decision in such a way that he will make a particular choice. The purpose of this study is to illustrate how robots can deter humans from risky actions. To determine the risky actions, Asimov's Three Laws of robotics are employed. By fusing the RGT's power to convince humans on the mental level with the safety of Asimov's Laws, we illustrate how robots in mixed groups of humans and robots can influence human subjects in order to deter them from risky actions. We suggest that this fusion has the potential to produce robots that move and look like humans and are equipped with human-like decision-making algorithms.

  2. Effect of a human-type communication robot on cognitive function in elderly women living alone.

    Science.gov (United States)

    Tanaka, Masaaki; Ishii, Akira; Yamano, Emi; Ogikubo, Hiroki; Okazaki, Masatsugu; Kamimura, Kazuro; Konishi, Yasuharu; Emoto, Shigeru; Watanabe, Yasuyoshi

    2012-09-01

    Considering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone. In this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing. The Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group. This study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.

  3. Action and language integration: from humans to cognitive robots.

    Science.gov (United States)

    Borghi, Anna M; Cangelosi, Angelo

    2014-07-01

    The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means of helping researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.

  4. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able to coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment, where humans potentially want to interact with it. The designed system consists...... system that uses a potential field to derive motion that respects the person's social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person...... wishes to interact, and that the system is capable of adapting to changing behaviours of the humans in the environment....
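    The potential-field idea in this record, motion that respects a person's social zones and perceived interest in interaction, can be sketched as a gradient step whose minimum lies at the edge of an assumed personal zone. The zone radius, gain, and step size below are illustrative values, not the paper's:

    ```python
    import numpy as np

    def step(robot, person, interested, d0=1.2, k=0.8, dt=0.1):
        """One gradient step on a potential whose minimum lies on a circle
        of radius d0 (edge of the personal zone) around the person."""
        diff = person - robot
        d = np.linalg.norm(diff)
        if not interested or d < 1e-9:
            return robot  # no approach toward a person avoiding interaction
        u = diff / d
        # Attractive beyond d0, repulsive inside it: the robot settles
        # at a socially comfortable distance rather than on the person.
        return robot + k * (d - d0) * u * dt

    robot = np.array([0.0, 0.0])
    person = np.array([4.0, 3.0])
    for _ in range(500):
        robot = step(robot, person, interested=True)
    print(round(float(np.linalg.norm(person - robot)), 2))  # settles near 1.2 m
    ```

    In the actual system the `interested` flag would come from the learned estimate of the person's wish to interact; here it is a hard-coded stand-in.
    
    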

  5. INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2005-04-01

    This report contains a detailed description of the work conducted in the first year of the project on Integrated Robot-Human Control in Mining Operations at the University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The kinematics-reconfiguration-type differential control of the excavator, implemented with a variety of "software machine kinematics", is the key feature of the project. This software-reconfigured excavator is better suited to executing a given digging task. The human operator retains master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations. The operator can make adjustments to this pattern in real time, as needed, to accommodate rapidly-changing environmental conditions. A Bobcat® 435 excavator was retrofitted with electro-hydraulic control valve elements. The modular electronic control was tested and the basic valve characteristics were measured for each valve at the Robotics Laboratory at UNR. Position sensors were added to the individual joint control actuators, and the sensors were calibrated. An electronic central control system consisting of a portable computer, converters and electronic driver components was interfaced to the electro-hydraulic valves and position sensors. The machine is operational with or without the computer control system, depending on whether the computer interface is on or off. In preparation for emulated mining task tests, typical, repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's "Lone Tree" mine in Nevada.

  6. Reflex control of robotic gait using human walking data.

    Directory of Open Access Journals (Sweden)

    Catherine A Macleod

    Full Text Available Control of human walking is not thoroughly understood, which has implications in developing suitable strategies for the retraining of a functional gait following neurological injuries such as spinal cord injury (SCI). Bipedal robots allow us to investigate simple elements of the complex nervous system to quantify their contribution to motor control. RunBot is a bipedal robot which operates through reflexes without using central pattern generators or trajectory planning algorithms. Ground contact information from the feet is used to activate motors in the legs, generating a gait cycle visually similar to that of humans. Rather than developing a more complicated biologically realistic neural system to control the robot's stepping, we have instead further simplified our model by measuring the correlation between heel contact and leg muscle activity (EMG) in human subjects during walking and from this data created filter functions transferring the sensory data into motor actions. Adaptive filtering was used to identify the unknown transfer functions which translate the contact information into muscle activation signals. Our results show a causal relationship between ground contact information from the heel and EMG, which allows us to create a minimal, linear, analogue control system for controlling walking. The derived transfer functions were applied to RunBot II as a proof of concept. The gait cycle produced was stable and controlled, which is a positive indication that the transfer functions have potential for use in the control of assistive devices for the retraining of an efficient and effective gait with potential applications in SCI rehabilitation.
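    The adaptive-filtering step described here, identifying an unknown transfer function from heel contact to muscle activation, is commonly done with an LMS update. The following sketch uses synthetic signals and an assumed FIR filter shape in place of the paper's measured human data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_taps, n_samples = 8, 5000
    # Assumed "true" FIR response from heel strike to muscle activation.
    h_true = np.array([0.0, 0.5, 1.0, 0.7, 0.4, 0.2, 0.1, 0.05])

    contact = (rng.random(n_samples) < 0.1).astype(float)  # sparse heel strikes
    emg = np.convolve(contact, h_true)[:n_samples] \
          + 0.01 * rng.standard_normal(n_samples)          # noisy "EMG"

    w = np.zeros(n_taps)   # adaptive filter weights
    mu = 0.05              # LMS step size
    x = np.zeros(n_taps)   # delay line of recent contact samples
    for n in range(n_samples):
        x = np.roll(x, 1)
        x[0] = contact[n]
        e = emg[n] - w @ x  # prediction error
        w += mu * e * x     # LMS weight update

    print(np.round(w, 2))   # recovered weights approach h_true
    ```

    After enough samples, the learned weights `w` approximate the assumed response `h_true`; in the paper's setting, the identified filter plays the role of the contact-to-EMG transfer function applied to RunBot II.
    
    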

  7. Contrasting Web Robot and Human Behaviors with Network Models

    OpenAIRE

    Brown, Kyle; Doran, Derek

    2018-01-01

    The web graph is a commonly-used network representation of the hyperlink structure of a website. A network of similar structure to the web graph, which we call the session graph, has properties that reflect the browsing habits of the agents in the web server logs. In this paper, we apply session graphs to compare the activity of humans against web robots or crawlers. Understanding these properties will enable us to improve models of HTTP traffic, which can be used to predict and generate reali...

  8. Mini AERCam Inspection Robot for Human Space Missions

    Science.gov (United States)

    Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.

    2004-01-01

    The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.

  9. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

    DEFF Research Database (Denmark)

    Liu, Hin-Yan; Zawieska, Karolina

    2017-01-01

    impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies. The relationship between responsibilities of the agent and the rights of the patient suggest that a rights regime is the other side...... to act responsibly. This subsists within a larger phenomenon where the difference between humans and non-humans, be it animals or artificial systems, appears to be increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics...

  10. Becoming Earth Independent: Human-Automation-Robotics Integration Challenges for Future Space Exploration

    Science.gov (United States)

    Marquez, Jessica J.

    2016-01-01

    Future exploration missions will require NASA to integrate more automation and robotics in order to accomplish mission objectives. This presentation will describe the future challenges facing the human operator (astronaut, ground controllers) as we increase the amount of automation and robotics in spaceflight operations. It will describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. This presentation will outline future human-automation-robotic integration challenges.

  11. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    Science.gov (United States)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  12. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  13. Exploring cultural factors in human-robot interaction : A matter of personality?

    NARCIS (Netherlands)

    Weiss, Astrid; Evers, Vanessa

    2011-01-01

    This paper proposes an experimental study to investigate the task dependence and cultural-background dependence of personality trait attribution to humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits towards intelligent

  14. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    Science.gov (United States)

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  15. Moving NASA Beyond Low Earth Orbit: Future Human-Automation-Robotic Integration Challenges

    Science.gov (United States)

    Marquez, Jessica

    2016-01-01

    This presentation will provide an overview of current human spaceflight operations. It will also describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. Additionally, there are many implications regarding advanced automation and robotics, and this presentation will outline future human-automation-robotic integration challenges.

  16. INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2006-04-01

    This report describes the results of the 2nd year of a research project on the implementation of a novel human-robot control system for hydraulic machinery. Sensor and valve re-calibration experiments were conducted to improve open-loop machine control. A Cartesian control example was tested both in simulation and on the machine; the results are discussed in detail. The machine tests included open-loop as well as closed-loop motion control. Both methods worked reasonably well, due to the high-quality electro-hydraulic valves used on the experimental machine. Experiments on 3-D analysis of the bucket trajectory using marker tracking software are also presented with the results obtained. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the machine. A novel, closed-loop control adjustment provides a remedy, while retaining most of the advantages of the open-loop control based on kinematics transformation. Additional analysis of previously recorded, three-dimensional working trajectories of the bucket of large mine shovels was completed. The motion patterns, when transformed into a family of curves, serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system.

  17. Robotic surgical education: a collaborative approach to training postgraduate urologists and endourology fellows.

    Science.gov (United States)

    Mirheydar, Hossein; Jones, Marklyn; Koeneman, Kenneth S; Sweet, Robert M

    2009-01-01

    Currently, robotic training for inexperienced, practicing surgeons is primarily done vis-à-vis industry and/or society-sponsored day or weekend courses, with limited proctorship opportunities. The objective of this study was to assess the impact of an extended-proctorship program at up to 32 months of follow-up. An extended-proctorship program for robotic-assisted laparoscopic radical prostatectomy was established at our institution. The curriculum consisted of 3 phases: (1) completing an Intuitive Surgical 2-day robotic training course with company representatives; (2) serving as assistant to a trained proctor on 5 to 6 cases; and (3) performing proctored cases up to 1 year until confidence was achieved. Participants were surveyed and asked to evaluate on a 5-point Likert scale their operative experience in robotics and satisfaction regarding their training. Nine of 9 participants are currently performing robotic-assisted laparoscopic radical prostatectomy (RALP) independently. Graduates of our program have performed 477 RALP cases. The mean number of cases performed within phase 3 was 20.1 (range, 5 to 40) prior to independent practice. The program received a rating of 4.2/5 for effectiveness in teaching robotic surgery skills. Our robotic program, with extended proctoring, has led to an outstanding take-rate for disseminating robotic skills in a metropolitan community.

  18. Strategies for human-driven robot comprehension of spatial descriptions by older adults in a robot fetch task.

    Science.gov (United States)

    Carlson, Laura; Skubic, Marjorie; Miller, Jared; Huo, Zhiyu; Alexenko, Tatiana

    2014-07-01

    This contribution presents a corpus of spatial descriptions and describes the development of a human-driven spatial language robot system for their comprehension. The domain of application is an eldercare setting in which an assistive robot is asked to "fetch" an object for an elderly resident based on a natural language spatial description given by the resident. In Part One, we describe a corpus of naturally occurring descriptions elicited from a group of older adults within a virtual 3D home that simulates the eldercare setting. We contrast descriptions elicited when participants offered descriptions to a human versus robot avatar, and under instructions to tell the addressee how to find the target versus where the target is. We summarize the key features of the spatial descriptions, including their dynamic versus static nature and the perspective adopted by the speaker. In Part Two, we discuss critical cognitive and perceptual processing capabilities necessary for the robot to establish a common ground with the human user and perform the "fetch" task. Based on the collected corpus, we focus here on resolving the perspective ambiguity and recognizing furniture items used as landmarks in the descriptions. Taken together, the work presented here offers the key building blocks of a robust system that takes as input natural spatial language descriptions and produces commands that drive the robot to successfully fetch objects within our eldercare scenario. Copyright © 2014 Cognitive Science Society, Inc.

  19. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  20. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
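The study's spectral measures rest on estimating power in the mu, beta and low gamma bands. The authors' pipeline (independent component clustering, event-related spectral perturbations) is far more involved; the snippet below only illustrates the underlying band-power computation on a synthetic signal, with a plain periodogram as a hypothetical stand-in for their spectral estimator.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Average periodogram power of signal x (sampled at fs Hz)
    within the frequency band [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (len(x) * fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()
```

For EEG, one would compare such band estimates (e.g., mu: 8-12 Hz, beta: 13-30 Hz) across walking conditions; suppression of mu/beta power is the marker of sensorimotor involvement discussed in the abstract.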

  1. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    Science.gov (United States)

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  2. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2017-04-01

Full Text Available Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  3. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with humans, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.

  4. Modeling and Simulation for Exploring Human-Robot Team Interaction Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dudenhoeffer, Donald Dean; Bruemmer, David Jonathon; Davis, Midge Lee

    2001-12-01

Small-sized and micro-robots will soon be available for deployment in large-scale forces. Consequently, the ability of a human operator to coordinate and interact with large-scale robotic forces is of great interest. This paper describes the ways in which modeling and simulation have been used to explore new possibilities for human-robot interaction. The paper also discusses how these explorations have fed implementation of a unified set of command and control concepts for robotic force deployment. Modeling and simulation can play a major role in fielding robot teams in actual missions. While live testing is preferred, limitations in terms of technology, cost, and time often prohibit extensive experimentation with physical multi-robot systems. Simulation provides insight, focuses efforts, eliminates large areas of the possible solution space, and increases the quality of actual testing.

  5. Enhancing the effectiveness of human-robot teaming with a closed-loop system.

    Science.gov (United States)

    Teo, Grace; Reinerman-Jones, Lauren; Matthews, Gerald; Szalma, James; Jentsch, Florian; Hancock, Peter

    2018-02-01

With technological developments in robotics and their increasing deployment, human-robot teams are set to be a mainstay in the future. To develop robots that possess teaming capabilities, such as being able to communicate implicitly, the present study implemented a closed-loop system. This system enabled the robot to provide adaptive aid without the need for explicit commands from the human teammate, through the use of multiple physiological workload measures. Such measures of workload vary in sensitivity, and there is large inter-individual variability in physiological responses to imposed taskload. Workload models enacted via a closed-loop system should accommodate such individual variability. The present research investigated the effects of the adaptive robot aid vs. imposed aid on performance and workload. Results showed that adaptive robot aid driven by an individualized workload model for physiological response resulted in greater improvements in performance compared to aid that was simply imposed by the system. Copyright © 2017 Elsevier Ltd. All rights reserved.
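An individualized workload model of the kind described can be sketched as per-operator normalization: each physiological channel is z-scored against that operator's own baseline before the aid trigger is evaluated, so inter-individual differences in raw response do not bias the trigger. The class, channel count, and threshold below are hypothetical; the study's actual measures and model are not specified here.

```python
import numpy as np

class IndividualWorkloadModel:
    """Minimal closed-loop trigger for adaptive aid, normalized
    to one operator's own baseline recordings."""

    def __init__(self, baseline, threshold=1.0):
        # baseline: (n_samples, n_channels) array recorded at rest
        self.mean = baseline.mean(axis=0)          # per-channel baseline mean
        self.std = baseline.std(axis=0) + 1e-9     # per-channel baseline spread
        self.threshold = threshold

    def workload(self, sample):
        """Composite workload index: mean z-score across channels."""
        z = (np.asarray(sample) - self.mean) / self.std
        return z.mean()

    def should_aid(self, sample):
        """Trigger adaptive aid when workload exceeds the threshold."""
        return self.workload(sample) > self.threshold
```

Because the z-scoring is relative to the individual baseline, the same threshold can be reused across operators with very different raw physiological ranges.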

  6. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    Science.gov (United States)

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback (reward) for RL. Initially we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
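A minimal sketch of the approach: tabular RL that learns a gesture-to-action mapping from binary implicit feedback, with the ErrP classifier simulated at roughly the reported 90% accuracy. The identity ground-truth mapping, episode count, and exploration settings are illustrative assumptions, not the authors' setup.

```python
import random

def learn_gesture_mapping(n_gestures=4, detector_acc=0.9,
                          episodes=2000, eps=0.2, seed=1):
    """Learn which action belongs to each gesture from noisy binary
    rewards, as a (simulated) ErrP detector would supply them."""
    rng = random.Random(seed)
    true_map = {g: g for g in range(n_gestures)}   # hypothetical ground truth
    Q = [[0.0] * n_gestures for _ in range(n_gestures)]
    n = [[0] * n_gestures for _ in range(n_gestures)]
    for _ in range(episodes):
        g = rng.randrange(n_gestures)              # human performs a gesture
        if rng.random() < eps:                     # epsilon-greedy exploration
            a = rng.randrange(n_gestures)
        else:
            a = max(range(n_gestures), key=lambda k: Q[g][k])
        correct = (a == true_map[g])
        # implicit reward, flipped with prob 1 - detector_acc (ErrP misses)
        observed = correct if rng.random() < detector_acc else not correct
        r = 1.0 if observed else -1.0
        n[g][a] += 1
        Q[g][a] += (r - Q[g][a]) / n[g][a]         # sample-average update
    return {g: max(range(n_gestures), key=lambda k: Q[g][k])
            for g in range(n_gestures)}
```

Even with 10% of the feedback flipped, the sample-average values separate the correct action (expected reward about +0.8) from the wrong ones (about -0.8), so the mapping is recovered without any explicit reward signal.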

  7. A journey from robot to digital human mathematical principles and applications with MATLAB programming

    CERN Document Server

    Gu, Edward Y L

    2013-01-01

    This book provides readers with a solid set of diversified and essential tools for the theoretical modeling and control of complex robotic systems, as well as for digital human modeling and realistic motion generation. Following a comprehensive introduction to the fundamentals of robotic kinematics, dynamics and control systems design, the author extends robotic modeling procedures and motion algorithms to a much higher-dimensional, larger scale and more sophisticated research area, namely digital human modeling. Most of the methods are illustrated by MATLAB™ codes and sample graphical visualizations, offering a unique closed loop between conceptual understanding and visualization. Readers are guided through practicing and creating 3D graphics for robot arms as well as digital human models in MATLAB™, and through driving them for real-time animation. This work is intended to serve as a robotics textbook with an extension to digital human modeling for senior undergraduate and graduate engineering students....

  8. Mars - The relationship of robotic and human elements in the IAA International Exploration of Mars study

    Science.gov (United States)

    Marov, Mikhail YA.; Duke, Michael B.

    1993-01-01

    The roles of human and robotic missions in Mars exploration are defined in the context of the short- and long-term Mars programs. In particular, it is noted that the currently implemented and planned missions to Mars can be regarded as robotic precursor missions to human exploration. Attention is given to factors that must be considered in formulating the rationale for human flights to Mars and future human Mars settlements and justifying costly projects.

  9. Training Humanities Doctoral Students in Collaborative and Digital Multimedia

    Science.gov (United States)

    Ensslin, Astrid; Slocombe, Will

    2012-01-01

    This study reports on the pedagogic rationale, didactic design and implications of an AHRC-funded doctoral training scheme in collaborative and digital multimedia in the humanities. In the second part of this article we discuss three areas of provision that were identified as particularly significant and/or controversial. These include (1) desktop…

  10. Collaboration in the Humanities, Arts and Social Sciences in Australia

    Science.gov (United States)

    Haddow, Gaby; Xia, Jianhong; Willson, Michele

    2017-01-01

    This paper reports on the first large-scale quantitative investigation into collaboration, demonstrated in co-authorship, by Australian humanities, arts and social sciences (HASS) researchers. Web of Science data were extracted for Australian HASS publications, with a focus on the softer social sciences, over the period 2004-2013. The findings…

  11. Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.

    Science.gov (United States)

    Borenstein, Jason; Arkin, Ron

    2016-02-01

    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way to influence human behavior is already well-established but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.

  12. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Science.gov (United States)

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-07-21

    The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  13. Compliance control based on PSO algorithm to improve the feeling during physical human-robot interaction.

    Science.gov (United States)

    Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei

    2016-01-01

Robots play more important roles in daily life and bring us a lot of convenience. But when people work with robots, significant differences remain between human-human and human-robot interaction. Our goal is to make robots behave even more human-like. We designed a controller that can sense the force acting on any point of a robot and ensure the robot moves according to that force. First, a spring-mass-dashpot system was used to describe the physical model; this second-order system is the kernel of the controller. From it, we establish the state-space equations of the system. The particle swarm optimization algorithm was then used to obtain the system parameters. To test the stability of the system, the root-locus diagram is shown in the paper. Finally, experiments were carried out on the robotic spinal surgery system developed by our team, and the results show that the new controller performs better during physical human-robot interaction.
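The controller described above combines a spring-mass-dashpot (admittance) model with a PSO search for its parameters. The sketch below shows both pieces in miniature; the Euler discretization, default parameters, and swarm settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def admittance_step(x, v, f_ext, dt, m=2.0, b=8.0, k=0.0):
    """One step of the spring-mass-dashpot model m*x'' + b*x' + k*x = f_ext:
    the commanded position follows the trajectory a virtual mass would
    take under the sensed interaction force."""
    a = (f_ext - b * v - k * x) / m
    v = v + a * dt
    return x + v * dt, v

def pso_minimize(cost, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer, the kind of search used
    to pick the controller parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, (n, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([cost(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        gbest = pbest[pcost.argmin()]
    return gbest
```

In practice the PSO cost would score how closely the simulated admittance response matches a desired interaction feel; here the optimizer is exercised on a simple surrogate cost.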

  14. Robotics

    Indian Academy of Sciences (India)

magnetic induction to detect an object. The development of ... end effector, inclination of object, magnetic and electric fields, etc. The sensors described ... In the case of a robot, the various actuators and motors have to be modelled. The major ...

  15. US Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance 2014 Capstone Experiment

    Science.gov (United States)

    2016-07-01

Fig. 31: (left) The experimental setup for terrain classification using PreSRS on the Hopper. (right) A computer-aided design schematic of the... Hopper with PreSRS attached to the bottom of the robot foot. Fig. 32: Plots of terrain classification accuracy vs. sensor... 2.1 Robotics CTA: The RCTA is a fundamental research program that began in 2010 and enables Government, industrial, and academic institutions to

  16. Project based, Collaborative, Algorithmic Robotics for High School Students: Programming Self Driving Race Cars at MIT

    Science.gov (United States)

    2017-02-19

new high-school STEM program in robotics. The program utilizes state-of-the-art sensors and embedded computers for mobile robotics. These...software. Students do not engage in hardware design or development. They are given a hardware kit that includes state-of-the-art sensors and... Engineering and Computer Science (under course number 6.141) and the Department of Aeronautics and Astronautics (under course number 16.405). Let us

  17. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and the

  18. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    Directory of Open Access Journals (Sweden)

    Shanee Honig

    2018-06-01

Full Text Available While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a

  19. Vocal Interactivity in-and-between Humans, Animals and Robots

    Directory of Open Access Journals (Sweden)

    Roger K Moore

    2016-10-01

Full Text Available Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: detecting predators and prey, marking territory, expressing emotions, establishing social relations and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalisation provides a valuable communication channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction. Opportunities for cross-fertilisation between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyse different habitats or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an inter-disciplinary analysis.

  20. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    Science.gov (United States)

    2009-03-01

distributed robots. Proceedings of the Computer Supported Cooperative Work Conference’02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and...humanoid robots. Proceedings of HRI’06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita

  1. Human Exploration using Real-Time Robotic Operations (HERRO): A space exploration strategy for the 21st century

    Science.gov (United States)

    Schmidt, George R.; Landis, Geoffrey A.; Oleson, Steven R.

    2012-11-01

    This paper presents an exploration strategy for human missions beyond Low Earth Orbit (LEO) and the Moon that combines the best features of human and robotic spaceflight. This "Human Exploration using Real-time Robotic Operations" (HERRO) strategy refrains from placing humans on the surfaces of the Moon and Mars in the near-term. Rather, it focuses on sending piloted spacecraft and crews into orbit around Mars and other exploration targets of interest, and conducting astronaut exploration of the surfaces using telerobots and remotely-controlled systems. By eliminating the significant communications delay or "latency" with Earth due to the speed of light limit, teleoperation provides scientists real-time control of rovers and other sophisticated instruments. This in effect gives them a "virtual presence" on planetary surfaces, and thus expands the scientific return at these destinations. HERRO mitigates several of the major issues that have hindered the progress of human spaceflight beyond Low Earth Orbit (LEO) by: (1) broadening the range of destinations for near-term human missions; (2) reducing cost and risk through less complexity and fewer man-rated elements; (3) offering benefits of human-equivalent in-situ cognition, decision-making and field-work on planetary bodies; (4) providing a simpler approach to returning samples from Mars and planetary surfaces; and (5) facilitating opportunities for international collaboration through contribution of diverse robotic systems. HERRO provides a firm justification for human spaceflight—one that expands the near-term capabilities of scientific exploration while providing the space transportation infrastructure needed for eventual human landings in the future.

  2. Human-robot interaction tests on a novel robot for gait assistance.

    Science.gov (United States)

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot that assists hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on evaluating intrinsic backdrivability and assistance capability. Tests were conducted on a young healthy subject. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, owing to the robot's design features (reduced swinging masses, low intrinsic mechanical impedance, and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic state.

  3. Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed

    Directory of Open Access Journals (Sweden)

    Benitez Raul

    2007-03-01

    Full Text Available Abstract Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion The assist

  4. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    Science.gov (United States)

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a
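
    The record above describes the optimal trainer as an error-based controller with a forgetting factor and an error-weighting function that ignores natural movement variability. A minimal sketch of that update rule follows; the function name and all parameter values (forget, gain, deadband) are illustrative assumptions, not the authors' published values.

    ```python
    def assist_as_needed(error, force_prev, forget=0.8, gain=2.0, deadband=0.02):
        """One update of an error-based trainer with a forgetting factor.

        error      : current kinematic tracking error (e.g. rad)
        force_prev : previous robot assistance (e.g. Nm)
        forget     : factor < 1 that decays assistance when errors stay small
        gain       : error feedback gain
        deadband   : errors below this (natural variability) are disregarded
        """
        # Error weighting: treat normal movement variability as zero error
        weighted = 0.0 if abs(error) < deadband else error
        # Forgetting-factor update: assistance decays unless errors persist
        return forget * force_prev + gain * weighted

    # Assistance fades toward zero when the learner's errors remain small,
    # because only the forget * force_prev term survives each update
    f = 5.0
    for _ in range(20):
        f = assist_as_needed(error=0.01, force_prev=f)
    print(round(f, 4))  # prints: 0.0576
    ```

    The key property claimed in the abstract is visible here: for the robot to "assist as needed," the decay rate (1 - forget) must outpace the human's learning rate, so that assistance relaxes faster than errors reappear.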

  5. Human-friendly robotic manipulators: safety and performance issues in controller design

    NARCIS (Netherlands)

    Tadele, T.S.

    2014-01-01

    Recent advances in robotics have spurred its adoption into new application areas such as medical, rescue, transportation, logistics, personal care and entertainment. In the personal care domain, robots are expected to operate in human-present environments and provide non-critical assistance.

  6. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Czech Academy of Sciences Publication Activity Database

    Alexandrov, A.V.; Lippi, V.; Mergner, T.; Frolov, A. A.; Hettich, G.; Húsek, Dušan

    2017-01-01

    Vol. 11, 25 April (2017), article no. 22. ISSN 1662-5188. Institutional support: RVO:67985807. Keywords: human sensorimotor system * neuromechanics * biorobotics * motor control * eigenmovements. Subject RIV: JD - Computer Applications, Robotics. OECD field: Robotics and automatic control. Impact factor: 1.821, year: 2016

  7. Children Perseverate to a Human's Actions but Not to a Robot's Actions

    Science.gov (United States)

    Moriguchi, Yusuke; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2010-01-01

    Previous research has shown that young children commit perseverative errors from their observation of another person's actions. The present study examined how social observation would lead children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one…

  8. Using Language Games as a Way to Investigate Interactional Engagement in Human-Robot Interaction

    DEFF Research Database (Denmark)

    Jensen, L. C.

    2016-01-01

    how students' engagement with a social robot can be systematically investigated and evaluated. For this purpose, I present a small user study in which a robot plays a word formation game with a human, in which engagement is determined by means of an analysis of the 'language games' played...

  9. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanovic, Selma; Francisco, Matthew

    2016-01-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the…

  10. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  11. Impressions of Humanness for Android Robot May Represent an Endophenotype for Autism Spectrum Disorders

    Science.gov (United States)

    Kumazaki, Hirokazu; Warren, Zachary; Swanson, Amy; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Ishiguro, Hiroshi; Sarkar, Nilanjan; Minabe, Yoshio; Kikuchi, Mitsuru

    2018-01-01

    Identification of meaningful endophenotypes may be critical to unraveling the etiology and pathophysiology of autism spectrum disorders (ASD). We investigated whether impressions of "humanness" for android robot might represent a candidate characteristic of an ASD endophenotype. We used a female type of android robot with an appearance…

  12. Making planned paths look more human-like in humanoid robot manipulation planning

    DEFF Research Database (Denmark)

    Zacharias, F.; Schlette, C.; Schmidt, F.

    2011-01-01

    It contradicts the human's expectations when humanoid robots move awkwardly during manipulation tasks. The unnatural motion may be caused by awkward start or goal configurations or by probabilistic path planning processes that are often used. This paper shows that the choice of an arm's target...... for the robot arm....

  13. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was speech recognition. Interestingly, the participants used only a subset of the available information, focusing on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests; e.g., they tended to respond verbally to verbal requests. They also added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm into multimodal grammars of human-robot interaction improves the robustness and ease-of-use of these interactions, and therefore provides a smoother user experience.

  14. Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

    OpenAIRE

    Renner, Patrick; Pfeiffer, Thies

    2014-01-01

    For robots to solve complex tasks cooperatively in close interaction with humans, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task, and thus 3D gaze directions and hand movements are of particular in...

  15. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  16. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  17. Framework for Human-Automation Collaboration: Conclusions from Four Studies

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); O' Hara, John [Brookhaven National Lab. (BNL), Upton, NY (United States); Joe, Jeffrey C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Whaley, April M. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Medema, Heather [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-11-01

    The Human Automation Collaboration (HAC) research project is investigating how advanced technologies that are planned for Advanced Small Modular Reactors (AdvSMR) will affect the performance and reliability of the plant from a human factors and human performance perspective. The HAC research effort investigates the consequences of allocating functions between the operators and automated systems. More specifically, the research team is addressing how best to design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. Oxstrand et al. (2013 - March) describes the efforts conducted by the researchers to identify the research needs for HAC. The research team reviewed the literature on HAC, developed a model of HAC, and identified gaps in the existing knowledge of human-automation collaboration. As described in Oxstrand et al. (2013 - June), the team then prioritized the identified research topics based on the specific needs in the context of AdvSMR. The prioritization was based on two sources of input: 1) the preliminary functions and tasks, and 2) the model of HAC. As a result, three analytical studies were planned and conducted: 1) Models of Teamwork, 2) Standardized HAC Performance Measurement Battery, and 3) Initiators and Triggering Conditions for Adaptive Automation. Additionally, one field study was conducted at Idaho Falls Power.

  18. Android Robotics and the Conceptualization of Human Beings

    DEFF Research Database (Denmark)

    Nørskov, Marco; Platz, Anemone

    Japan has for decades been a first mover and pacemaker with respect to the development of humanoid and android robots [1]. In this conceptual paper, we aim to demonstrate how certain android robotics projects can be embedded and interpreted within a Japanese notion of nature, where the artificial... is not opposed to nature and where conventionalized idealizations in general are cherished over the original state of the latter [2]. Furthermore, we will discuss how android robots epitomize challenges to the macro and micro levels of society. [1] J. Robertson, Robo Sapiens Japanicus: Robots, Gender, Family...

  19. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. The cloud provides virtually unlimited computation power, memory and storage, and in particular opportunities for collaboration. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotics platforms and standalone and networked robotic work on tasks such as grasping, simultaneous localization and mapping (SLAM) and monitoring.

  20. Evolving robot empathy towards humans with motor disabilities through artificial pain generation

    Directory of Open Access Journals (Sweden)

    Muh Anshar

    2018-01-01

    Full Text Available In contact assistive robots, prolonged physical engagement between robots and humans with motor disabilities, due for instance to shoulder injuries, may at times lead humans to experience pain. In this situation, robots will require sophisticated capabilities, such as the ability to recognize human pain in advance and to generate counter-responses as follow-up empathic actions. Hence, it is important for robots to acquire an appropriate pain concept that allows them to develop these capabilities. This paper conceptualizes empathy generation through the realization of synthetic pain classes integrated into a robot's self-awareness framework, with fault detection on the robot body serving as the primary source of pain activation. Projection of human shoulder motion into the robot arm motion acts as a fusion process, which is used as a medium to gather information for analysis and then to generate corresponding synthetic pain and empathic responses. An experiment is designed to mirror a human peer's shoulder motion into an observer robot. The results demonstrate that the fusion takes place accurately whenever unified internal states are achieved, allowing accurate classification of synthetic pain categories and generation of empathic responses in a timely fashion. Future work will consider the development of a pain activation mechanism.

  1. A Meta-Analysis of Factors Influencing the Development of Human-Robot Trust

    Science.gov (United States)

    2011-12-01

    culture accounts for significant differences in trust ratings for robots; some collectivist cultures have higher trust ratings than individualistic... HRI Occurs Other potential factors impacting trust in HRI are directly related to the environment in which HRI occurs. For example, the cultural... cultures (Li et al., 2010). Our SMEs also indicated that team collaboration issues (e.g., communication, shared mental models) and tasking

  2. Integrated Robot-Human Control in Mining Operations

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2007-09-30

    This report contains a detailed description of the work conducted for the project on Integrated Robot-Human Control in Mining Operations at the University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The key feature of the project is kinematics-reconfiguration-type differential control of the excavator, implemented with a variety of 'software machine kinematics'. This software-reconfigured excavator is better suited to executing a given digging task. The human operator retains master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations, and the operator can adjust the pattern in real time, as needed, to accommodate rapidly changing environmental conditions. A working prototype has been developed using a Bobcat 435 excavator. The machine is operational with or without the computer control system, depending on whether the computer interface is on or off. In preparation for emulated mining-task tests, typical repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's 'Lone Tree' mine in Nevada. Analysis of these working trajectories has been completed. The motion patterns, when transformed into a family of curves, may serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system. A Cartesian control example has been developed and tested both in simulation and on the experimental excavator. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the
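
    The Cartesian control described in this record, where the computer coordinates joint motions so the operator can command bucket motion directly, can be illustrated with open-loop resolved-rate control of a planar two-link boom/stick linkage. This is only a sketch of the general technique, not the project's actual implementation; the link lengths, pose, and function names are illustrative assumptions.

    ```python
    import math

    def jacobian_2link(q1, q2, l1=3.0, l2=2.0):
        """Jacobian of a planar boom/stick linkage (link lengths in metres, illustrative)."""
        s1, c1 = math.sin(q1), math.cos(q1)
        s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
        return [[-l1 * s1 - l2 * s12, -l2 * s12],
                [ l1 * c1 + l2 * c12,  l2 * c12]]

    def joint_rates(J, vx, vy):
        """Open-loop resolved-rate control: invert the 2x2 Jacobian to map a
        desired Cartesian bucket velocity to joint rates (no position feedback,
        so errors accumulate as drift, matching the open-loop behavior above)."""
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        return ((J[1][1] * vx - J[0][1] * vy) / det,
                (-J[1][0] * vx + J[0][0] * vy) / det)

    # Command a purely horizontal bucket motion from a bent-arm pose
    J = jacobian_2link(q1=math.radians(30), q2=math.radians(-60))
    dq1, dq2 = joint_rates(J, vx=0.5, vy=0.0)
    ```

    Multiplying the Jacobian back onto the computed joint rates recovers the commanded (0.5, 0.0) m/s bucket velocity, which is the coordination the operator no longer has to perform by hand.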

  3. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  4. Enabling Effective Human-Robot Interaction Using Perspective-Taking in Robots

    National Research Council Canada - National Science Library

    Trafton, J. G; Cassimatis, Nicholas L; Bugajska, Magdalena D; Brock, Derek P; Mintz, Farilee E; Schultz, Alan C

    2005-01-01

    ...) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system...

  5. Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning.

    Science.gov (United States)

    Alonso-Martín, Fernando; Gamboa-Montero, Juan José; Castillo, José Carlos; Castro-González, Álvaro; Salichs, Miguel Ángel

    2017-05-16

    An important aspect in Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach, where the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection), and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective, since a single microphone is enough to cover each solid part of the robot, so just a few microphones can cover the whole shell. Besides, it is easy to install and configure, as it just requires attaching the microphone to a contact surface on the robot's shell and plugging it into the robot's computer. Results show high accuracy scores in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures.
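
    The pipeline this record describes, acoustic features extracted from contact-microphone audio fed to a classifier, can be sketched as follows. The paper uses Logistic Model Trees; here a toy nearest-centroid classifier stands in as a placeholder, and the features, class centroids, and signal values are all invented for illustration.

    ```python
    import math

    def touch_features(samples, rate=16000):
        """Crude features from a contact-microphone snippet:
        RMS energy, zero-crossing rate, and duration in seconds."""
        n = len(samples)
        rms = math.sqrt(sum(s * s for s in samples) / n)
        zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / n
        return (rms, zcr, n / rate)

    def nearest_centroid(feats, centroids):
        """Assign the touch class whose centroid is closest in feature space."""
        return min(centroids,
                   key=lambda c: sum((f - g) ** 2 for f, g in zip(feats, centroids[c])))

    # Toy centroids: a slap is short and loud, a stroke is long and soft
    centroids = {"slap": (0.9, 0.30, 0.05), "stroke": (0.1, 0.05, 0.80)}
    loud_short = [0.9 * (-1) ** i for i in range(800)]  # 0.05 s burst at 16 kHz
    print(nearest_centroid(touch_features(loud_short), centroids))  # prints: slap
    ```

    In the actual system these centroids would be replaced by a model trained on the labelled gesture dataset (25 participants, 1981 gestures), but the detection/classification split is the same: feature extraction per audio event, then a class decision.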

  6. Cooperation between humans and robots in fine assembly

    Science.gov (United States)

    Jalba, C. K.; Konold, P.; Rapp, I.; Mann, C.; Muminovic, A.

    2017-01-01

    The development of ever smaller components in manufacturing processes requires the handling, assembly and testing of similarly miniature components. The human eye meets its optical limits with the ongoing miniaturization of parts, since it cannot detect particles smaller than 0.11 mm or register distances below 0.07 mm, such as separating gaps. After several hours of labour, workers can no longer accurately differentiate colour nuances, and constant quality of work cannot be guaranteed. Assembly is usually done with tools such as microscopes, magnifiers or digital measuring devices. Due to the enormous mental concentration required, a fatigue process quickly sets in. This requires breaks or changes of task and reduces productivity. Dealing with handling devices such as grippers, guide units and actuators for component assembly requires a time-consuming training process; often an increase in productivity is first achieved after years of daily training. Miniaturization is needed everywhere, for instance in surgery, where very small add-on instruments must be provided. In measurement, for example, it is a technological must and a competitive advantage to determine the required data with the smallest possible, highest-resolution sensor. Solution: the realization of a flexible universal workstation, using standard robotic systems and image processing devices in cooperation with humans, where workers are largely freed from highly strenuous physical and fine motor work, so that they can do productive work monitoring and adjusting the machine-assisted production process.

  7. Collaboration by Design: Using Robotics to Foster Social Interaction in Kindergarten

    Science.gov (United States)

    Lee, Kenneth T. H.; Sullivan, Amanda; Bers, Marina U.

    2013-01-01

    Research shows the importance of social interaction between peers in child development. Although technology can foster peer interactions, teachers often struggle with teaching with technology. This study examined a sample of (n = 19) children participating in a kindergarten robotics summer workshop to determine the effect of teaching using a…

  8. How to Build a Robot: Collaborating to Strengthen STEM Programming in a Citywide System

    Science.gov (United States)

    Groome, Meghan; Rodríguez, Linda M.

    2014-01-01

    You have to stick with it. It takes time, patience, trial and error, failure, and persistence. It is almost never perfect or finished, but, with a good team, you can build something that works. These are the lessons youth learn when building a robot, as many do in the out-of-school time (OST) programs supported by the initiative described in this…

  9. Robots for Astrobiology!

    Science.gov (United States)

    Boston, Penelope J.

    2016-01-01

    The search for life and its study is known as astrobiology. Conducting that search on other planets in our Solar System is a major goal of NASA and other space agencies, and a driving passion of the community of scientists and engineers around the world. We practice for that search in many ways, from exploring and studying extreme environments on Earth, to developing robots to go to other planets and help us look for any possible life that may be there or may have been there in the past. The unique challenges of space exploration make collaborations between robots and humans essential. The products of those collaborations will be novel and driven by the features of wholly new environments. For space and planetary environments that are intolerable for humans or where humans present an unacceptable risk to possible biologically sensitive sites, autonomous robots or telepresence offer excellent choices. The search for life signs on Mars fits within this category, especially in advance of human landed missions there, but also as assistants and tools once humans reach the Red Planet. For planetary destinations where we do not envision humans ever going in person, like bitterly cold icy moons, or ocean worlds with thick ice roofs that essentially make them planetary-sized ice caves, we will rely on robots alone to visit those environments for us and enable us to explore and understand any life that we may find there. Current generation robots are not quite ready for some of the tasks that we need them to do, so there are many opportunities for roboticists of the future to advance novel types of mobility, autonomy, and bio-inspired robotic designs to help us accomplish our astrobiological goals. We see an exciting partnership between robotics and astrobiology continually strengthening as we jointly pursue the quest to find extraterrestrial life.

  10. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversibility

    Science.gov (United States)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.
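The abstract does not give the rule base, but the general approach of fuzzy terrain-traversability assessment can be sketched as follows. The membership functions, input names (`roughness`, `slope`) and rule consequents below are illustrative assumptions, not the authors' actual design:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def traversability(roughness, slope):
    """Toy Mamdani-style fuzzy assessment; inputs scaled to [0, 1]."""
    # Fuzzify the inputs (assumed membership functions).
    r_low, r_high = tri(roughness, -1, 0, 1), tri(roughness, 0, 1, 2)
    s_low, s_high = tri(slope, -1, 0, 1), tri(slope, 0, 1, 2)
    # Assumed rule base; AND is taken as min.
    rules = [
        (min(r_low,  s_low),  1.0),   # smooth & flat  -> highly traversable
        (min(r_low,  s_high), 0.4),   # smooth & steep -> marginal
        (min(r_high, s_low),  0.4),   # rough  & flat  -> marginal
        (min(r_high, s_high), 0.0),   # rough  & steep -> not traversable
    ]
    # Defuzzify by the weighted average of the rule consequents.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(traversability(0.1, 0.1))  # mostly smooth, flat terrain
print(traversability(0.9, 0.9))  # mostly rough, steep terrain
```

The human-embedded logic described in the abstract would replace the hand-set rules above with rules elicited from human judgments of terrain imagery.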

  11. Integrated Human-Robotic Missions to the Moon and Mars: Mission Operations Design Implications

    Science.gov (United States)

    Mishkin, Andrew; Lee, Young; Korth, David; LeBlanc, Troy

    2007-01-01

    For most of the history of space exploration, human and robotic programs have been independent, and have responded to distinct requirements. The NASA Vision for Space Exploration calls for the return of humans to the Moon, and the eventual human exploration of Mars; the complexity of this range of missions will require an unprecedented use of automation and robotics in support of human crews. The challenges of human Mars missions, including roundtrip communications time delays of 6 to 40 minutes, interplanetary transit times of many months, and the need to manage lifecycle costs, will require the evolution of a new mission operations paradigm far less dependent on real-time monitoring and response by an Earthbound operations team. Robotic systems and automation will augment human capability, increase human safety by providing means to perform many tasks without requiring immediate human presence, and enable the transfer of traditional mission control tasks from the ground to crews. Developing and validating the new paradigm and its associated infrastructure may place requirements on operations design for nearer-term lunar missions. The authors, representing both the human and robotic mission operations communities, assess human lunar and Mars mission challenges, and consider how human-robot operations may be integrated to enable efficient joint operations, with the eventual emergence of a unified exploration operations culture.

  12. Learning collaborative teamwork: an argument for incorporating the humanities.

    Science.gov (United States)

    Hall, Pippa; Brajtman, Susan; Weaver, Lynda; Grassau, Pamela Anne; Varpio, Lara

    2014-11-01

    A holistic, collaborative interprofessional team approach, which includes patients and families as significant decision-making members, has been proposed to address the increasing burden being placed on the health-care system. This project hypothesized that learning activities related to the humanities during clinical placements could enhance interprofessional teamwork. Through an interprofessional team of faculty, clinical staff, students, and patient representatives, we developed and piloted the self-learning module, "interprofessional education for collaborative person-centred practice through the humanities". The module was designed to provide learners from different professions and educational levels with a clinical placement/residency experience that would enable them, through a lens of the humanities, to better understand interprofessional collaborative person-centred care without structured interprofessional placement activities. Learners reported the self-paced and self-directed module to be a satisfactory learning experience in all four areas of care at our institution, and certain attitudes and knowledge were significantly and positively affected. The module's evaluation resulted in a revised edition providing improved structure and instruction for students with no experience in self-directed learning. The module was recently adapted into an interactive bilingual (French and English) online e-learning module to facilitate its integration into the pre-licensure curriculum at colleges and universities.

  13. The ultimatum game as measurement tool for anthropomorphism in human-robot interaction

    NARCIS (Netherlands)

    Torta, E.; Dijk, van E.T.; Ruijten, P.A.M.; Cuijpers, R.H.; Herrmann, G.; Pearson, M.J.; Lenz, A.; et al.

    2013-01-01

    Anthropomorphism is the tendency to attribute human characteristics to non–human entities. This paper presents exploratory work to evaluate how human responses during the ultimatum game vary according to the level of anthropomorphism of the opponent, which was either a human, a humanoid robot or a

  14. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  15. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction

    Science.gov (United States)

    2014-07-01

    however, 27 of these articles had insufficient information to calculate effect size. Authors were contacted via email and were given 5 weeks to... [Factors examined include: multitasking, personality, robot personality, communication mode, states, team collaboration, fatigue, capability, in-group membership, stress]

  16. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of a gesture-recognition module and a robot behaviour-control module. For gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain a neutral hand pose before the training and testing of the CNN. For robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the gesture-recognition module can easily be used in the XABSL programming platform to implement the robot pick-and-place tasks. An experimental evaluation of seven subjects performing seven hand gestures showed an average recognition rate of 95.96%. Moreover, the experiment showed that, with the XABSL programming platform, a cube-stacking task was easily programmed by human demonstration.
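The GMM-based skin filter described in the abstract can be sketched as scoring each pixel's chroma values under a skin-colour mixture model and thresholding the log-likelihood. The component weights, means, covariances and threshold below are illustrative assumptions; in practice they are fitted to labelled skin pixels (e.g. with EM):

```python
import numpy as np

# Assumed 2-component skin-colour GMM in (Cb, Cr) chroma space.
WEIGHTS = np.array([0.6, 0.4])
MEANS = np.array([[110.0, 150.0], [120.0, 160.0]])
COVS = np.array([[[60.0, 0.0], [0.0, 60.0]],
                 [[90.0, 0.0], [0.0, 90.0]]])

def gmm_loglik(pixels):
    """Log-likelihood of (N, 2) chroma pixels under the skin GMM."""
    pixels = np.atleast_2d(pixels).astype(float)
    comp = []
    for w, mu, cov in zip(WEIGHTS, MEANS, COVS):
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        d = pixels - mu
        mahal = np.einsum('ni,ij,nj->n', d, inv, d)  # squared Mahalanobis distance
        norm = 1.0 / (2 * np.pi * np.sqrt(det))
        comp.append(w * norm * np.exp(-0.5 * mahal))
    return np.log(np.sum(comp, axis=0))

def skin_mask(pixels, threshold=-9.0):
    """Boolean mask: True where a pixel is plausibly skin (assumed threshold)."""
    return gmm_loglik(pixels) > threshold

print(skin_mask(np.array([[110.0, 150.0], [80.0, 110.0]])))
```

Applying the mask to every pixel of a frame leaves only skin-coloured regions for the CNN gesture classifier.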

  17. A meta-analysis of factors affecting trust in human-robot interaction.

    Science.gov (United States)

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

    We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26,with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined with an especial evaluation of the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
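The review reports its correlational estimate as r = +0.26 and its experimental estimate as d = +0.71. These come from different study sets, so they need not agree exactly, but the standard conversion between the two metrics (not taken from the paper) makes them easy to compare on a common scale:

```python
import math

def r_to_d(r):
    """Convert a correlation effect size r to Cohen's d (standard formula)."""
    return 2 * r / math.sqrt(1 - r * r)

def d_to_r(d):
    """Inverse conversion, assuming equal group sizes."""
    return d / math.sqrt(d * d + 4)

print(round(r_to_d(0.26), 2))  # the correlational estimate expressed as d
print(round(d_to_r(0.71), 2))  # the experimental estimate expressed as r
```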

  18. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  19. You Can Leave Your Head On: Attention Management and Turn-Taking in Multi-party Interaction with a Virtual Human/Robot Duo

    NARCIS (Netherlands)

    Linssen, Jeroen; Berkhoff, Meike; Bode, Max; Rens, Eduard; Theune, Mariet; Wiltenburg, Daan; Beskow, Jonas; Peters, Christopher; Castellano, Ginevra; O'Sullivan, Carol; Leite, Iolanda; Kopp, Stefan

    In two small studies, we investigated how a virtual human/robot duo can complement each other in joint interaction with one or more users. The robot takes care of turn management while the virtual human draws attention to the robot. Our results show that having the virtual human address the robot,

  20. Collaboration

    Science.gov (United States)

    King, Michelle L.

    2010-01-01

    This article explores collaboration between library media educators and regular classroom teachers. The article focuses on the context of the issue, positions on the issue, the impact of collaboration, and how to implement effective collaboration into the school system. Various books and professional journals are used to support conclusions…

  1. Surface Support Systems for Co-Operative and Integrated Human/Robotic Lunar Exploration

    Science.gov (United States)

    Mueller, Robert P.

    2006-01-01

    Human and robotic partnerships to realize space goals can enhance space missions and provide increases in human productivity while decreasing the hazards that the humans are exposed to. For lunar exploration, the harsh environment of the moon and the repetitive nature of the tasks involved with lunar outpost construction, maintenance and operation as well as production tasks associated with in-situ resource utilization, make it highly desirable to use robotic systems in co-operation with human activity. A human lunar outpost is functionally examined and concepts for selected human/robotic tasks are discussed in the context of a lunar outpost which will enable the presence of humans on the moon for extended periods of time.

  2. Improving collaborative play between children with autism spectrum disorders and their siblings : the effectiveness of a robot-mediated intervention based on Lego® therapy

    NARCIS (Netherlands)

    Huskens, Bibi; Palmen, Annemiek; Van der Werff, Marije; Lourens, Tino; Barakova, Emilia

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego® therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a

  3. Improving Collaborative Play between Children with Autism Spectrum Disorders and Their Siblings: The Effectiveness of a Robot-Mediated Intervention Based on Lego® Therapy

    Science.gov (United States)

    Huskens, Bibi; Palmen, Annemiek; Van der Werff, Marije; Lourens, Tino; Barakova, Emilia

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego® therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a therapeutic setting. A concurrent multiple baseline design…

  4. Comparability of Conflict Opportunities in Human-to-Human and Human-to-Agent Online Collaborative Problem Solving

    Science.gov (United States)

    Rosen, Yigal

    2014-01-01

    Students' performance in human-to-human and human-to-agent collaborative problem solving assessment task is investigated in this paper. A secondary data analysis of the research reported by Rosen and Tager (2013) was conducted in order to investigate the comparability of the opportunities for conflict situations in human-to-human and…

  5. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    Science.gov (United States)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

    This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them, or more likely to be intended for some other hearer. We will describe data collected during a series of experiments involving teams of multiple human and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator and links to a remote science team. The two subjects are performing a task (geological exploration) which requires them to speak with each other while also speaking with their assistants. The technique used here is to use a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
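The paper scores utterances with a probabilistic context-free grammar language model inside the recognizer; the underlying idea of likelihood-based addressee detection can be sketched with much simpler add-one-smoothed unigram models. The corpora and vocabulary below are invented for illustration; the actual system trains on robot-directed speech recorded in the field studies:

```python
from collections import Counter
import math

def train_unigram(corpus):
    """Build an add-one-smoothed unigram log-likelihood scorer."""
    counts = Counter(w for utt in corpus for w in utt.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return lambda utt: sum(
        math.log((counts.get(w, 0) + 1) / (total + vocab)) for w in utt.split())

# Tiny illustrative corpora (invented).
robot_directed = ["robot come here", "robot take a picture", "robot follow me",
                  "robot stop", "take a picture of this rock"]
human_directed = ["look at this rock formation", "i think we should head back",
                  "what do you make of this outcrop"]

robot_lm = train_unigram(robot_directed)
human_lm = train_unigram(human_directed)

def is_robot_directed(utterance):
    """Likelihood-ratio decision: does the robot-speech model score higher?"""
    return robot_lm(utterance) > human_lm(utterance)

print(is_robot_directed("robot take a picture"))
print(is_robot_directed("look at this outcrop"))
```

In the paper's setup the recognizer's confidence under the robot-trained grammar plays the role of `robot_lm` here, so utterances resembling past robot-directed speech receive higher confidence.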

  6. Bio-mechanical Analysis of Human Joints and Extension of the Study to Robot

    OpenAIRE

    S. Parasuraman; Ler Shiaw Pei

    2008-01-01

    In this paper, a bio-mechanical analysis of human joints is carried out and the study is extended to the robot manipulator. This study first focuses on the kinematics of the human arm, which includes the movement of each joint in the shoulder, wrist, elbow and finger complexes. These analyses are then extended to the design of a human robot manipulator. A simulator is built for the direct kinematics and inverse kinematics of the human arm. In the simulation of direct kinematics, the human joint angles can...

  7. Teaching Human Poses Interactively to a Social Robot

    Science.gov (United States)

    Gonzalez-Pacheco, Victor; Malfaz, Maria; Fernandez, Fernando; Salichs, Miguel A.

    2013-01-01

    The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue-Depth)-based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics. PMID:24048336

  8. Teaching Human Poses Interactively to a Social Robot

    Directory of Open Access Journals (Sweden)

    Miguel A. Salichs

    2013-09-01

    The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue-Depth)-based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics.

  9. Continuing Robot Skill Learning after Demonstration with Human Feedback

    Directory of Open Access Journals (Sweden)

    Argall Brenna D.

    2011-12-01

    Though demonstration-based approaches have been successfully applied to learning a variety of robot behaviors, some limitations remain. The ability to continue learning after demonstration, based on execution experience with the learned policy, has therefore proven to be an asset to many demonstration-based learning systems. This paper discusses important considerations for interfaces that provide feedback to adapt and improve demonstrated behaviors. Feedback interfaces developed for two robots with very different motion capabilities - a wheeled mobile robot and a high degree-of-freedom humanoid - are highlighted.

  10. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements that is useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis was carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours across the involved subjects, thus providing constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method, characterized by a reduced computational cost, was developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. The grasp stability was tested by introducing a quality index that satisfies the form-closure property. The grasping strategy was validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results show the effectiveness of the extracted indicators in reducing the complexity of the non-linear optimization problem and lead to the synthesis of a grasping posture that replicates human behaviour while ensuring grasp stability. The experimental results also highlight the limitations of the adopted robotic platform (mainly due to its mechanical structure) in achieving the optimal grasp configuration.
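The Nelder-Mead simplex method named in the abstract is a derivative-free minimizer, which makes it a reasonable fit for grasp-cost functions that are awkward to differentiate. The sketch below implements the standard algorithm and applies it to a toy quadratic "grasp cost" around an assumed optimal posture; the paper's actual cost function, built from the extracted grasp indicators, is not reproduced here:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Minimal Nelder-Mead simplex minimizer (standard coefficients)."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                                  # initial simplex around x0
        p = np.asarray(x0, float).copy()
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)              # reflection
        if f(best) <= f(xr) < f(second):
            simplex[-1] = xr
        elif f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)    # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            xc = centroid + 0.5 * (worst - centroid)    # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                       # shrink toward best point
                simplex = [best] + [best + 0.5 * (p - best) for p in simplex[1:]]
    return min(simplex, key=f)

# Toy "grasp cost": quadratic penalty around an assumed optimal posture.
target = np.array([1.0, 2.0])
cost = lambda q: float(np.sum((q - target) ** 2))
q_opt = nelder_mead(cost, [0.0, 0.0])
print(q_opt)
```

Each evaluation of `f` is just a function call, so constraints derived from the human-motion indicators can be folded into the cost as penalty terms without any gradient computation.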

  11. Collaborative human-machine nuclear non-proliferation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greitzer, F.L.; Badalamente, R.V.; Stewart, T.S.

    1993-10-01

    The purpose of this paper is to report on the results of a project investigating support concepts for the information treatment needs of the International Atomic Energy Agency (IAEA, also referred to as the Agency) and its attempts to strengthen international safeguards. The aim of the research was to define user/computer interface concepts and intelligent support features that will enhance the analyst's access to voluminous and diverse information, the ability to recognize and evaluate uncertain data, and the capability to make decisions and recommendations. The objective was to explore techniques for enhancing safeguards analysis through application of (1) more effective user-computer interface designs and (2) advanced concepts involving human/system collaboration. The approach was to identify opportunities for human/system collaboration that would capitalize on human strengths and still accommodate human limitations. This paper documents the findings and describes a concept prototype, Proliferation Analysis Support System (PASS), developed for demonstration purposes. The research complements current and future efforts to enhance the information systems used by the IAEA, but has application elsewhere, as well.

  12. Affective and behavioral responses to robot-initiated social touch : Towards understanding the opportunities and limitations of physical contact in human-robot interaction

    NARCIS (Netherlands)

    Willemse, C.J.A.M.; Toet, A.; Erp, J.B.F. van

    2017-01-01

    Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human–robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for

  13. Smooth leader or sharp follower? Playing the mirror game with a robot.

    Science.gov (United States)

    Kashi, Shir; Levy-Tzedek, Shelly

    2018-01-01

    The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.

  14. A collaborative brain-computer interface for improving human performance.

    Directory of Open Access Journals (Sweden)

    Yijun Wang

    Full Text Available Electroencephalogram (EEG based brain-computer interfaces (BCI have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1 Event-related potentials (ERP averaging, (2 Feature concatenating, and (3 Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100-250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC, which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior.

  15. Adaptive training algorithm for robot-assisted upper-arm rehabilitation, applicable to individualised and therapeutic human-robot interaction.

    Science.gov (United States)

    Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin

    2013-09-28

    Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable of not only offering more frequent and more accessible therapies but also providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for having more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen, and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment, comprising a training phase followed by the actual performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with the help from a real object when compared to just a virtual object.
The 'reaching away' movements were longer to execute when compared to the 'returning towards' movements irrespective of the
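The abstract reports that the user's leading or lagging role was used to adjust movement duration, but does not give the adaptation rule itself. A minimal sketch of one plausible rule (the gain and duration bounds here are hypothetical, not taken from the GENTLE/A system):

```python
def adapt_duration(current_duration, lead_lag, gain=0.1,
                   min_duration=1.0, max_duration=10.0):
    """Adjust the time allotted for the next point-to-point movement.

    lead_lag > 0 means the user leads the robot (ahead of the reference
    trajectory); lead_lag < 0 means the user lags behind. Leading
    shortens the next duration, lagging lengthens it, and the result is
    clamped to a safe range.
    """
    new_duration = current_duration * (1.0 - gain * lead_lag)
    return max(min_duration, min(max_duration, new_duration))
```

A leading user (positive `lead_lag`) is therefore given progressively shorter movement windows, which matches the adaptive behaviour the study set out to test.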

  16. When in Rome: the role of culture & context in adherence to robot recommendations

    NARCIS (Netherlands)

    Wang, L.; Rau, P.-L.P.; Evers, V.; Robinson, B.K.; Hinds, P.

    2010-01-01

    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of

  17. A Standard of Visualization Abstraction for Human-Robot Interfaces

    Data.gov (United States)

    National Aeronautics and Space Administration — The research objective is to create map-based displays that support robot operators working across a variety of paradigms. It has been shown that using an algorithm...

  18. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Directory of Open Access Journals (Sweden)

    Thierry Chaminade

    2010-07-01

Full Text Available The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  19. Inducing self-selected human engagement in robotic locomotion training.

    Science.gov (United States)

    Collins, Steven H; Jackson, Rachel W

    2013-06-01

    Stroke leads to severe mobility impairments for millions of individuals each year. Functional outcomes can be improved through manual treadmill therapy, but high costs limit patient exposure and, thereby, outcomes. Robotic gait training could increase the viable duration and frequency of training sessions, but robotic approaches employed thus far have been less effective than manual therapy. These shortcomings may relate to subconscious energy-minimizing drives, which might cause patients to engage less actively in therapy when provided with corrective robotic assistance. We have devised a new method for gait rehabilitation that harnesses, rather than fights, least-effort tendencies. Therapeutic goals, such as increased use of the paretic limb, are made easier than the patient's nominal gait through selective assistance from a robotic platform. We performed a pilot test on a healthy subject (N = 1) in which altered self-selected stride length was induced using a tethered robotic ankle-foot orthosis. The subject first walked on a treadmill while wearing the orthosis with and without assistance at unaltered and voluntarily altered stride length. Voluntarily increasing stride length by 5% increased metabolic energy cost by 4%. Robotic assistance decreased energy cost at both unaltered and voluntarily increased stride lengths, by 6% and 8% respectively. We then performed a test in which the robotic system continually monitored stride length and provided more assistance if the subject's stride length approached a target increase. This adaptive assistance protocol caused the subject to slowly adjust their gait patterns towards the target, leading to a 4% increase in stride length. Metabolic energy consumption was simultaneously reduced by 5%. These results suggest that selective-assistance protocols based on targets relevant to rehabilitation might lead patients to self-select desirable gait patterns during robotic gait training sessions, possibly facilitating better
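The adaptive protocol described above (more assistance as measured stride length approaches a target increase) can be sketched as a simple linear schedule; the ramp shape and parameter names are assumptions for illustration, not the authors' controller:

```python
def assistance_level(stride, baseline, target, max_assist=1.0):
    """Scale robotic assistance with progress from the baseline stride
    toward the target stride.

    Returns 0 at (or below) the baseline and max_assist at the target,
    ramping linearly in between -- so longer strides are 'rewarded'
    with more energy-saving support, harnessing least-effort tendencies.
    """
    if target == baseline:
        return max_assist
    progress = (stride - baseline) / (target - baseline)
    return max(0.0, min(1.0, progress)) * max_assist
```

Because assistance lowers metabolic cost, a subject minimizing effort is drawn toward the target stride length without explicit instruction.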

  20. Human Brain inspired Artificial Intelligence & Developmental Robotics: A Review

    Directory of Open Access Journals (Sweden)

    Suresh Kumar

    2017-06-01

Full Text Available Along with the developments in the field of robotics, fascinating contributions and developments can be seen in the field of Artificial Intelligence (AI). In this paper we discuss the developments in the field of artificial intelligence, focusing on learning algorithms inspired by the fields of biology, particularly large-scale brain simulations, and developmental psychology. We focus on the emergence of developmental robotics and its significance in the field of AI.

  1. Natural Tasking of Robots Based on Human Interaction Cues

    Science.gov (United States)

    2005-06-01

MIT. • Matthew Marjanovic, researcher, ITA Software. • Brian Scassellati, Assistant Professor of Computer Science, Yale. • Matthew Williamson... 2004. 25 [74] Charlie C. Kemp. Shoes as a platform for vision. 7th IEEE International Symposium on Wearable Computers, 2004. [75] Matthew Marjanovic... meso: Simulated muscles for a humanoid robot. Presentation for Humanoid Robotics Group, MIT AI Lab, August 2001. [76] Matthew J. Marjanovic. Teaching

  2. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  3. Validation of a robotic balance system for investigations in the control of human standing balance.

    Science.gov (United States)

    Luu, Billy L; Huryn, Thomas P; Van der Loos, H F Machiel; Croft, Elizabeth A; Blouin, Jean-Sébastien

    2011-08-01

    Previous studies have shown that human body sway during standing approximates the mechanics of an inverted pendulum pivoted at the ankle joints. In this study, a robotic balance system incorporating a Stewart platform base was developed to provide a new technique to investigate the neural mechanisms involved in standing balance. The robotic system, programmed with the mechanics of an inverted pendulum, controlled the motion of the body in response to a change in applied ankle torque. The ability of the robotic system to replicate the load properties of standing was validated by comparing the load stiffness generated when subjects balanced their own body to the robot's mechanical load programmed with a low (concentrated-mass model) or high (distributed-mass model) inertia. The results show that static load stiffness was not significantly (p > 0.05) different for standing and the robotic system. Dynamic load stiffness for the robotic system increased with the frequency of sway, as predicted by the mechanics of an inverted pendulum, with the higher inertia being accurately matched to the load properties of the human body. This robotic balance system accurately replicated the physical model of standing and represents a useful tool to simulate the dynamics of a standing person. © 2011 IEEE
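The load the robotic balance system replicates follows inverted-pendulum mechanics pivoted at the ankle. A sketch of those dynamics under a concentrated-mass model, I·θ̈ = m·g·h·sin θ − τ_ankle (the body parameters, controller gains, and integration scheme below are illustrative, not the authors' implementation):

```python
import math

def simulate_sway(theta0, torque_fn, m=70.0, h=0.9, I=None,
                  g=9.81, dt=0.001, steps=2000):
    """Integrate inverted-pendulum body sway about the ankle joint.

    I * theta_dd = m*g*h*sin(theta) - tau_ankle, with I = m*h**2 for a
    concentrated-mass model unless a distributed-mass inertia is given.
    theta is forward lean in radians; torque_fn(theta, omega) returns
    the plantarflexor torque resisting the fall. Returns final theta.
    """
    if I is None:
        I = m * h * h
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = (m * g * h * math.sin(theta) - torque_fn(theta, omega)) / I
        omega += alpha * dt    # explicit Euler step
        theta += omega * dt
    return theta
```

With ankle stiffness above the gravitational "load stiffness" m·g·h (plus some damping), the body is pulled back toward upright; with zero torque the pendulum falls, which is the instability the Stewart-platform system reproduces for its subjects.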

  4. Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults.

    Science.gov (United States)

    Louie, Wing-Yue Geoffrey; McColl, Derek; Nejat, Goldie

    2014-01-01

Recent studies have shown that cognitive and social interventions are crucial to the overall health of older adults, including their psychological, cognitive, and physical well-being. However, due to the rapidly growing elderly population of the world, the resources and people to provide these interventions are lacking. Our work focuses on the use of social robotic technologies to provide person-centered cognitive interventions. In this article, we investigate the acceptance and attitudes of older adults toward the human-like expressive socially assistive robot Brian 2.1 in order to determine if the robot's human-like assistive and social characteristics would promote the use of the robot as a cognitive and social interaction tool to aid with activities of daily living. The results of a robot acceptance questionnaire administered during a robot demonstration session with a group of 46 elderly adults showed that the majority of the individuals had positive attitudes toward the socially assistive robot and its intended applications.

  5. Analytical basis for evaluating the effect of unplanned interventions on the effectiveness of a human-robot system

    International Nuclear Information System (INIS)

    Shah, Julie A.; Saleh, Joseph H.; Hoffman, Jeffrey A.

    2008-01-01

Increasing prevalence of human-robot systems in a variety of applications raises the question of how to design these systems to best leverage the capabilities of humans and robots. In this paper, we address the relationships between reliability, productivity, and risk to humans from human-robot systems operating in a hostile environment. Objectives for maximizing the effectiveness of a human-robot system are presented, which capture these coupled relationships, and reliability parameters are proposed to characterize unplanned interventions between a human and robot. The reliability metrics defined here take on an expanded meaning in which the underlying concept of failure in traditional reliability analysis is replaced by the notion of intervention. In the context of human-robotic systems, an intervention is not only driven by component failures, but includes many other factors that can lead a robotic agent to request, or a human agent to provide, intervention, as we argue in this paper. The effect of unplanned interventions on the effectiveness of human-robot systems is then investigated analytically using traditional reliability analysis. Finally, we discuss the implications of these analytical trends for the design and evaluation of human-robot systems.
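With interventions standing in for failures, conventional reliability quantities carry over directly; the sketch below computes a mean-time-between-interventions figure and the fraction of mission time spent autonomous (the metric names and the simple availability formula are illustrative, not the paper's notation):

```python
def intervention_metrics(mission_time, intervention_times, mean_handling_time):
    """Reliability-style summary where 'failure' means 'intervention'.

    mission_time: total mission duration.
    intervention_times: timestamps at which interventions occurred.
    mean_handling_time: average human time consumed per intervention.
    Returns (mean time between interventions, fraction of the mission
    the robot operates without a human in the loop).
    """
    n = len(intervention_times)
    mtbi = mission_time / n if n else float("inf")
    autonomous_fraction = max(0.0, 1.0 - n * mean_handling_time / mission_time)
    return mtbi, autonomous_fraction
```

Trading off these two numbers against risk to the human responder is the kind of coupled objective the paper analyzes.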

  6. An Experimental Study of Embodied Interaction and Human Perception of Social Presence for Interactive Robots in Public Settings

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith; Vlachos, Evgenios

    2018-01-01

The human perception of cognitive robots as social depends on many factors, including those that do not necessarily pertain to a robot's cognitive functioning. Experience Design offers a useful framework for evaluating when participants interact with robots as products or tools and when they regard them as social actors. This study describes a between-participants experiment conducted at a science museum, where visitors were invited to play a game of noughts and crosses with a Baxter robot. The goal is to foster meaningful interactions that promote engagement between the human and robot in a museum context. Using an Experience Design framework, we tested the robot in three different conditions to better understand which factors contribute to the perception of robots as social. The experiment also outlines best practices for conducting human-robot interaction research in museum exhibitions.

  7. Understanding Older Adult's Perceptions of Factors that Support Trust in Human and Robot Care Providers.

    Science.gov (United States)

    Stuck, Rachel E; Rogers, Wendy A

    2017-06-01

As the population of older adults increases, so will the need for care providers, both human and robot. Trust is a key aspect of establishing and maintaining a successful older adult-care provider relationship. However, because trust is volatile, it is essential to understand it within specific contexts. This proposed mixed-methods study will explore which dimensions of trust emerge as important within the human-human and human-robot dyads of older adults and care providers. First, this study will help identify key qualities that support trust in a care provider relationship. By understanding what older adults perceive as necessary to trust humans and robots with various care tasks, we can begin to provide design recommendations, based on user expectations, to support trust.

  8. Stimulating collaboration between human and veterinary health care professionals.

    Science.gov (United States)

    Eussen, Björn G M; Schaveling, Jaap; Dragt, Maria J; Blomme, Robert Jan

    2017-06-13

    Despite the need to control outbreaks of (emerging) zoonotic diseases and the need for added value in comparative/translational medicine, jointly addressed in the One Health approach [One health Initiative (n.d.a). About the One Health Initiative. http://www.onehealthinitiative.com/about.php . Accessed 13 September 2016], collaboration between human and veterinary health care professionals is limited. This study focuses on the social dilemma experienced by health care professionals and ways in which an interdisciplinary approach could be developed. Based on Gaertner and Dovidio's Common Ingroup Identity Model, a number of questionnaires were designed and tested; with PROGRESS, the relation between collaboration and common goal was assessed, mediated by decategorization, recategorization, mutual differentiation and knowledge sharing. This study confirms the Common Ingroup Identity Model stating that common goals stimulate collaboration. Decategorization and mutual differentiation proved to be significant in this relationship; recategorization and knowledge sharing mediate this relation. It can be concluded that the Common Ingroup Identity Model theory helps us to understand how health care professionals perceive the One Health initiative and how they can intervene in this process. In the One Health approach, professional associations could adopt a facilitating role.

  9. Human-Agent Teaming for Multi-Robot Control: A Literature Review

    Science.gov (United States)

    2013-02-01

advent of the Google driverless car, autonomous farm equipment, and unmanned commercial aircraft (Mosher, 2012). The inexorable trend towards... because a robot cannot be automated to navigate in difficult terrain. However, this high ratio will not be sustainable if large numbers of autonomous... (Parasuraman et al., 2007). 3.5 RoboLeader Past research indicates that autonomous cooperation between robots can improve the performance of the human

  10. A Multimodal Emotion Detection System during Human-Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve a greater degree of satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
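The abstract does not specify the decision rule that fuses the audio and visual channels; a confidence-summing rule is one plausible stand-in (purely illustrative, not the authors' rule):

```python
def fuse_emotions(voice, face):
    """Combine two per-channel (emotion -> confidence) dictionaries.

    Sums the confidence each channel assigns to every candidate emotion
    and returns the best-scoring label -- a simple stand-in for a rule
    combining voice-analysis and face-analysis outputs.
    """
    scores = {}
    for channel in (voice, face):
        for emotion, confidence in channel.items():
            scores[emotion] = scores.get(emotion, 0.0) + confidence
    return max(scores, key=scores.get)
```

When the channels agree the shared label dominates, and when they disagree the more confident channel wins, which is why fusion can outperform either channel alone.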

  11. Three-dimensional computer-aided human factors engineering analysis of a grafting robot.

    Science.gov (United States)

    Chiu, Y C; Chen, S; Wu, G J; Lin, Y H

    2012-07-01

The objective of this research was to conduct a human factors engineering analysis of a grafting robot design using computer-aided 3D simulation technology. A prototype tubing-type grafting robot for fruits and vegetables was the subject of a series of case studies. To facilitate the incorporation of human models into the operating environment of the grafting robot, I-DEAS graphic software was applied to establish individual models of the grafting robot in line with Jack ergonomic analysis. Six human models (95th percentile, 50th percentile, and 5th percentile by height for both males and females) were employed to simulate the operating conditions and working postures in a real operating environment. The lower back and upper limb stresses of the operators were analyzed using the lower back analysis (LBA) and rapid upper limb assessment (RULA) functions in Jack. The experimental results showed that if a leg space is introduced under the robot, the operator can sit closer to the robot, which reduces the operator's level of lower back and upper limb stress. The proper environmental layout for Taiwanese operators, minimizing lower back and upper limb stress, is to set the grafting operation 23.2 cm away from the operator, at a height of 85 cm, with 45 cm between the rootstock and scion units.

  12. Turn-Taking Based on Information Flow for Fluent Human-Robot Interaction

    OpenAIRE

    Thomaz, Andrea L.; Chao, Crystal

    2011-01-01

    Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction, generic to context or domain. We propose a model of turn-taking, and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous ...

  13. CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.

    2017-07-14

    We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
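The correction/confirmation workflow can be sketched as nearest-exemplar propagation: instances the user has placed act as labeled anchors, and the machine assigns every remaining instance to the group of its closest anchor. The distance measure and data layout below are assumptions for illustration, not CHISSL's actual implementation:

```python
def predict_groups(labeled, unlabeled):
    """Nearest-exemplar grouping in the spirit of CHISSL.

    labeled: dict mapping instance feature tuples to the group the user
    chose via correction or confirmation.
    unlabeled: list of instance feature tuples still to be grouped.
    Each unlabeled instance receives the group of its nearest labeled
    exemplar (squared Euclidean distance).
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    predictions = {}
    for item in unlabeled:
        nearest = min(labeled, key=lambda ex: sqdist(ex, item))
        predictions[item] = labeled[nearest]
    return predictions
```

Every drag-and-drop correction adds or moves an anchor, so the predicted grouping is re-derived from the user's mental model rather than from machine-chosen queries, in contrast to Active Learning.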

  14. Collaborative human-machine analysis using a controlled natural language

    Science.gov (United States)

    Mott, David H.; Shemanski, Donald R.; Giammanco, Cheryl; Braines, Dave

    2015-05-01

    A key aspect of an analyst's task in providing relevant information from data is the reasoning about the implications of that data, in order to build a picture of the real world situation. This requires human cognition, based upon domain knowledge about individuals, events and environmental conditions. For a computer system to collaborate with an analyst, it must be capable of following a similar reasoning process to that of the analyst. We describe ITA Controlled English (CE), a subset of English to represent analyst's domain knowledge and reasoning, in a form that it is understandable by both analyst and machine. CE can be used to express domain rules, background data, assumptions and inferred conclusions, thus supporting human-machine interaction. A CE reasoning and modeling system can perform inferences from the data and provide the user with conclusions together with their rationale. We present a logical problem called the "Analysis Game", used for training analysts, which presents "analytic pitfalls" inherent in many problems. We explore an iterative approach to its representation in CE, where a person can develop an understanding of the problem solution by incremental construction of relevant concepts and rules. We discuss how such interactions might occur, and propose that such techniques could lead to better collaborative tools to assist the analyst and avoid the "pitfalls".

  15. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Michael Jae-Yoon Chung

Full Text Available A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
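The goal-inference step is standard Bayesian inversion of the robot's self-learned action models: P(g | a₁…aₙ) ∝ P(g) · Π P(aᵢ | g). A minimal sketch (the toy goals and action likelihoods are invented for illustration):

```python
def infer_goal(priors, likelihoods, observed_actions):
    """Posterior over goals given a sequence of observed actions.

    priors: dict mapping goal -> prior probability P(g).
    likelihoods: likelihoods[g][a] is the self-learned probability of
    producing action a when pursuing goal g; the same model is reused
    to interpret observed human actions.
    Returns the normalized posterior over goals.
    """
    posterior = dict(priors)
    for action in observed_actions:
        for goal in posterior:
            # Tiny floor keeps unseen actions from zeroing a goal outright.
            posterior[goal] *= likelihoods[goal].get(action, 1e-9)
    total = sum(posterior.values())
    return {goal: p / total for goal, p in posterior.items()}
```

A flat posterior (no goal clearly dominating) is exactly the "too uncertain" condition under which the robot described above would ask a human for assistance.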

  16. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

With increased use of robots as a resource in missions supporting countermine, improvised explosive devices (IEDs), and chemical, biological, radiological, nuclear, and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high technology computer-supported applications, current robotic design practices rely heavily upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  17. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    Science.gov (United States)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  18. Fuzzy variable impedance control based on stiffness identification for human-robot cooperation

    Science.gov (United States)

    Mao, Dachao; Yang, Wenlong; Du, Zhijiang

    2017-06-01

    This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the intention of human for co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming at regulating the output fuzzy universe based on the human arm’s stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate that the proposed algorithm is superior.
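A variable admittance/impedance loop of the kind described can be sketched as follows. The crisp, linear stiffness-to-damping map stands in for the paper's fuzzy inference stage, and the direction of the mapping (stiffer arm → lower damping) is an assumption for illustration, as are all parameter values:

```python
def admittance_step(force, velocity, damping, mass=2.0, dt=0.01):
    """One step of an admittance controller for a drag interaction:
    mass * dv/dt + damping * v = applied human force.
    Returns the robot's commanded velocity after one time step."""
    accel = (force - damping * velocity) / mass
    return velocity + accel * dt

def variable_damping(stiffness, b_min=5.0, b_max=40.0,
                     k_min=100.0, k_max=1000.0):
    """Map identified human-arm stiffness into the damping universe.

    A stiff (deliberate) arm gets low damping for easy dragging; a
    relaxed arm gets high damping for stability. This linear map is a
    crisp stand-in for the fuzzy inference system in the paper.
    """
    ratio = (stiffness - k_min) / (k_max - k_min)
    ratio = max(0.0, min(1.0, ratio))
    return b_max - ratio * (b_max - b_min)
```

At each control cycle the identified stiffness selects a damping value, which the admittance step then uses to turn the measured interaction force into robot motion.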

  19. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  20. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), hierarchical k-means clustering results of the self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  1. Robonaut: a robot designed to work with humans in space

    Science.gov (United States)

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.

  2. Physiological and subjective evaluation of a human-robot object hand-over task.

    Science.gov (United States)

    Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël

    2011-11-01

    In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer that explicitly takes into account the legibility, safety, and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate them on a three-dimensional subjective scale. The subjective data favored our reference motion. Finally, the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.

  3. In our own image? Emotional and neural processing differences when observing human-human vs human-robot interactions.

    Science.gov (United States)

    Wang, Yin; Quadflieg, Susanne

    2015-11-01

    Notwithstanding the significant role that human-robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human-human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal-parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. © The Author (2015). Published by Oxford University Press.

  4. Robotic Platform for Automated Search and Rescue Missions of Humans

    Directory of Open Access Journals (Sweden)

    Eli Kolberg

    2013-02-01

    Full Text Available We present a novel type of model incorporating a special remote life-signal sensing optical system on top of a controllable robotic platform. The remote sensing system consists of a laser and a camera. By properly adapting our optics and applying a proper image processing algorithm, we can sense, within the field of view illuminated by the laser and imaged by the camera, the heartbeats and blood pulse pressure of subjects (even several simultaneously). The task is to use the developed robotic system for search and rescue missions such as saving survivors from a fire.
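    The heartbeat sensing described here ultimately reduces to finding a periodic component in a camera-derived intensity trace. A hedged sketch of that final step, assuming a per-frame intensity signal sampled at the camera frame rate (the band limits and rates below are illustrative, not the authors' values):

    ```python
    import numpy as np

    def estimate_heart_rate(signal, fs, lo=0.8, hi=3.0):
        """Estimate heart rate (bpm) as the dominant spectral peak inside a
        physiologically plausible band (lo..hi Hz, i.e. 48..180 bpm)."""
        sig = np.asarray(signal, float)
        sig = sig - sig.mean()                    # remove the DC component
        spectrum = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
        band = (freqs >= lo) & (freqs <= hi)      # restrict to plausible rates
        peak = freqs[band][spectrum[band].argmax()]
        return 60.0 * peak
    ```

    For example, a 20 s trace at 30 frames per second with a 1.2 Hz pulsation would yield an estimate near 72 bpm; in practice the raw trace would also need motion and illumination artifact suppression.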

  5. Educational Robotics as Mindtools

    Science.gov (United States)

    Mikropoulos, Tassos A.; Bellou, Ioanna

    2013-01-01

    Although there are many studies on the constructionist use of educational robotics, they have certain limitations. Some of them refer to robotics education, rather than educational robotics. Others follow a constructionist approach, but give emphasis only to design skills, creativity and collaboration. Some studies use robotics as an educational…

  6. Research the Gait Characteristics of Human Walking Based on a Robot Model and Experiment

    Science.gov (United States)

    He, H. J.; Zhang, D. N.; Yin, Z. W.; Shi, J. H.

    2017-02-01

    In order to research the gait characteristics of human walking in different walking modes, a robot model with a single degree of freedom is set up in this paper. The system control models of the robot are established with the Matlab/Simulink toolbox. The gait characteristics of walking straight, walking uphill, turning, and walking up and down stairs are analyzed with the system control models. To verify the correctness of the theoretical analysis, an experiment was carried out. The comparison shows that the theoretical results are in good agreement with the experimental ones. The reasons for the amplitude and phase errors are analyzed and improved methods are given. The robot model and experimental methods can provide a foundation for further research on the various gait characteristics of the exoskeleton robot.
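    Since the comparison in this record turns on amplitude and phase errors between theoretical and measured gait curves, here is one plausible way to compute both (a sketch; these particular error definitions, peak-to-peak difference and cross-correlation lag, are my assumptions, not necessarily the authors'):

    ```python
    import numpy as np

    def gait_errors(theory, measured, fs):
        """Return (amplitude error, phase error in seconds) between two
        gait-angle signals sampled at rate fs. Amplitude error is the
        peak-to-peak difference; phase error is the cross-correlation lag."""
        t = np.asarray(theory, float) - np.mean(theory)
        m = np.asarray(measured, float) - np.mean(measured)
        amp_err = (m.max() - m.min()) - (t.max() - t.min())
        xcorr = np.correlate(m, t, mode="full")
        lag = xcorr.argmax() - (len(t) - 1)       # samples m lags behind t
        return amp_err, lag / fs
    ```

    With a 1 Hz theoretical gait curve and a measured curve that is 20% larger and delayed by 50 ms, this returns an amplitude error of about 0.4 and a phase error of about 0.05 s.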

  7. Using Human Gestures and Generic Skills to Instruct a Mobile Robot Arm in a Feeder Filling Scenario

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Rath; Høilund, Carsten; Krüger, Volker

    2012-01-01

    Mobile robots that have the ability to cooperate with humans are able to provide new possibilities to manufac- turing industries. In this paper, we discuss our mobile robot arm that a) can provide assistance at different locations in a factory and b) that can be programmed using complex human...... actions such as pointing in Take this object. In this paper, we discuss the use of the mobile robot for a feeding scenario where a human operator specifies the parts and the feeders through pointing gestures. The system is partially built using generic robotic skills. Through extensive experiments, we...

  8. I Show You How I Like You: Emotional Human-Robot Interaction through Facial Expression and Tactile Stimulation

    DEFF Research Database (Denmark)

    Canamero, Dolores; Fredslund, Jacob

    2001-01-01

    We report work on a LEGO robot that displays different emotional expressions in response to physical stimulation, for the purpose of social interaction with humans. This is a first step toward our longer-term goal of exploring believable emotional exchanges to achieve plausible interaction...... with a simple robot. Drawing inspiration from theories of human basic emotions, we implemented several prototypical expressions in the robot's caricatured face and conducted experiments to assess the recognizability of these expressions...

  9. Human Robotic Swarm Interaction Using an Artificial Physics Approach

    Science.gov (United States)

    2014-12-01

    A physicomimetics-based framework is demonstrated in a four-robot auditory scene monitoring scenario [19]. In these experiments, Apker et al. use the four-wheeled Pioneer3-AT ground

  10. Mission Activity Planning for Humans and Robots on the Moon

    Science.gov (United States)

    Weisbin, C.; Shelton, K.; Lincoln, W.; Elfes, A.; Smith, J.H.; Mrozinski, J.; Hua, H.; Adumitroaie, V.; Silberg, R.

    2008-01-01

    A series of studies is conducted to develop a systematic approach to optimizing scenarios in which astronauts and robots accomplish a group of activities on the Moon, both in terms of the distribution and the scheduling of tasks, given an objective function (OF) and specific resources and constraints. An automated planning tool is developed as a key element of this optimization system.
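    A toy illustration of this kind of task-allocation optimization, brute force over all assignments with a makespan objective (the tasks, durations, and agents below are invented for illustration; the actual planning tool is far more sophisticated):

    ```python
    from itertools import product

    def plan_tasks(durations, agents):
        """Exhaustively assign each task to one agent and keep the plan that
        minimises the makespan (the latest-finishing agent's total time).
        durations[task][agent] = time that agent needs for the task."""
        tasks = list(durations)
        best_cost, best_plan = float("inf"), None
        for combo in product(agents, repeat=len(tasks)):
            load = {a: 0.0 for a in agents}
            for task, agent in zip(tasks, combo):
                load[agent] += durations[task][agent]
            cost = max(load.values())          # makespan of this assignment
            if cost < best_cost:
                best_cost, best_plan = cost, dict(zip(tasks, combo))
        return best_cost, best_plan
    ```

    A realistic objective function would also weight science return, risk, and consumables rather than time alone, and a real planner would prune the exponential search.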

  11. Playte, a tangible interface for engaging human-robot interaction

    DEFF Research Database (Denmark)

    Christensen, David Johan; Fogh, Rune; Lund, Henrik Hautop

    2014-01-01

    This paper describes a tangible interface, Playte, designed for children animating interactive robots. The system supports physical manipulation of behaviors represented by LEGO bricks and allows the user to record and train their own new behaviors. Our objective is to explore several modes of in...

  12. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand.

    Science.gov (United States)

    Kent, Benjamin A; Engeberg, Erik D

    2014-11-07

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques.
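    As a rough reading of the control idea in this record, the extension command blends a pull toward a natural hand posture with yielding to sensed contact force. A deliberately simplified single-joint sketch (the gains, the first-order joint model, and the natural-posture angle are my assumptions, not the published HFS controller):

    ```python
    def hfs_extension_command(theta, theta_nat, force, kp=2.0, kf=0.5):
        """Synergy-style extension command: drive the joint toward a natural
        posture, but back off in proportion to sensed contact force."""
        return kp * (theta_nat - theta) - kf * force

    def settle(contact_force, theta0=0.0, theta_nat=0.5, steps=300, dt=0.05):
        """Integrate a first-order joint model under the command above and
        return the equilibrium joint angle (radians)."""
        theta = theta0
        for _ in range(steps):
            theta += dt * hfs_extension_command(theta, theta_nat, contact_force)
        return theta
    ```

    In free space the finger settles at the natural posture; against a surface it equilibrates where the posture pull balances the force term, loosely mimicking the extended-finger contact behavior observed in the human subjects.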

  13. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand

    International Nuclear Information System (INIS)

    Kent, Benjamin A; Engeberg, Erik D

    2014-01-01

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques. (paper)

  14. Human Assisted Robotic Vehicle Studies - A conceptual end-to-end mission architecture

    Science.gov (United States)

    Lehner, B. A. E.; Mazzotta, D. G.; Teeney, L.; Spina, F.; Filosa, A.; Pou, A. Canals; Schlechten, J.; Campbell, S.; Soriano, P. López

    2017-11-01

    With current space exploration roadmaps indicating the Moon as a proving ground on the way to human exploration of Mars, it is clear that human-robotic partnerships will play a key role for successful future human space missions. This paper details a conceptual end-to-end architecture for an exploration mission in cis-lunar space with a focus on human-robot interactions, called Human Assisted Robotic Vehicle Studies (HARVeSt). HARVeSt will build on knowledge of plant growth in space gained from experiments on-board the ISS and test the first growth of plants on the Moon. A planned deep space habitat will be utilised as the base of operations for human-robotic elements of the mission. The mission will serve as a technology demonstrator not only for autonomous tele-operations in cis-lunar space but also for key enabling technologies for future human surface missions. The successful approach of the ISS will be built on in this mission with international cooperation. Mission assets such as a modular rover will allow for an extendable mission and to scout and prepare the area for the start of an international Moon Village.

  15. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  16. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    Science.gov (United States)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  17. Social Intelligence in a Human-Machine Collaboration System

    Science.gov (United States)

    Nakajima, Hiroshi; Morishima, Yasunori; Yamada, Ryota; Brave, Scott; Maldonado, Heidy; Nass, Clifford; Kawaji, Shigeyasu

    In this information society of today, it is often argued that it is necessary to create a new way of human-machine interaction. In this paper, an agent with social response capabilities has been developed to achieve this goal. There are two kinds of information that are exchanged between two entities: objective and functional information (e.g., facts, requests, states of matters, etc.) and subjective information (e.g., feelings, sense of relationship, etc.). Traditional interactive systems have been designed to handle the former kind of information. In contrast, in this study social agents handling the latter type of information are presented. The current study focuses on the sociality of the agent from the viewpoint of Media Equation theory. This article discusses the definition, importance, and benefits of social intelligence as agent technology and argues that social intelligence has the potential to enhance the user's perception of the system, which in turn can lead to improvements in the system's performance. In order to implement social intelligence in the agent, a mind model has been developed to render affective expressions and the personality of the agent. The mind model has been implemented in a human-machine collaborative learning system. One differentiating feature of the collaborative learning system is that it has an agent that performs as a co-learner with which the user interacts during the learning session. The mind model controls the social behaviors of the agent, thus making it possible for the user to have more social interactions with the agent. The experiment with the system suggested that a greater degree of learning was achieved when the students worked with the co-learner agent and that the co-learner agent with the mind model that expressed emotions resulted in a more positive attitude toward the system.

  18. Cognitive Human-Machine Interface Applied in Remote Support for Industrial Robot Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Kosicki

    2013-10-01

    Full Text Available An attempt is currently being made to widely introduce industrial robots to Small-Medium Enterprises (SMEs). Since such enterprises usually employ too small a number of robot units to afford specialized departments for robot maintenance, they must be provided with inexpensive and immediate support remotely. This paper evaluates whether the support can be provided by means of Cognitive Info-communication – communication in which human cognitive capabilities are extended irrespective of geographical distances. The evaluations are given with the aid of an experimental system that consists of local and remote rooms, which are physically separated – a six-degree-of-freedom NACHI SH133-03 industrial robot is situated in the local room, while the operator, who supervises the robot by means of an audio-visual Cognitive Human-Machine Interface, is situated in the remote room. The results of simple experiments show that Cognitive Info-communication is not only an efficient means of providing the support remotely, but probably also a powerful tool for enhancing interaction with any data-rich environment that requires good conceptual understanding of the system's state and careful attention management. Furthermore, the paper discusses data presentation and reduction methods for data-rich environments, as well as introduces the concepts of Naturally Acquired Data and Cognitive Human-Machine Interfaces.

  19. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an
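    Two of the ingredients named here, ground/ceiling removal and the depth-of-interest cue, can be caricatured in a few lines of NumPy. This is a toy rendering of the idea under strong assumptions (flat floor at z ≈ 0, an invented bin width), not the paper's implementation:

    ```python
    import numpy as np

    def remove_ground_and_ceiling(cloud, ground_z=0.05, ceiling_z=2.5):
        """Drop points near the floor or above the ceiling so that candidate
        clusters (people, furniture) separate in the remaining cloud.
        cloud is an (N, 3) array of (x=depth, y=lateral, z=height) points."""
        z = cloud[:, 2]
        return cloud[(z > ground_z) & (z < ceiling_z)]

    def depth_of_interest(cloud, bin_width=0.25):
        """Toy version of a 'depth of interest' cue: histogram the depth
        coordinate and return the depth interval with the most points,
        where a detection candidate most likely stands."""
        depths = cloud[:, 0]
        bins = np.arange(depths.min(), depths.max() + bin_width, bin_width)
        hist, edges = np.histogram(depths, bins=bins)
        i = hist.argmax()
        return edges[i], edges[i + 1]
    ```

    Restricting the expensive detector cascade to the returned depth interval is what lets the real system avoid scanning-window search over the whole frame.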

  20. Learning compliant manipulation through kinesthetic and tactile human-robot interaction.

    Science.gov (United States)

    Kronander, Klas; Billard, Aude

    2014-01-01

    Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
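    One common way to realize the kind of teacher-adjustable compliance discussed in this record is a variable-stiffness impedance law. A hedged single-axis sketch (the stiffness-update rule and all constants are illustrative assumptions, not the interfaces presented in the paper):

    ```python
    def impedance_force(x, x_des, dx, stiffness, damping_ratio=1.0):
        """Restoring force of a virtual spring-damper (critically damped for
        unit virtual mass); lowering the stiffness makes the robot yield
        more to contact."""
        damping = 2.0 * damping_ratio * stiffness ** 0.5
        return stiffness * (x_des - x) - damping * dx

    def update_stiffness(stiffness, human_torque, rate=0.5, k_min=5.0, k_max=200.0):
        """Interpret a strong physical push from the teacher as a request for
        more compliance and lower the stiffness, within safe bounds."""
        return max(k_min, min(k_max, stiffness - rate * abs(human_torque)))
    ```

    During kinesthetic teaching, the second function would run once per control cycle so that task phases where the teacher intervenes end up compliant on autonomous replay.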

  1. Devising and Interdisciplinary Teaching: A Case Study in Collaboration between Theatre and Humanities Courses

    Science.gov (United States)

    Mahoney, Kristin; Brown, Rich

    2013-01-01

    We use an experimental course collaboration that occurred in the winter of 2012 as a case study for an approach to interdisciplinary collaboration between Theatre and Humanities courses, and we argue that the theatre methodology of "devising" can serve as a particularly rich locus for collaboration between Theatre students and other…

  2. Gestalt Processing in Human-Robot Interaction: A Novel Account for Autism Research

    Directory of Open Access Journals (Sweden)

    Maya Dimitrova

    2015-12-01

    Full Text Available The paper presents a novel analysis focused on showing that education is possible through robotic enhancement of the Gestalt processing in children with autism, which is not comparable to alternative educational methods such as demonstration and instruction provided solely by human tutors. The paper underlines the conceptualization of cognitive processing of holistic representations traditionally named in psychology as Gestalt structures, emerging in the process of human-robot interaction in educational settings. Two cognitive processes are proposed in the present study - bounding and unfolding - and their role in Gestalt emergence is outlined. The proposed theoretical approach explains novel findings of autistic perception and gives guidelines for design of robot-assistants to the rehabilitation process.

  3. Using Social Robots in Health Settings: Implications of Personalization on Human-Machine Communication

    Directory of Open Access Journals (Sweden)

    Lisa Tam and Rajiv Khosla

    2016-09-01

    Full Text Available In view of the shortage of healthcare workers and a growing aging population, it is worthwhile to explore the applicability of new technologies in improving the quality of healthcare and reducing its cost. However, it remains a challenge to deploy such technologies in environments where individuals have limited knowledge about how to use them. Thus, this paper explores how the social robots designed for use in health settings in Australia have sought to overcome some of the limitations through personalization. Deployed in aged care and home-based care facilities, the social robots are person-centered, emphasizing the personalization of care with human-like attributes (e.g., human appearances) to engage in reciprocal communication with users. While there have been debates over the advantages and disadvantages of personalization, this paper discusses the implications of personalization on the design of the robots for enhancing engagement, empowerment and enablement in health settings.

  4. Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales

    Science.gov (United States)

    Peschel, J.; Young, S. N.

    2017-12-01

    The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence to this growth is a broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing the appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners, in general, as well as those working in fields related to emerging technologies.

  5. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    future. A real version of Ava would not last long in a human world because she is basically a solipsist, who does not really care about humans. She cannot co-create the line humans walk along. The robots created as ‘perfect women’ (sex robots) today are very far from the ideal image of Ava...

  6. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
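    The fingertip-detection step can be caricatured as picking the hand points farthest from the palm centroid while enforcing a minimum mutual separation. A toy NumPy sketch under that assumption (the published method's point-cloud processing is considerably richer):

    ```python
    import numpy as np

    def find_fingertips(hand_cloud, n_fingers=5, min_sep=0.02):
        """Toy fingertip detector for an (N, 3) hand point cloud: fingertips
        are taken as the points farthest from the palm centroid, kept at
        least min_sep metres apart from one another."""
        centroid = hand_cloud.mean(axis=0)
        # Sort points by decreasing distance from the palm centroid.
        order = np.argsort(-np.linalg.norm(hand_cloud - centroid, axis=1))
        tips = []
        for idx in order:
            p = hand_cloud[idx]
            if all(np.linalg.norm(p - t) > min_sep for t in tips):
                tips.append(p)
            if len(tips) == n_fingers:
                break
        return np.array(tips)
    ```

    A real pipeline would first segment the hand from the scene (here assumed already done) and track the tips over frames before mapping them to the robotic hand's joints.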

  7. Human capital gains associated with robotic assisted laparoscopic pyeloplasty in children compared to open pyeloplasty.

    Science.gov (United States)

    Behan, James W; Kim, Steve S; Dorey, Frederick; De Filippo, Roger E; Chang, Andy Y; Hardy, Brian E; Koh, Chester J

    2011-10-01

    Robotic assisted laparoscopic pyeloplasty is an emerging, minimally invasive alternative to open pyeloplasty in children for ureteropelvic junction obstruction. The procedure is associated with smaller incisions and shorter hospital stays. To our knowledge previous outcome analyses have not included human capital calculations, especially regarding loss of parental workdays. We compared perioperative factors in patients who underwent robotic assisted laparoscopic and open pyeloplasty at a single institution, especially in regard to human capital changes, in an institutional cost analysis. A total of 44 patients 2 years old or older from a single institution underwent robotic assisted (37) or open (7) pyeloplasty from 2008 to 2010. We retrospectively reviewed the charts to collect demographic and perioperative data. The human capital approach was used to calculate parental productivity losses. Patients who underwent robotic assisted laparoscopic pyeloplasty had a significantly shorter average hospital length of stay (1.6 vs 2.8 days, p < …). Robotic assisted laparoscopic pyeloplasty was also associated with human capital gains, e.g., decreased lost parental wages, and lower hospitalization expenses. Future comparative outcome analyses in children should include financial factors such as human capital loss, which can be especially important for families with young children. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
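    The human capital approach used in this study values lost parental workdays at the wage that would otherwise have been earned. A minimal sketch (the daily wage, caregiver count, and the comparison below are illustrative assumptions, not the study's actual data):

    ```python
    def parental_productivity_loss(lost_workdays, daily_wage, caregivers=1):
        """Human capital approach: value each lost parental workday at the
        wage that would otherwise have been earned."""
        return lost_workdays * caregivers * daily_wage

    # Hypothetical comparison mirroring the shorter robotic stay (1.6 vs 2.8
    # days) with an assumed wage of $200/day for one caregiver:
    open_loss = parental_productivity_loss(2.8, 200.0)
    robotic_loss = parental_productivity_loss(1.6, 200.0)
    savings = open_loss - robotic_loss
    ```

    A fuller analysis would also count recovery days at home, second caregivers, and non-wage costs such as travel and lodging.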

  8. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2018-05-01

    Full Text Available Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field forces and contact forces) may affect the balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Linkages between the posture control mechanisms across DoFs, relevant for both versatility and robustness of the control, provide their functional cooperation and coordination at will and on functional demands; the suggested tests therefore include ankle-hip coordination.
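The sagittal-plane ankle balancing targeted by such tests is commonly idealized as a single inverted pendulum stabilized by a PD ankle torque. A minimal simulation sketch, with illustrative parameters and gains not taken from this record:

```python
import math

# Single inverted pendulum about the ankle joint, stabilized by a PD
# ankle torque. All parameters are illustrative assumptions.
m, l, g = 70.0, 1.0, 9.81        # body mass (kg), CoM height (m), gravity
kp, kd = 1200.0, 300.0           # PD gains; kp must exceed m*g*l to stabilize
theta, omega = 0.05, 0.0         # initial lean angle (rad) and angular rate
dt = 0.001

for _ in range(5000):            # simulate 5 s with explicit Euler steps
    tau = -kp * theta - kd * omega                    # corrective ankle torque
    alpha = (m * g * l * math.sin(theta) + tau) / (m * l * l)
    omega += alpha * dt
    theta += omega * dt

print(abs(theta) < 1e-3)         # the body settles near upright
```

With kp above the gravitational stiffness m*g*l (about 687 N·m/rad here), the upright posture is stable and the initial 0.05 rad lean decays within a few seconds.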

  9. Arousal regulation and affective adaptation to human responsiveness by a robot that explores and learns a novel environment.

    Science.gov (United States)

    Hiolle, Antoine; Lewis, Matthew; Cañamero, Lola

    2014-01-01

    In the context of our work in developmental robotics regarding robot-human caregiver interactions, in this paper we investigate how a "baby" robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a "caregiver" to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two "idealized" robot profiles, a "needy" and an "independent" robot, in terms of their use of a caregiver as a means to regulate the "stress" (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness, "responsive" and "non-responsive", to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the "needy" and "independent" axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot.

  10. Intelligent Interaction for Human-Friendly Service Robot in Smart House Environment

    Directory of Open Access Journals (Sweden)

    Z. Zenn Bien

    2008-01-01

    Full Text Available The smart house under consideration is a service-integrated complex system to assist older persons and/or people with disabilities. The primary goal of the system is to achieve independent living through various robotic devices and systems. Such a system is treated as a human-in-the-loop system in which human-robot interaction takes place intensely and frequently. Based on our experience of having designed and implemented a smart house environment, called Intelligent Sweet Home (ISH), we present a framework for realizing a human-friendly HRI (human-robot interaction) module with various effective techniques of computational intelligence. More specifically, we partition the robotic tasks of the HRI module into three groups in consideration of the level of specificity, fuzziness or uncertainty of the context of the system, and present an effective interaction method for each case. We first show a task planning algorithm and its architecture to deal with well-structured tasks autonomously by a simplified set of user commands instead of inconvenient manual operations. To provide the capability of interacting in a human-friendly way in a fuzzy context, it is proposed that the robot should make use of human bio-signals as input to the HRI module, as shown in a hand gesture recognition system called a soft remote control system. Finally, we discuss a probabilistic fuzzy rule-based life-long learning system, equipped with intention-reading capability obtained by learning human behavioral patterns, which is introduced as a solution for uncertain and time-varying situations.

  11. The NASA Human Research Wiki - An Online Collaboration Tool

    Science.gov (United States)

    Barr, Yael; Rasbury, Jack; Johnson, Jordan; Barstend, Kristina; Saile, Lynn; Watkins, Sharmi

    2012-01-01

    The Exploration Medical Capability (ExMC) element is one of six elements of the Human Research Program (HRP). ExMC is charged with decreasing the risk of "inability to adequately recognize or treat an ill or injured crew member" for exploration-class missions. In preparation for exploration-class missions, ExMC has compiled a large evidence base, previously available only to persons within the NASA community. ExMC has developed the "NASA Human Research Wiki" in an effort to make the ExMC information available to the general public and increase collaboration within and outside of NASA. The ExMC evidence base comprises several types of data, including: (1) information on more than 80 medical conditions which could occur during space flight, (a) derived from several sources, (b) including data on incidence and potential outcomes, as captured in the Integrated Medical Model's (IMM) Clinical Finding Forms (CliFFs); and (2) approximately 25 gap reports, which (a) identify any "gaps" in knowledge and/or technology that would need to be addressed in order to provide adequate medical support for these novel missions.

  12. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    Science.gov (United States)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in the shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends on both the levels of the robots' autonomy and the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchies of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOAs: specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.

  13. Presentation robot Advee

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, Stanislav; Hrbáček, J.; Ripel, T.; Ondroušek, V.; Hrbáček, R.; Schreiber, P.

    2012-01-01

    Roč. 18, 5/6 (2012), s. 307-322 ISSN 1802-1484 Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot * human-robot interface * localization Subject RIV: JD - Computer Applications, Robotics

  14. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  15. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    Science.gov (United States)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  16. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    Science.gov (United States)

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding.

  17. Natural Tasking of Robots Based on Human Interaction Cues (CD-ROM)

    National Research Council Canada - National Science Library

    Brooks, Rodney A

    2005-01-01

    ...: 1 CD-ROM; 4 3/4 in.; 207 MB. ABSTRACT: We proposed developing the perceptual and intellectual abilities of robots so that in the field, war-fighters can interact with them in the same natural ways as they do with their human cohorts...

  18. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Directory of Open Access Journals (Sweden)

    Mohammed Moshiul Hoque

    2014-01-01

    Full Text Available Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases. Therefore, the robot should perform some stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.

  19. Deactivation in the Sensorimotor Area during Observation of a Human Agent Performing Robotic Actions

    Science.gov (United States)

    Shimada, Sotaro

    2010-01-01

    It is well established that several motor areas, called the mirror-neuron system (MNS), are activated when an individual observes other's actions. However, whether the MNS responds similarly to robotic actions compared with human actions is still controversial. The present study investigated whether and how the motor area activity is influenced by…

  20. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    Science.gov (United States)

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks, which require the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
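The oscillator-based synchronization described in this record can be illustrated with a minimal Kuramoto-style sketch (a standard phase-coupling model, used here as a stand-in for the paper's own dynamics), in which the robot's phase variable entrains to a partner's rhythm. The frequencies and coupling gain are illustrative assumptions:

```python
import math

# Two phase oscillators, "robot" and "human", with different natural
# frequencies; only the robot adapts, via a sinusoidal coupling term.
w_robot, w_human = 2.0, 2.4      # natural frequencies (rad/s), assumed
k = 1.5                          # robot-side coupling gain, assumed
phi_r, phi_h = 0.0, 1.0          # initial phases (rad)
dt = 0.001

for _ in range(20000):           # simulate 20 s with Euler steps
    phi_r += (w_robot + k * math.sin(phi_h - phi_r)) * dt  # robot entrains
    phi_h += w_human * dt                                  # human keeps rhythm

# Phase lock exists because |w_human - w_robot| < k; the locked phase
# difference settles at asin((w_human - w_robot) / k).
diff = (phi_h - phi_r) % (2 * math.pi)
expected = math.asin((w_human - w_robot) / k)
print(abs(diff - expected) < 1e-3)
```

After the transient, the two rhythms run at the same rate with a constant phase offset, the basic signature of the phase synchronization the study measures.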

  1. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    Science.gov (United States)

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  2. Technological Dangers and the Potential of Human-Robot Interaction

    DEFF Research Database (Denmark)

    Nørskov, Marco

    2016-01-01

    The ethical debate on social robotics has become one of the cutting edge topics of our time. When it comes to both academic and non-academic debates, the methodological framework is, with few exceptions, typically and tacitly grounded in an us-versus-them perspective. It is as though we were...... of positioning with regard to HRI. It is argued that the process itself is an artifact with moral significance, and consequently tantamount to discrimination. Furthermore, influenced by Heidegger’s warnings concerning technology, this chapter explores the possibilities of HRI with respect to the accompanying...

  3. Sound over Matter: The Effects of Functional Noise, Robot Size and Approach Velocity in Human-Robot Encounters

    NARCIS (Netherlands)

    Joosse, M.P.; Lohse, M.; Evers, Vanessa

    2014-01-01

    In our previous work we introduced functional noise as a modality for robots to communicate intent [6]. In this follow-up experiment, we replicated the first study with a robot which was taller in order to find out if the same results would apply to a tall vs. a short robot. Our results show a

  4. Interdisciplinary Construction and Implementation of a Human sized Humanoid Robot by master students

    DEFF Research Database (Denmark)

    Helbo, Jan; Svendsen, Mads Sølver

    2009-01-01

    With limited funding, it seemed a very good idea to encourage master students to design and construct their own human-sized biped robot. Because this task is huge and highly interdisciplinary, the different areas of expertise were covered by students from different departments, who in turn took over results from...... former students. In the last three years, three student groups from the Department of Mechanical Engineering and the Department of Electronic Systems, respectively, have been working on the project. The robot AAU-BOT1 has been designed, manufactured, assembled and instrumented, and walking should be possible in the near...

  5. Introduction of symbiotic human-robot-cooperation in the steel sector: an example of social innovation

    Science.gov (United States)

    Colla, Valentina; Schroeder, Antonius; Buzzelli, Andrea; Abbà, Dario; Faes, Andrea; Romaniello, Lea

    2018-05-01

    The introduction of new technologies, which can support and empower human capabilities in a number of professional tasks while reducing the need for cumbersome operations and the exposure to risk and professional diseases, is nowadays perceived as a must in any industrial field, process industry included. However, despite their relevant potential, new technologies are not always easy to introduce into the professional environment. A design procedure which takes into account the workers' acceptance, needs and capabilities, as well as a continuing education and training process for the personnel who must exploit the innovation, is as fundamental as technical reliability for the successful introduction of any new technology into a professional environment. An exemplary case is provided by symbiotic human-robot-cooperation. In the steel sector, the difficulties in implementing symbiotic human-robot-cooperation are greater than in the manufacturing sector, due to environmental conditions which in some cases are not favorable to robots. On the other hand, the opportunities and potential advantages are also greater, as robots could replace human operators in repetitive, heavy tasks, thereby improving workers' health and safety. The present paper provides an example of the potential and opportunities of human-robot interaction and discusses how this approach can be included in a social innovation paradigm. Moreover, an example is provided of an ongoing project funded by the Research Fund for Coal and Steel, "ROBOHARSH", which aims at applying this approach in the steel industry to a very sensitive task, i.e. the replacement of the refractory components of the ladle sliding gate.

  6. Observation and Imitation of Actions Performed by Humans, Androids and Robots: An EMG study

    Directory of Open Access Journals (Sweden)

    Galit eHofree

    2015-06-01

    Full Text Available Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, action perception research has been facilitated by using well-controlled artificial stimuli, such as robots. One key question this approach enables us to address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion) and an Android (biological appearance, mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding.

  7. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    Science.gov (United States)

    2015-08-14

    agents (i.e., an entity of intelligent machine(s)) that automates some of the tasks regarded as mundane, dangerous, or laborious for a human being...

  8. Collaboration between Supported Employment and Human Resource Services: Strategies for Success

    Science.gov (United States)

    Post, Michal; Campbell, Camille; Heinz, Tom; Kotsonas, Lori; Montgomery, Joyce; Storey, Keith

    2010-01-01

    The article presents the benefits of successful collaboration between supported employment agencies and human resource managers when working together to secure employment for individuals with disabilities. Two case studies are presented: one involving a successful collaboration with county human resource managers in negotiating a change in the…

  9. Effects of robotic knee exoskeleton on human energy expenditure.

    Science.gov (United States)

    Gams, Andrej; Petric, Tadej; Debevec, Tadej; Babic, Jan

    2013-06-01

    A number of studies discuss the design and control of various exoskeleton mechanisms, yet relatively few address the effect on the energy expenditure of the user. In this paper, we discuss the effect of a performance-augmenting exoskeleton on the metabolic cost of an able-bodied user/pilot during periodic squatting. We investigated whether an exoskeleton device can significantly reduce the metabolic cost and what influence the chosen control strategy has. By measuring oxygen consumption, minute ventilation, heart rate, blood oxygenation, and muscle EMG during 5-min squatting series, at one squat every 2 s, we show the effects of using a prototype robotic knee exoskeleton under three different noninvasive control approaches: a gravity compensation approach, a position-based approach, and a novel oscillator-based approach. The latter ensures synchronization of the device and the user. A statistically significant decrease in physiological responses can be observed when using the robotic knee exoskeleton under gravity compensation and oscillator-based control. On the other hand, the effects of position-based control were not significant for all parameters, although all approaches significantly reduced the energy expenditure during squatting.
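Of the three control strategies, gravity compensation is the simplest to sketch: the device supplies the torque that cancels the gravitational load of the limb segment at the joint. A minimal illustration with assumed segment parameters, not values from the study:

```python
import math

def gravity_comp_torque(theta, m_shank=4.0, l_com=0.25, g=9.81):
    """Knee-joint torque (N*m) that cancels the gravitational load of the
    shank segment at knee angle theta (rad from vertical). The segment
    mass and CoM distance are illustrative assumptions."""
    return m_shank * g * l_com * math.sin(theta)

# At full extension (theta = 0) no support torque is needed;
# the required torque grows with flexion angle.
print(round(gravity_comp_torque(0.0), 3))
print(round(gravity_comp_torque(math.pi / 4), 3))
```

Because the compensation depends only on the measured joint angle, this approach needs no trajectory reference, unlike the position-based controller, and no phase estimate, unlike the oscillator-based one.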

  10. Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness

    Science.gov (United States)

    Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.

    2009-01-01

    Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
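The multilinear factor separation behind this approach can be sketched with a mode-n unfolding followed by an SVD (one step of HOSVD-style analysis); the toy tensor dimensions below are assumptions for illustration, not the paper's data:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor: rows index the chosen mode,
    columns enumerate all remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy tensor of facial images: 3 identities x 4 head poses x 50 pixels.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 50))

# The left singular vectors of the pose-mode unfolding give an
# orthonormal pose factor, separated from identity and pixel variation.
U_pose, _, _ = np.linalg.svd(unfold(T, 1), full_matrices=False)
print(U_pose.shape)                                      # (4, 4)
print(bool(np.allclose(U_pose.T @ U_pose, np.eye(4))))   # True
```

In a full multilinear model, each mode (identity, pose, illumination, pixels) gets such a factor, and a new image is classified by how it projects onto the pose factor irrespective of the other modes.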

  11. An Interactive Human Interface Arm Robot with the Development of Food Aid

    Directory of Open Access Journals (Sweden)

    NASHWAN D. Zaki

    2012-03-01

    Full Text Available A robotic system for disabled people who need support at meals is proposed. One feature of this system is that the robotic aid can communicate with the operator using speech recognition and speech synthesis functions. Another feature is that the robotic aid uses image processing, so the system can recognize the environmental situation of dishes, cups and so on. Thanks to this image-processing function, the operator does not need to specify the position and posture of the dishes and target objects. Furthermore, combining speech and image processing enables friendly man-machine communication, since speech and visual information are essential in human communication.

  12. Speech-Based Human and Service Robot Interaction: An Application for Mexican Dysarthric People

    Directory of Open Access Journals (Sweden)

    Santiago Omar Caballero Morales

    2013-01-01

    Full Text Available Dysarthria is a motor speech disorder due to weakness or poor coordination of the speech muscles. This condition can be caused by a stroke, traumatic brain injury, or by a degenerative neurological disease. Commonly, people with this disorder also have muscular dystrophy, which restricts their use of switches or keyboards for communication or for control of assistive devices (i.e., an electric wheelchair or a service robot). In this case, speech recognition is an attractive alternative for interaction with and control of service robots, despite the difficulty of achieving robust recognition performance. In this paper we present a speech recognition system for human and service robot interaction for Mexican Spanish dysarthric speakers. The core of the system consisted of a Speaker Adaptive (SA) recognition system trained with normal speech. Features such as on-line control of the language-model perplexity and the addition of vocabulary contribute to high recognition performance. Others, such as assessment and text-to-speech (TTS) synthesis, contribute to a more complete interaction with a service robot. Live tests were performed with two mildly dysarthric speakers, achieving recognition accuracies of 90–95% for spontaneous speech and 95–100% of accomplished simulated service robot tasks.

  13. Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    Science.gov (United States)

    Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin

    2011-01-01

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and

  14. Ninety-six hours to build a prototype robot showing human emotions

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    Thirty-five Master's students in the fields of business, design and engineering participated in an intensive five-day project-based introduction to programming and advanced electronics. The goal of the initiative was to build a fully functional prototype robot able to communicate and show at least four basic human emotions.    A group of students is presenting a prototype robot showing human emotions at IdeaSquare. With no previous experience in electronics or coding, groups of students from Portugal, Italy, Norway and Estonia were introduced to the basics of sensors, integrated circuits and actuators, and after just 96 hours they presented their functioning robots at IdeaSquare on Friday, 15 January. These robots, mostly built around Arduino boards and recycled materials, were able to display different human emotions as a response to external environmental inputs. The five-day workshop, called öBot, was organised by the IdeaSquare te...

  15. Developing an Adaptive Robotic Assistant for Close-Proximity Human-Robot Interaction in Space Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — As mankind continues making strides in space exploration and associated technologies, the frequency, duration, and complexity of human space exploration missions...

  16. Predicting the long-term effects of human-robot interaction: a reflection on responsibility in medical robotics.

    Science.gov (United States)

    Datteri, Edoardo

    2013-03-01

    This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (which will be exemplified here by reference to harms occurred in surgical interventions supported by the Da Vinci robot, reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are nothing but well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities: for this reason they do not raise particularly novel ethical issues. In contrast with this, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point will be illustrated here in connection with the rehabilitation robot Lokomat.

  17. Human aspects, gamification, and social media in collaborative software engineering

    NARCIS (Netherlands)

    Vasilescu, B.N.

    2014-01-01

    Software engineering is inherently a collaborative venture. In open-source software (OSS) development, such collaborations almost always span geographies and cultures. Because of the decentralised and self-directed nature of OSS as well as the social diversity inherent to OSS communities, the

  18. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Abdulaziz Abubshait

    2017-08-01

Full Text Available Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human–robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human–robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human–robot interaction. The results show that both appearance and behavior affect human–robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human–robot interaction are discussed.

  19. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  20. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
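The method's core building block, a recurrent network whose context layer carries the interaction state forward in time, can be sketched minimally as a single Elman-style update step. This is an illustrative NumPy sketch, not the authors' architecture; layer sizes and the random weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

class ElmanStep:
    """One step of a simple recurrent network: the context vector carries
    where-in-the-interaction information forward in time, the property the
    paper's learned internal dynamics rely on. Sizes and weights here are
    illustrative placeholders, not the authors' trained model."""

    def __init__(self, n_in=4, n_ctx=8, n_out=4):
        self.Wx = rng.normal(0.0, 0.3, (n_ctx, n_in))   # input -> context
        self.Wc = rng.normal(0.0, 0.3, (n_ctx, n_ctx))  # context recurrence
        self.Wo = rng.normal(0.0, 0.3, (n_out, n_ctx))  # context -> output
        self.c = np.zeros(n_ctx)                        # internal state

    def __call__(self, x):
        self.c = np.tanh(self.Wx @ x + self.Wc @ self.c)  # update context
        return self.Wo @ self.c                           # next output step
```

Because the context `c` persists between calls, repeated identical inputs can still yield different outputs, which is what lets such a network represent cyclic and fixed-point structures over an interaction.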

  1. Pragmatic Frames for Teaching and Learning in Human-Robot Interaction: Review and Challenges.

    Science.gov (United States)

    Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J; Oudeyer, Pierre-Yves

    2016-01-01

One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots yet are often accustomed to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used basic elements of the pragmatic-frames machinery in practice, though not always explicitly. However, we also show that pragmatic frames have so far been used in a very restricted way compared to how they are used in human-human interaction, and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on future research directions, describing the key challenges that need to be solved to leverage pragmatic frames for robot learning and teaching.

  2. Tele-operated search robot for human detection using histogram of oriented objects

    Science.gov (United States)

    Cruz, Febus Reidj G.; Avendaño, Glenn O.; Manlises, Cyrel O.; Avellanosa, James Jason G.; Abina, Jyacinth Camille F.; Masaquel, Albert M.; Siapno, Michael Lance O.; Chung, Wen-Yaw

    2017-02-01

Disasters such as typhoons, tornadoes, and earthquakes are inevitable, and their aftermaths include missing people. Using robots with human detection capabilities to locate the missing can dramatically reduce the harm and risk to those who work in such circumstances. This study aims to: design and build a tele-operated robot; implement in MATLAB an algorithm for the detection of humans; and create a database of human identification based on various positions, angles, light intensity, as well as distances from which humans will be identified. Different light intensities were made by using Photoshop to simulate smoke, dust and water-drop conditions. After processing the image, the system indicates whether a human is detected or not. Testing with covered bodies was also conducted to assess the algorithm's robustness. Based on the results, the algorithm can detect humans when the full body is shown. For upright and lying positions, detection can happen from 8 feet to 20 feet. For the sitting position, detection can happen from 2 feet to 20 feet, with slight variances in results because of different lighting conditions. At distances greater than 20 feet, humans cannot be detected or false negatives occur. For covered bodies, the algorithm can detect humans under the given circumstances. In three positions, humans can be detected from 0 degrees to 180 degrees under normal, smoke, dust, and water-droplet conditions. This study was able to design and build a tele-operated robot with a MATLAB algorithm that can detect humans with an overall precision of 88.30%, from which a database was created for human identification based on various conditions, where humans will be identified.
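The record's title points to histogram-of-oriented-gradients-style features. As an illustrative sketch (not the study's MATLAB code), the orientation histogram for a single image cell, the basic ingredient of such detectors, can be computed as follows; the 9-bin unsigned-orientation convention is a common default, not a value taken from the paper:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one image cell, as used in HOG-style
    human detectors (illustrative sketch, not the authors' implementation)."""
    cell = cell.astype(float)
    # Central-difference gradients (zero at the borders).
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, quantized into n_bins bins.
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    # Each pixel votes for its orientation bin, weighted by gradient magnitude.
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist
```

A full detector concatenates block-normalized cell histograms over a sliding window and feeds them to a classifier (e.g., a linear SVM).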

  3. Robots in human biomechanics--a study on ankle push-off in walking.

    Science.gov (United States)

    Renjewski, Daniel; Seyfarth, André

    2012-09-01

    In biomechanics, explanatory template models are used to identify the basic mechanisms of human locomotion. However, model predictions often lack verification in a realistic environment. We present a method that uses template model mechanics as a blueprint for a bipedal robot and a corresponding computer simulation. The hypotheses derived from template model studies concerning the function of heel-off in walking are analysed and discrepancies between the template model and its real-world anchor are pointed out. Neither extending the ground clearance of the swinging leg nor an impact reduction at touch-down as an effect of heel lifting was supported by the experiments. To confirm the relevance of the experimental findings, a comparison of robot data to human walking data is discussed and we speculate on an alternative explanation of heel-off in human walking, i.e. that the push-off powers the following leg swing.

  4. Robots in human biomechanics—a study on ankle push-off in walking

    International Nuclear Information System (INIS)

    Renjewski, Daniel; Seyfarth, André

    2012-01-01

    In biomechanics, explanatory template models are used to identify the basic mechanisms of human locomotion. However, model predictions often lack verification in a realistic environment. We present a method that uses template model mechanics as a blueprint for a bipedal robot and a corresponding computer simulation. The hypotheses derived from template model studies concerning the function of heel-off in walking are analysed and discrepancies between the template model and its real-world anchor are pointed out. Neither extending the ground clearance of the swinging leg nor an impact reduction at touch-down as an effect of heel lifting was supported by the experiments. To confirm the relevance of the experimental findings, a comparison of robot data to human walking data is discussed and we speculate on an alternative explanation of heel-off in human walking, i.e. that the push-off powers the following leg swing. (paper)

  5. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes. Includes numerous figures, tables and flow charts. Discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots.

  6. Estimation of Physical Human-Robot Interaction Using Cost-Effective Pneumatic Padding

    Directory of Open Access Journals (Sweden)

    André Wilkening

    2016-08-01

Full Text Available The idea of using cost-effective pneumatic padding for sensing physical interaction between a user and wearable rehabilitation robots is not new, but until now there has not been any practically relevant realization. In this paper, we present a novel method to estimate physical human-robot interaction using a pneumatic padding based on artificial neural networks (ANNs). This estimation can serve as a rough indicator of the forces/torques applied by the user and can be used for visual feedback about the user's participation or as additional information for interaction controllers. Unlike common, mostly very expensive six-axis force/torque sensors (FTS), the proposed sensor system can be easily integrated into the design of physical human-robot interfaces of rehabilitation robots and adapts itself to the shape of the individual patient's extremity by changing the pressure in its pneumatic chambers, in order to provide safe physical interaction with high user comfort. This paper describes a concept of using ANNs to estimate interaction forces/torques based on pressure variations in eight customized air-pad chambers. The ANNs were trained once offline using signals from a high-precision FTS, which is also used as the reference sensor for experimental validation. Experiments with three different subjects confirm the functionality of the concept and the estimation algorithm.
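The estimation step described above, a feedforward network mapping the eight chamber pressures to a six-component force/torque (wrench) estimate, can be sketched as follows. This is a hypothetical minimal network with illustrative layer sizes and untrained random weights; the paper trains its ANNs offline against a reference force/torque sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

class PressureToWrenchANN:
    """One-hidden-layer network mapping 8 air-pad pressures to a 6-component
    force/torque (wrench) estimate. Layer sizes and the random weights are
    illustrative placeholders; in the paper the ANNs are trained offline
    against a high-precision reference force/torque sensor."""

    def __init__(self, n_in=8, n_hidden=16, n_out=6):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def __call__(self, pressures):
        hidden = np.tanh(self.W1 @ pressures + self.b1)  # hidden activation
        return self.W2 @ hidden + self.b2                # estimated wrench
```

In use, one sample of eight chamber pressures goes in and a rough three-force/three-torque estimate comes out, suitable as a cheap stand-in for an FTS reading.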

  7. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have...... developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human, robot teachers mediate human...... existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose...

  8. Human-Like Behavior of Robot Arms: General Considerations and the Handwriting Task-Part I: Mathematical Description of Human-Like Motion: Distributed Positioning and Virtual Fatigue

    NARCIS (Netherlands)

    Potkonjak, V.; Tzafestas, S.; Kostic, D.; Djordjevic, G.

    2001-01-01

    This two-part paper is concerned with the analysis and achievement of human-like behavior by robot arms (manipulators). The analysis involves three issues: (i) the resolution of the inverse kinematics problem of redundant robots, (ii) the separation of the end-effector's motion into two components,

  9. Investigation of Virtual Digital Human and Robotic Device Technology Merger Complimented by Haptics and Autostereoscopic Displays, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — The proposed innovations conform precisely to the technology needs described in Subtopic T5.02, Robotics and Virtual Digital Human Technologies. Two potential areas...

  10. Control of the seven-degree-of-freedom upper limb exoskeleton for an improved human-robot interface

    Science.gov (United States)

    Kim, Hyunchul; Kim, Jungsuk

    2017-04-01

This study analyzes a practical scheme for controlling an exoskeleton robot with seven degrees of freedom (DOFs) that supports natural movements of the human arm. A redundant upper limb exoskeleton robot with seven DOFs is mechanically coupled to the human body such that it becomes a natural extension of the body. If the exoskeleton robot follows the movement of the human body synchronously, the energy exchange between the human and the robot will be reduced significantly. In order to achieve this, the redundancy of the human arm, which is represented by the swivel angle, should be resolved using appropriate constraints and applied to the robot. In a redundant 7-DOF upper limb exoskeleton, the pseudoinverse of the Jacobian with secondary objective functions is widely used to resolve the redundancy that defines the desired joint angles. A secondary objective function requires the desired joint angles for the movement of the human arm, and the angles are estimated by maximizing the projection of the longest principal axis of the manipulability ellipsoid for the human arm onto the virtual destination toward the head region. Then, they are fed into the muscle model with a relative damping to achieve more realistic robot-arm movements. Various natural arm movements are recorded using a motion capture system, and the actual swivel angle is compared to that estimated using the proposed swivel angle estimation algorithm. The results indicate that the proposed algorithm provides a precise reference for estimating the desired joint angle, with an error of less than 5°.
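The redundancy-resolution scheme mentioned above, the pseudoinverse of the Jacobian with a secondary objective, has a standard form: joint velocities are the pseudoinverse solution for the task plus the secondary objective's gradient projected into the Jacobian's null space, where it cannot disturb the end-effector motion. A generic NumPy sketch of that form (not the paper's implementation):

```python
import numpy as np

def redundancy_resolve(J, x_dot, grad_H, k=1.0):
    """Joint velocities for a redundant arm: track the task-space velocity
    x_dot via the Moore-Penrose pseudoinverse, and push the gradient of a
    secondary objective H through the null-space projector so it cannot
    disturb the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector
    return J_pinv @ x_dot + k * null_proj @ grad_H
```

For a 7-DOF arm with a 6-D task, `J` is 6x7 and the one-dimensional null space is exactly the swivel-angle freedom the secondary objective shapes.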

  11. Ghost-in-the-Machine reveals human social signals for human–robot interaction

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P.

    2015-01-01

    We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience. PMID:26582998

  12. ANALYSIS OF HUMAN INTUITION TOWARDS ARTIFICIAL INTUITION SYNTHESIS FOR ROBOTICS

    OpenAIRE

    Octavio Diaz-Hernandez; Victor J. Gonzalez-Villela

    2017-01-01

Human intuition is an unconscious mental process aimed at solving problems without using a rational decision-making process. Meanwhile, artificial intuition is a limited representation of human intuition that models the intuitive ability to solve problems so it can be implemented in machines. In this work, we performed an analysis of analogies between human and artificial intuition in terms of INPUTS, PROCESSING, and OUTPUTS. Mainly, we have focused on synthesizing algorithms that improve rob...

  13. Stochastic estimation of human shoulder impedance with robots: an experimental design.

    Science.gov (United States)

    Park, Kyungbin; Chang, Pyung Hun

    2011-01-01

Previous studies assumed the shoulder to be a hinge joint during human arm impedance measurement. This is obviously a vast simplification, since the shoulder is a complex of several joints with multiple degrees of freedom. In the present work, a practical methodology for a more general and realistic estimation of human shoulder impedance is proposed and validated with a spring array. It includes a gravity compensation scheme, developed and used for the experiments with a spatial three-degrees-of-freedom PUMA-type robot. The experimental results were accurate and reliable, and the approach has thus shown strong potential for the estimation of human shoulder impedance. © 2011 IEEE
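Impedance estimation of this kind typically reduces to fitting a linear mass-damper-spring model, f = M·a + B·v + K·x, to perturbation data. A minimal one-dimensional least-squares sketch (illustrative only; the paper's stochastic, multi-DOF shoulder estimation is richer):

```python
import numpy as np

def estimate_impedance(x, v, a, f):
    """Least-squares fit of a mass-damper-spring impedance model
    f = M*a + B*v + K*x from perturbation data (1-D illustrative case;
    the stochastic multi-DOF estimation in the paper is more involved)."""
    A = np.column_stack([a, v, x])           # regressor matrix [a v x]
    (M, B, K), *_ = np.linalg.lstsq(A, f, rcond=None)
    return M, B, K
```

Given recorded displacement, velocity, acceleration, and force traces from robot-applied perturbations, the fit returns the apparent inertia, damping, and stiffness.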

  14. Integration of robotic surgery into routine practice and impacts on communication, collaboration, and decision making: a realist process evaluation protocol

    Science.gov (United States)

    2014-01-01

    Background Robotic surgery offers many potential benefits for patients. While an increasing number of healthcare providers are purchasing surgical robots, there are reports that the technology is failing to be introduced into routine practice. Additionally, in robotic surgery, the surgeon is physically separated from the patient and the rest of the team, with the potential to negatively impact teamwork in the operating theatre. The aim of this study is to ascertain: how and under what circumstances robotic surgery is effectively introduced into routine practice; and how and under what circumstances robotic surgery impacts teamwork, communication and decision making, and subsequent patient outcomes. Methods and design We will undertake a process evaluation alongside a randomised controlled trial comparing laparoscopic and robotic surgery for the curative treatment of rectal cancer. Realist evaluation provides an overall framework for the study. The study will be in three phases. In Phase I, grey literature will be reviewed to identify stakeholders’ theories concerning how robotic surgery becomes embedded into surgical practice and its impacts. These theories will be refined and added to through interviews conducted across English hospitals that are using robotic surgery for rectal cancer resection with staff at different levels of the organisation, along with a review of documentation associated with the introduction of robotic surgery. In Phase II, a multi-site case study will be conducted across four English hospitals to test and refine the candidate theories. Data will be collected using multiple methods: the structured observation tool OTAS (Observational Teamwork Assessment for Surgery); video recordings of operations; ethnographic observation; and interviews. In Phase III, interviews will be conducted at the four case sites with staff representing a range of surgical disciplines, to assess the extent to which the results of Phase II are generalisable and to

  15. Collaborative Science: Human Sensor Networks for Real-time Natural Disaster Prediction

    Science.gov (United States)

Halem, M.; Yesha, Y.; Aulov, O.; Martineau, J.; Brown, S.; Conte, T.; Center for Hybrid Multicore Productivity Research

    2010-12-01

    processing systems used to extract the physical quantifiable data from the “human sensor network” such as natural language tools, the semantic web, image analysis techniques which can be employed to form a collaborative framework for other real time situation analysis undergoing similar natural or human caused disasters. We believe this innovative approach of extracting geophysical data from the social media sources is unprecedented in bridging geosciences with social sciences. In the near future, we plan on expanding the collaboration with researchers from University of Minnesota (U/MN) and Florida International University(FIU). Currently U/MN is working on a project of deploying aquabots (aquatic robots) in the Gulf in order to sample water properties at different depths as well as on the surface and FIU has developed a real time Terrafly processing system incorporating high resolution commercial and gov’t satellites and aircraft data.

  16. Challenges of Human-Robot Communication in Telerobotics

    Science.gov (United States)

    Bejczy, Antal K.

    1996-01-01

    Some general considerations are presented on bilateral human-telerobot control and information communication issues. Advances are reviewed related to the more conventional human-telerobot communication techniques, and some unconventional but promising communication methods are briefly discussed. Future needs and emerging application domains are briefly indicated.

  17. Evaluating Interdisciplinary Collaborative Learning and Assessment in the Creative Arts and Humanities

    Science.gov (United States)

    Miles, Melissa; Rainbird, Sarah

    2015-01-01

    This article responds to the rising emphasis placed on interdisciplinary collaborative learning and its implications for assessment in higher education. It presents findings from a research project that examined the effectiveness of an interdisciplinary collaborative student symposium as an assessment task in an art school/humanities environment.…

  18. Collaborative Educational Leadership: The Emergence of Human Interactional Sense-Making Process as a Complex System

    Science.gov (United States)

    Jäppinen, Aini-Kristiina

    2014-01-01

    The article aims at explicating the emergence of human interactional sense-making process within educational leadership as a complex system. The kind of leadership is understood as a holistic entity called collaborative leadership. There, sense-making emerges across interdependent domains, called attributes of collaborative leadership. The…

  19. How do walkers behave when crossing the way of a mobile robot that replicates human interaction rules?

    Science.gov (United States)

    Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien

    2018-02-01

Previous studies showed the existence of implicit interaction rules shared by human walkers when crossing each other. In particular, each walker contributes to the collision avoidance task, and the crossing order, as set at the beginning, is preserved throughout the interaction. This order determines the adaptation strategy: the first arrived increases his/her advance by slightly accelerating and changing his/her heading, whereas the second one slows down and moves in the opposite direction. In this study, we analyzed the behavior of human walkers crossing the trajectory of a mobile robot that was programmed to reproduce this human avoidance strategy. In contrast with a previous study, which showed that humans mostly prefer to give the way to a non-reactive robot, we observed similar behaviors between human-human avoidance and human-robot avoidance when the robot replicates the human interaction rules. We discuss this result in relation to the importance of controlling robots in a human-like way in order to ease their cohabitation with humans. Copyright © 2017 Elsevier B.V. All rights reserved.
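The avoidance strategy described, where the first-arrived walker slightly accelerates and deviates while the second slows down and deviates the opposite way, can be stated as a simple rule. The adjustment magnitudes below are illustrative placeholders, not values measured in the study:

```python
def crossing_adjustment(gives_way, speed, heading, dv=0.1, dtheta=0.1):
    """Human-like crossing rule from the study: the first-arrived walker
    slightly accelerates and changes heading to increase its advance, while
    the second slows down and deviates the opposite way. The magnitudes dv
    (m/s) and dtheta (rad) are illustrative placeholders, not values from
    the paper."""
    if gives_way:  # second at the crossing point: yield
        return speed - dv, heading - dtheta
    return speed + dv, heading + dtheta  # first at the crossing point: pass
```

A robot controller replicating the rule would pick `gives_way` once, from the predicted crossing order at the start of the interaction, and hold it for the rest of the encounter.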

  20. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications, and networking capabilities and technologies for the future human and robotics missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration, in the vicinity of Earth, Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  1. What makes robots social? : A user’s perspective on characteristics for social human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2015-01-01

A common description of a social robot is that it should be capable of communicating in a humanlike manner. However, a description of what communicating in a ‘humanlike manner’ means often remains unspecified. This paper provides a set of social behaviors and specific features that social robots should exhibit to communicate in a humanlike manner.

  2. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot intended to take over human daily work activities in factories or under extreme environments must operate in ordinary human work environments, so a humanoid robot of similar size, shape, and function to a human being is desirable. Such a robot, having a head with two eyes, two ears, and a mouth, can hold a conversation with humans, can walk on two legs under autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with humans, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (real-time remote-control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  3. A novel semi-automatic snake robot for natural orifice transluminal endoscopic surgery: preclinical tests in animal and human cadaver models (with video).

    Science.gov (United States)

    Son, Jaebum; Cho, Chang Nho; Kim, Kwang Gi; Chang, Tae Young; Jung, Hyunchul; Kim, Sung Chun; Kim, Min-Tae; Yang, Nari; Kim, Tae-Yun; Sohn, Dae Kyung

    2015-06-01

    Natural orifice transluminal endoscopic surgery (NOTES) is an emerging surgical technique. We aimed to design, create, and evaluate a new semi-automatic snake robot for NOTES. The snake robot employs the characteristics of both a manual endoscope and a multi-segment snake robot. This robot is inserted and retracted manually, like a classical endoscope, while its shape is controlled using embedded robot technology. The feasibility of a prototype robot for NOTES was evaluated in animals and human cadavers. The transverse stiffness and maneuverability of the snake robot appeared satisfactory. It could be advanced through the anus as far as the peritoneal cavity without any injury to adjacent organs. Preclinical tests showed that the device could navigate the peritoneal cavity. The snake robot has advantages of high transverse force and intuitive control. This new robot may be clinically superior to conventional tools for transanal NOTES.

  4. Language for action: Motor resonance during the processing of human and robotic voices.

    Science.gov (United States)

    Di Cesare, G; Errante, A; Marchi, M; Cuccio, V

    2017-11-01

    In this fMRI study we evaluated whether the auditory processing of action verbs pronounced by a human or a robotic voice in the imperative mood differently modulates the activation of the mirror neuron system (MNS). The study produced three results. First, the activation pattern found during listening to action verbs was very similar in both the robot and human conditions. Second, the processing of action verbs compared to abstract verbs determined the activation of the fronto-parietal circuit classically involved in action goal understanding. Third, and most importantly, listening to action verbs compared to abstract verbs produced activation of the anterior part of the supramarginal gyrus (aSMG) regardless of the condition (human or robot) and in the absence of any object name. The supramarginal gyrus is a region considered to underpin hand-object interaction and associated with the processing of affordances. These results suggest that listening to action verbs may trigger the recruitment of motor representations characterizing affordances and action execution, consistent with the predictive nature of motor simulation, which not only allows us to re-enact motor knowledge to understand others' actions but also prepares us for the actions we might need to carry out. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    Directory of Open Access Journals (Sweden)

    Hooman Samani

    2013-12-01

    Full Text Available In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social robots into cultural robots in the 21st century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of robotics developers and the learning ability of current robots, cultural attributes are in the process of being formed, which would define the new concept of cultural robotics. Given the importance of robot embodiment to the sense of presence, the influence of robots on communication culture is anticipated. The sustainability of a robotics culture based on diversity across cultural communities and acceptance modalities is explored, in order to anticipate the creation of different attributes of culture between robots and humans in the future.

  6. Movement Performance of Human-Robot Cooperation Control Based on EMG-Driven Hill-Type and Proportional Models for an Ankle Power-Assist Exoskeleton Robot.

    Science.gov (United States)

    Ao, Di; Song, Rong; Gao, JinWu

    2017-08-01

    Although the merits of electromyography (EMG)-based control of powered assistive systems have been certified, the factors that affect the performance of EMG-based human-robot cooperation, which are very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve the goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). The two control models could estimate the real-time ankle joint torque, and HNM is more accurate and can account for the change of the joint angle and muscle dynamics. Then, eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With the various levels of assist based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistant torque and kinematic signals to characterize the movement performances, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect the muscular efforts. The results demonstrated that the muscular effort and smoothness of tracking movements decreased with an increase in the assistant ratio. Compared with LPM, subjects made lower physical efforts and generated smoother movements when using HNM, which implied that a more physiologically appropriate model could enable more natural and human-like human-robot cooperation and has potential value for improvement of human-exoskeleton interaction in future applications.
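    The contrast between the two control models can be sketched in miniature. This is an illustrative toy, not the study's calibrated models: the gain, maximum isometric force, moment arm, and shape functions below are invented placeholders.

    ```python
    import math

    def lpm_torque(emg, gain=30.0):
        # Linear proportional model (LPM): joint torque scales directly with
        # the normalized EMG amplitude, regardless of joint state.
        return gain * emg

    def hill_torque(emg, norm_len=1.0, norm_vel=0.0, f_max=800.0, moment_arm=0.05):
        # Simplified Hill-type estimate: activation x force-length x
        # force-velocity x maximum isometric force x moment arm. The shape
        # functions here are toy curves standing in for calibrated muscle curves.
        activation = emg                                      # assume EMG ~ activation
        f_length = math.exp(-((norm_len - 1.0) ** 2) / 0.45)  # peaks at optimal fiber length
        f_velocity = max(0.0, 1.0 - 0.3 * norm_vel)           # force drops while shortening
        return activation * f_length * f_velocity * f_max * moment_arm
    ```

    Unlike the LPM, the Hill-type estimate changes with fiber length (and hence joint angle) and contraction velocity, which is the physiological sensitivity the study credits for the smoother, lower-effort tracking.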

  7. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human-Robot Interaction.

    Science.gov (United States)

    Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J

    2018-02-26

    The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, to distinguish human limbs from other objects with similar shapes. Here, the integration of a flexible tactile sensor into adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs). A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and against a support vector machine (SVM) classifier. Finally, a two-level output network has been proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers register a higher number of non-null tactels (up to 37% more) with a lower mean pressure (up to 72% less) than a rigid sensor, giving the softer grip needed in physical human-robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though human/non-human classification performed better (98.78%) with a rigid sensor.

  8. Humans Need Not Apply: Robotization of Kepler Planet Candidate Vetting

    Science.gov (United States)

    Coughlin, Jeffrey; Mullally, Fergal; Thompson, Susan E.; Kepler Team

    2015-01-01

    Until now, the vast majority of Kepler planet candidate vetting has been performed by a dedicated team of humans. While human expertise has been invaluable in understanding the nuances of Kepler data, human vetting is very time-consuming and can be inconsistent. Over 20,000 threshold crossing events have been produced by the latest pipeline run on all 17 quarters of Kepler mission data, and many more artificial planet transits have been injected to estimate completeness. Given these large numbers, human vetting is no longer feasible on a reasonable time-scale, and would be difficult to characterize. We have created automated vetting programs known as "robovetters" that are specifically designed to mimic the decision-making process employed by the humans. They analyze both the light curve and pixel-level data in order to produce specific reasons for identifying false positives. We present benchmark tests on the Q1-Q16 Kepler planet catalog, which was vetted by humans, and present preliminary robovetter results based on a recent transit-search of the newly reprocessed Q1-Q17 data set.
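    The decision style described above, each rejection backed by a specific machine-checkable reason, can be illustrated with a toy rule set. The metric names and thresholds below are hypothetical stand-ins, not the actual robovetter tests.

    ```python
    def robovet(metrics):
        # Toy rule-based vetter: each test mimics one human vetting criterion
        # and records a specific reason for rejection, as the robovetters do.
        flags = []
        if metrics["v_shape_stat"] > 0.8:           # V-shaped dip: eclipsing binary
            flags.append("STELLAR_ECLIPSE")
        if metrics["secondary_sigma"] > 3.0:        # significant secondary eclipse
            flags.append("SIGNIFICANT_SECONDARY")
        if metrics["centroid_offset_sigma"] > 3.0:  # transit source is off-target
            flags.append("CENTROID_OFFSET")
        if metrics["snr"] < 7.1:                    # too weak to vet reliably
            flags.append("LOW_SNR")
        return ("FALSE POSITIVE" if flags else "PLANET CANDIDATE"), flags
    ```

    Because every false positive carries explicit flags, the vetting is consistent and, crucially for completeness studies, its behavior on injected transits can be characterized exactly.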

  9. Modelling Engagement in Multi-Party Conversations : Data-Driven Approaches to Understanding Human-Human Communication Patterns for Use in Human-Robot Interactions

    OpenAIRE

    Oertel, Catharine

    2016-01-01

    The aim of this thesis is to study human-human interaction in order to provide virtual agents and robots with the capability to engage in multi-party conversations in a human-like manner. The focus lies on the modelling of conversational dynamics and the appropriate realization of multi-modal feedback behaviour. For such an undertaking, it is important to understand how human-human communication unfolds in varying contexts and constellations over time. To this end, multi-modal human-human...

  10. Collaborative Work and the Future of Humanities Teaching

    Science.gov (United States)

    Ullyot, Michael; O'Neill, Kate E.

    2016-01-01

    This article explores the degree to which student collaborations on research and writing assignments can effectively realize learning outcomes. The assignment, in this case, encouraged students to contribute discrete parts of a research project in order to develop their complementary abilities: researching, consulting, drafting, and revising. The…

  11. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    Science.gov (United States)

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
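    The recurrent network used here is an echo state network: a fixed random reservoir whose only trained part is a linear readout. A minimal sketch, with arbitrary sizes and random data standing in for the word-coded sentences and predicate-argument targets of the actual corpus:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 5, 50, 3               # hypothetical dimensions

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: echo state property

    def run_reservoir(inputs):
        # Drive the fixed random reservoir with an input sequence; the reservoir
        # itself is never trained, which keeps learning cheap.
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        # Ridge regression from reservoir states to target outputs.
        return np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                               states.T @ targets)

    seq = rng.uniform(-1, 1, (20, n_in))        # stand-in for a coded sentence
    targets = rng.uniform(-1, 1, (20, n_out))   # stand-in for meaning representations
    S = run_reservoir(seq)
    W_out = train_readout(S, targets)
    pred = S @ W_out
    ```

    The same reservoir can be run "forward" (sentence in, meaning out) or retrained in the inverse direction for production, which is the dual use the abstract describes.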

  12. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    Science.gov (United States)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost than a non-amputee, even when the non-amputee’s ankle torques are assumed to be cost-free.
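    The weighted-sum scalarization behind the Pareto front can be sketched with toy cost models. The quadratic forms and the scalar "assistance" parameter `p` below are invented for illustration; the paper optimizes full musculoskeletal and actuator models.

    ```python
    def human_cost(p):
        # Toy metabolic model: assistance p lowers effort up to a sweet spot (0.7).
        return (p - 0.7) ** 2 + 1.0

    def prosthesis_cost(p):
        # Toy actuator model: energy use grows with the level of assistance.
        return 0.5 * p ** 2

    def best_design(w):
        # Scalarized objective: minimize w*human + (1-w)*prosthesis over a grid.
        grid = [i / 1000 for i in range(1001)]
        return min(grid, key=lambda p: w * human_cost(p) + (1 - w) * prosthesis_cost(p))

    # Sweeping the weight w traces out the Pareto front between the two costs.
    pareto = [(human_cost(p), prosthesis_cost(p))
              for p in (best_design(w / 10) for w in range(1, 10))]
    ```

    As the weight on human metabolic cost grows, the optimizer buys lower human cost at the price of higher prosthesis cost; no point on the front can improve one cost without worsening the other, which is what "Pareto optimal" means here.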

  13. Robotic Reconnaissance Missions to Small Bodies and Their Potential Contributions to Human Exploration

    Science.gov (United States)

    Abell, P. A.; Rivkin, A. S.

    2015-01-01

    Introduction: Robotic reconnaissance missions to small bodies will directly address aspects of NASA's Asteroid Initiative and will contribute to future human exploration. The NASA Asteroid Initiative is comprised of two major components: the Grand Challenge and the Asteroid Mission. The first component, the Grand Challenge, focuses on protecting Earth's population from asteroid impacts by detecting potentially hazardous objects with enough warning time to either prevent them from impacting the planet, or to implement civil defense procedures. The Asteroid Mission involves sending astronauts to study and sample a near-Earth asteroid (NEA) prior to conducting exploration missions of the Martian system, which includes Phobos and Deimos. The science and technical data obtained from robotic precursor missions that investigate the surface and interior physical characteristics of an object will help identify the pertinent physical properties that will maximize operational efficiency and reduce mission risk for both robotic assets and crew operating in close proximity to, or at the surface of, a small body. These data will help fill crucial strategic knowledge gaps (SKGs) concerning asteroid physical characteristics that are relevant for human exploration considerations at similar small body destinations. Small Body Strategic Knowledge Gaps: For the past several years NASA has been interested in identifying the key SKGs related to future human destinations. These SKGs highlight the various unknowns and/or data gaps of targets that the science and engineering communities would like to have filled in prior to committing crews to explore the Solar System. An action team from the Small Bodies Assessment Group (SBAG) was formed specifically to identify the small body SKGs under the direction of the Human Exploration and Operations Missions Directorate (HEOMD), given NASA's recent interest in NEAs and the Martian moons as potential human destinations [1]. The action team

  14. Two-Stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available The Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for a wide range of applications, including gesture representation. Most research in this field, however, uses HMMs only for recognizing simple gestures, while HMMs can certainly be applied to recognizing the meaning of whole gestures. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach to HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is a two-stage Hidden Markov Model. The first-stage HMM recognizes prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second-stage HMM performs task recognition. Another contribution of this paper is the use of Gaussian mixture output distributions in the HMM to improve the recognition rate. In the experiment, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare against other methods to evaluate performance.
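    A minimal sketch of the two-stage idea, using single-state discrete HMMs and made-up symbol streams in place of the paper's Gaussian-mixture observation models:

    ```python
    import math

    def forward_loglike(obs, pi, A, B):
        # Forward algorithm for a discrete HMM, scaled each step to avoid underflow.
        n = len(pi)
        alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
        ll = 0.0
        for t in range(1, len(obs)):
            c = sum(alpha)
            ll += math.log(c)
            alpha = [x / c for x in alpha]
            alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                     for j in range(n)]
        return ll + math.log(sum(alpha))

    def classify(obs, models):
        # Maximum-likelihood classification over a bank of HMMs (one per class).
        return max(models, key=lambda name: forward_loglike(obs, *models[name]))

    # Stage 1: one toy single-state HMM per prime gesture, over 2 feature symbols.
    gesture_models = {"wave":  ([1.0], [[1.0]], [[0.9, 0.1]]),
                      "point": ([1.0], [[1.0]], [[0.1, 0.9]])}
    gesture_index = {"wave": 0, "point": 1}

    # Stage 2: one HMM per task, whose observations are recognized gesture labels.
    task_models = {"greet": ([1.0], [[1.0]], [[0.8, 0.2]]),
                   "fetch": ([1.0], [[1.0]], [[0.2, 0.8]])}

    raw_sequences = [[0, 0, 1, 0], [0, 0, 0], [1, 1, 0, 1]]  # observed symbol streams
    gestures = [classify(seq, gesture_models) for seq in raw_sequences]
    task = classify([gesture_index[g] for g in gestures], task_models)
    ```

    The key design point is that the second stage never sees raw sensor data: it operates entirely on the label sequence emitted by the first stage, which is what lets one model recognize the whole task from its constituent gestures.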

  15. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    Full Text Available In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
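    The quaternion features used as classifier input can be sketched from axis-angle joint rotations. The joint list is invented for illustration, and the downstream NN classifier is omitted:

    ```python
    import math

    def axis_angle_to_quaternion(axis, angle):
        # Convert a joint rotation (axis, angle in radians) to a unit quaternion
        # (w, x, y, z); quaternions avoid gimbal lock and give smooth, bounded
        # features compared to raw Euler angles.
        n = math.sqrt(sum(a * a for a in axis))
        s = math.sin(angle / 2.0)
        return (math.cos(angle / 2.0),
                axis[0] / n * s, axis[1] / n * s, axis[2] / n * s)

    def skeleton_features(joints):
        # Concatenate per-joint quaternions into one feature vector per frame;
        # this vector would feed the gesture classifier (an NN in the paper).
        feats = []
        for axis, angle in joints:
            feats.extend(axis_angle_to_quaternion(axis, angle))
        return feats
    ```

    Each tracked joint contributes four components, so a skeleton with k joints yields a 4k-dimensional feature vector per frame.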

  16. Human activity understanding for robot-assisted living

    NARCIS (Netherlands)

    Hu, N.

    2016-01-01

    This thesis investigated the problem of understanding human activities, at different levels of granularity and taking into account both the variability in activities and annotator disagreement. To be able to capture the large variations within each of the action classes, we propose a model that uses

  17. Toward a unified method for analysing and teaching Human Robot Interaction

    DEFF Research Database (Denmark)

    Dinesen, Jens Vilhelm

    This abstract presents key aspects of a future paper outlining the ongoing development of a unified method for analysing and teaching Human-Robot Interaction. The paper will propose a novel method for analysing HRI, interaction with other forms of technology, and interaction with fellow humans, drawing on key theories and methods from both communication and interaction theory. The aim is to provide a single unified method for analysing interaction by means of video analysis, then applying theories with proven mutual compatibility to reach the desired granularity of study.

  18. Designing Emotionally Expressive Robots

    DEFF Research Database (Denmark)

    Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna

    2017-01-01

    Socially assistive agents, be they virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper, a highly human-like robot, and Hobbit, a robot… for robots.

  19. Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    2017-11-01

    Full Text Available In this paper, we developed a fully textile sensing fabric for tactile touch sensing, serving as a robot skin to detect human-robot interactions. The sensor covers a 20 × 20 cm² area with 400 sensitive points sampled at 50 Hz per point. We defined seven gestures inspired by the social and emotional interactions of typical people-to-people or people-to-pet scenarios. We conducted two groups of mutually blinded experiments involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with 93.3% accuracy on a known group of participants, and 89.1% on strangers.
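    The two-stage feature pipeline, spatial frame descriptors followed by temporal statistics, might look like the sketch below. The descriptor choices are illustrative, and the wavelet stage is omitted:

    ```python
    def frame_descriptor(frame, threshold=0.0):
        # Reduce one pressure frame (e.g. 20x20 taxels) to a few spatial
        # descriptors: peak, total pressure, contact area, and pressure centroid.
        vals = [p for row in frame for p in row]
        active = [p for p in vals if p > threshold]
        total = sum(active)
        if total:
            r = sum(i * p for i, row in enumerate(frame)
                    for p in row if p > threshold) / total
            c = sum(j * p for row in frame
                    for j, p in enumerate(row) if p > threshold) / total
        else:
            r = c = len(frame) / 2.0          # no contact: default to the center
        return [max(vals), total, len(active), r, c]

    def temporal_features(frames):
        # Basic statistics of each descriptor over time (the paper adds wavelet
        # features on top of these).
        descs = [frame_descriptor(f) for f in frames]
        feats = []
        for col in zip(*descs):
            mean = sum(col) / len(col)
            var = sum((x - mean) ** 2 for x in col) / len(col)
            feats += [mean, var, max(col), min(col)]
        return feats
    ```

    Collapsing each 400-taxel frame to a short descriptor before the temporal stage keeps the classifier input small enough to run at the 50 Hz sampling rate.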

  20. Robust Control of a Cable-Driven Soft Exoskeleton Joint for Intrinsic Human-Robot Interaction.

    Science.gov (United States)

    Jarrett, C; McDaid, A J

    2017-07-01

    A novel, cable-driven soft joint is presented for use in robotic rehabilitation exoskeletons to provide intrinsic, comfortable human-robot interaction. The torque-displacement characteristics of the soft elastomeric core contained within the joint are modeled. This knowledge is used in conjunction with a dynamic system model to derive a sliding mode controller (SMC) to implement low-level torque control of the joint. The SMC controller is experimentally compared with a baseline feedback-linearised proportional-derivative controller across a range of conditions and shown to be robust to un-modeled disturbances. The torque controller is then tested with six healthy subjects while they perform a selection of activities of daily living, which has validated its range of performance. Finally, a case study with a participant with spastic cerebral palsy is presented to illustrate the potential of both the joint and controller to be used in a physiotherapy setting to assist clinical populations.
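    The structure of such a sliding mode controller can be sketched on a toy first-order torque model. The plant constants, gain, and disturbance below are invented for illustration, not the identified elastomeric-joint dynamics:

    ```python
    import math

    # Toy first-order joint torque model: tau_dot = a*tau + b*u + d(t),
    # where d(t) is an unmodeled but bounded disturbance.
    a, b, K, dt = -2.0, 5.0, 8.0, 0.001

    def smc_step(tau, tau_d, tau_d_dot, t):
        s = tau - tau_d                                   # sliding surface: torque error
        sign = 1.0 if s > 0 else -1.0 if s < 0 else 0.0
        # Equivalent control cancels the known dynamics; the switching term
        # rejects any disturbance with |d| < K, which gives SMC its robustness.
        u = (-a * tau + tau_d_dot - K * sign) / b
        d = 0.5 * math.sin(20.0 * t)                      # bounded disturbance
        return tau + dt * (a * tau + b * u + d)           # Euler integration step

    tau, history = 0.0, []
    for k in range(3000):
        t = k * dt
        tau = smc_step(tau, math.sin(t), math.cos(t), t)  # track tau_d = sin(t)
        history.append(abs(tau - math.sin(t + dt)))
    ```

    The tracking error stays inside a thin boundary layer around the sliding surface despite the disturbance; in practice the discontinuous `sign` term is usually smoothed to limit chattering in hardware.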