WorldWideScience

Sample records for human collaborative robot

  1. Communication of Robot Status to Improve Human-Robot Collaboration

    Data.gov (United States)

    National Aeronautics and Space Administration — Future space exploration will require humans and robots to collaborate to perform all the necessary tasks. Current robots mostly operate separately from humans due...

  2. An Integrated Framework for Human-Robot Collaborative Manipulation.

    Science.gov (United States)

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
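
    The record does not include the controller equations, but the confidence-based leader/follower switching it describes can be illustrated with a minimal sketch. Everything below (gains, threshold, blending weights, function names) is an assumed illustration, not the authors' implementation: a reactive term keeps the table level, a proactive term follows the predicted human motion, and the predictor's confidence decides which dominates.

```python
def blended_command(table_tilt_rad, predicted_human_vel, confidence,
                    k_reactive=2.0, conf_threshold=0.7):
    """Return a vertical end-effector velocity command for the robot.

    table_tilt_rad      -- current table inclination (positive = robot side low)
    predicted_human_vel -- predicted vertical hand velocity of the human [m/s]
    confidence          -- motion predictor's confidence in [0, 1]
    All gains and the threshold are illustrative assumptions.
    """
    reactive = k_reactive * table_tilt_rad      # follower: level the table
    proactive = predicted_human_vel             # leader: move with the prediction
    if confidence >= conf_threshold:
        # High confidence: act as leader, with a small reactive correction.
        return 0.8 * proactive + 0.2 * reactive
    # Low confidence: act as follower and only keep the table horizontal.
    return reactive

# Example: confident prediction that the human is lifting at 0.05 m/s.
print(blended_command(table_tilt_rad=0.02, predicted_human_vel=0.05, confidence=0.9))
```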

  3. Human-robot collaboration for a shared mission

    OpenAIRE

    Karami , Abir-Beatrice; Jeanpierre , Laurent; Mouaddib , Abdel-Illah

    2010-01-01

    International audience; We are interested in collaboration domains between a robot and a human partner, where the partners share a common mission without explicit communication about their plans. The decision process of the robot agent should consider the presence of its human partner. Also, the robot's planning should be flexible with respect to human comfort and all possible changes in the shared environment. To solve the problem of human-robot collaboration with no communication, we present a model th...

  4. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  5. Learning Semantics of Gestural Instructions for Human-Robot Collaboration.

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions.
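
    The two records above name the key properties of PIL (probabilistic, incremental, proactive) without giving its update rule. The sketch below is a hedged stand-in: a count-based conditional probability table over gesture-action pairs, updated on the fly, with the most probable action chosen proactively. The gesture and action names are invented for illustration.

```python
from collections import defaultdict

class GestureActionLearner:
    """Toy incremental learner of P(action | gesture) from observed counts."""

    def __init__(self, actions):
        self.actions = list(actions)
        # Laplace prior of 1 per action so unseen gestures give a uniform guess.
        self.counts = defaultdict(lambda: {a: 1 for a in self.actions})

    def update(self, gesture, action):
        """Incremental step: record that `gesture` was followed by `action`."""
        self.counts[gesture][action] += 1

    def predict(self, gesture):
        """Proactive step: most probable action for `gesture` and its probability."""
        row = self.counts[gesture]
        total = sum(row.values())
        action = max(row, key=row.get)
        return action, row[action] / total

learner = GestureActionLearner(actions=["give_part", "hold_part", "stop"])
for _ in range(5):                       # five confirming interactions
    learner.update("point_at_part", "give_part")
print(learner.predict("point_at_part"))  # ('give_part', 0.75)
```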

  6. An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Peidong Liang

    2016-01-01

    Full Text Available Human-robot collaboration (HRC) is a key feature to distinguish the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve human and robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach to make a robot understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometry kinematics approach to map the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by employing a geometry vector approach based on Kinect. A biomimetic controller in discrete time is employed to make the Baxter robot arm imitate human arm behaviors based on the Baxter robot dynamics. An object-moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, which may have various applications in human-robot collaboration scenarios.
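
    A rough sketch of the kind of discrete-time, stiffness-modulated behavior the record describes is given below. It is not the paper's biomimetic controller; it only shows how a gesture-controlled stiffness index could scale an impedance-like tracking law in discrete time, with the mass, gains, and reference trajectory all assumed.

```python
import numpy as np

def simulate(stiffness_index, steps=200, dt=0.01):
    """Discrete-time impedance-like tracking of a 1-D human-guided reference.

    stiffness_index in [0, 1]: 0 = very compliant, 1 = very stiff
    (in the record this index is modulated by the operator's hand gesture).
    All numeric parameters are assumptions for illustration.
    """
    k = 20.0 + 180.0 * stiffness_index       # assumed stiffness range [N/m]
    d = 2.0 * np.sqrt(k)                     # near-critical damping [Ns/m]
    m = 1.0                                  # assumed effective mass [kg]
    x, v = 0.0, 0.0
    for i in range(steps):
        x_ref = 0.1 * np.sin(np.pi * i * dt)  # stand-in for captured human motion
        a = (k * (x_ref - x) - d * v) / m     # impedance law
        v += a * dt                           # explicit Euler integration
        x += v * dt
    return x

# A stiffer setting tracks the human-guided reference more closely.
print(simulate(stiffness_index=0.2), simulate(stiffness_index=0.9))
```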

  7. Trends in control and decision-making for human-robot collaboration systems

    CERN Document Server

    Zhang, Fumin

    2017-01-01

    This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. A measure of human collaboration being necessary for the optimal operation of any robotic system, the contributors exploit a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how such human strengths as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments. The book focuses on quantitative methods and control design for guaranteed robot performance and balanced human experience. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering: one-human–one-robot collaboration; one-human–multiple-robot collaboration; and human–swarm collaboration. Individual topic ar...

  8. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  9. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-11-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  10. Recognition and Prediction of Human Actions for Safe Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard; Bøgh, Simon; Ceballos, Iker

    Collaborative industrial robots are creating new opportunities for collaboration between humans and robots in shared workspaces. In order for such collaboration to be efficient, robots - as well as humans - need to have an understanding of the other's intentions and current ongoing action. ... In this work, we propose a method for learning, classifying, and predicting actions taken by a human. Our proposed method is based on the human skeleton model from a Kinect. For demonstration of our approach we chose a typical pick-and-place scenario. Therefore, only arms and upper body are considered ...; in total 15 joints. Based on trajectories on these joints, different classes of motion are separated using partitioning around medoids (PAM). Subsequently, SVM is used to train the classes to form a library of human motions. The approach allows run-time detection of when a new motion has been initiated...
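
    The two algorithmic ingredients named in the record, PAM clustering followed by SVM classification, can be sketched as below. The joint-trajectory features and class structure are synthetic stand-ins; a real system would extract features from the 15 tracked Kinect joints.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def pam(X, k, iters=20):
    """Minimal partitioning-around-medoids: returns medoid indices and labels."""
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - X[medoids][None], axis=2)
        labels = dist.argmin(axis=1)                 # assign points to nearest medoid
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            costs = np.linalg.norm(
                X[members][:, None] - X[members][None], axis=2).sum(axis=1)
            new[c] = members[costs.argmin()]         # member minimising intra-cluster cost
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Synthetic "joint trajectory" feature vectors for two motion classes (e.g. pick / place).
X = np.vstack([rng.normal(0.0, 0.3, (50, 6)), rng.normal(2.0, 0.3, (50, 6))])
_, cluster_labels = pam(X, k=2)

# The discovered clusters label the data for a supervised SVM, forming the
# "library of human motions" queried at run time.
clf = SVC(kernel="rbf").fit(X, cluster_labels)
new_motion = rng.normal(2.0, 0.3, (1, 6))
print("predicted motion class:", clf.predict(new_motion)[0])
```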

  11. Essential technologies for developing human and robot collaborative system

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1997-10-01

    In this study, we aim to develop the concept of a new robot system, i.e., a 'human and robot collaborative system', for the patrol of nuclear power plants. This paper deals with the two essential technologies developed for the system. One is an autonomous navigation program with a human intervention function, which is indispensable for human and robot collaboration. The other is a position estimation method using a gyroscope and TV images to make the estimation accuracy much higher for safe navigation. Feasibility of the position estimation method is evaluated by experiment and numerical simulation. (author)

  12. Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.

    Science.gov (United States)

    Lasota, Przemyslaw A; Shah, Julie A

    2015-02-01

    The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
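
    The quantitative team-fluency metrics listed in the results (concurrent motion, human and robot idle time, separation distance) can be computed from synchronized activity logs. The sketch below assumes a simple per-timestep log format; it is an illustration of the metrics, not the authors' evaluation code.

```python
import numpy as np

def fluency_metrics(human_active, robot_active, separation, dt=0.1):
    """Team-fluency metrics from synchronized per-timestep activity logs.

    human_active, robot_active -- boolean flags: agent is moving at step i
    separation                 -- human-robot distance at step i [m]
    dt                         -- log sampling period [s]
    """
    human_active = np.asarray(human_active, dtype=bool)
    robot_active = np.asarray(robot_active, dtype=bool)
    return {
        "task_time_s": len(human_active) * dt,
        "concurrent_motion_pct": 100.0 * np.mean(human_active & robot_active),
        "human_idle_pct": 100.0 * np.mean(~human_active),
        "robot_idle_pct": 100.0 * np.mean(~robot_active),
        "mean_separation_m": float(np.mean(separation)),
    }

# Example with a fabricated ten-step log.
print(fluency_metrics([1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
                      [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],
                      np.linspace(0.6, 1.2, 10)))
```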

  13. Human-Robot Teaming: Communication, Coordination, and Collaboration

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I will describe how NASA Ames has been studying how human-robot teams can increase the performance, reduce the cost, and increase the success of a variety of endeavors. The central premise of our work is that humans and robots should support one another in order to compensate for limitations of automation and manual control. This principle has broad applicability to a wide range of domains, environments, and situations. At the same time, however, effective human-robot teaming requires communication, coordination, and collaboration -- all of which present significant research challenges. I will discuss some of the ways that NASA Ames is addressing these challenges and present examples of our work involving planetary rovers, free-flying robots, and self-driving cars.

  14. Designing human-robot collaborations in industry 4.0: explorative case studies

    DEFF Research Database (Denmark)

    Kadir, Bzhwen A; Broberg, Ole; Souza da Conceição, Carolina

    2018-01-01

    We are experiencing an increase in human-robot interactions and the use of collaborative robots (cobots) in industrial work systems. To make full use of cobots, it is essential to understand emerging challenges and opportunities. In this paper, we analyse three successful industrial case studies... of cobots’ implementation. We highlight the top three challenges and opportunities, from the empirical evidence, relate them to currently available literature on the topic, and use them to identify key design factors to consider when designing industrial work systems with human-robot collaboration....

  15. Well done, Robot! : the importance of praise and presence in human-robot collaboration

    NARCIS (Netherlands)

    Reichenbach, J.; Bartneck, C.; Carpenter, J.; Dautenhahn, K.

    2006-01-01

    This study reports on an experiment in which participants had to collaborate with either another human or a robot (partner). The robot would either be present in the room or only be represented on the participants' computer screen (presence). Furthermore, the participants' partner would either make

  16. Human-robot collaborative navigation for autonomous maintenance management of nuclear installation

    International Nuclear Information System (INIS)

    Nugroho, Djoko Hari

    2002-01-01

    Development of human and robot collaborative navigation for autonomous maintenance management of nuclear installations has been conducted. The human-robot collaborative system operates using a switching command between autonomous navigation and manual navigation that incorporates human intervention. The autonomous navigation path is computed using a novel algorithm, the MLG method, based on Lozano-Perez's visibility graph. The MLG optimizes the shortest distance under safety constraints, while the manual navigation is performed using manual robot teleoperation tools. The experiment on the MLG autonomous navigation system was conducted six times with variation of the 3-D starting point and destination point coordinates. The experiment shows good performance of the autonomous robot maneuvering to avoid collisions with obstacles. The switching navigation is well interpreted using open or close commands over RS-232C, implemented in LabVIEW.
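
    The MLG formulation itself is not given in the record, but the underlying idea, shortest-path search over a Lozano-Perez-style visibility graph with safety constraints, can be sketched as below. The graph, coordinates, and safety penalties are invented for illustration; a real implementation would build the graph from the plant map.

```python
import heapq
import math

# Hand-built visibility graph of a small patrol area: nodes are waypoints,
# edges connect mutually visible waypoints.  Each edge weight combines the
# Euclidean length with an assumed safety penalty (e.g. proximity to equipment).
nodes = {"start": (0, 0), "a": (2, 1), "b": (2, -1), "goal": (4, 0)}
edges = {  # (u, v): safety penalty added to the geometric length
    ("start", "a"): 0.0, ("start", "b"): 0.5,
    ("a", "goal"): 0.0, ("b", "goal"): 0.0, ("a", "b"): 0.2,
}

def cost(u, v):
    (x1, y1), (x2, y2) = nodes[u], nodes[v]
    penalty = edges.get((u, v), edges.get((v, u), math.inf))
    return math.hypot(x2 - x1, y2 - y1) + penalty

def shortest_safe_path(src, dst):
    """Dijkstra search over the visibility graph."""
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist, prev, queue = {src: 0.0}, {}, [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in adj.get(u, []):
            nd = d + cost(u, v)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk the predecessor chain back to the start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

print(shortest_safe_path("start", "goal"))   # (['start', 'a', 'goal'], ~4.47)
```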

  17. Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration

    Science.gov (United States)

    Shah, Julie A.

    2015-01-01

    Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration. PMID:25790568

  18. Digital twins of human robot collaboration in a production setting

    DEFF Research Database (Denmark)

    Malik, Ali Ahmad; Bilberg, Arne

    2018-01-01

    This paper aims to present a digital twin framework to support the design, build and control of human-machine cooperation. In this study, computer simulations are used to develop a digital counterpart of a human-robot collaborative work environment for assembly work. The digital counterpart remains...... updated during the life cycle of the production system by continuously mirroring the physical system for quick and safe embed for continuous improvements. The case of a manufacturing company with human-robot work teams is presented for developing and validating the digital twin framework....

  19. Semi-manual mastoidectomy assisted by human-robot collaborative control - A temporal bone replica study.

    Science.gov (United States)

    Lim, Hoon; Matsumoto, Nozomu; Cho, Byunghyun; Hong, Jaesung; Yamashita, Makoto; Hashizume, Makoto; Yi, Byung-Ju

    2016-04-01

    To develop an otological robot that can protect important organs from being injured. We developed a five degree-of-freedom robot for otological surgery. Unlike the other robots that were reported previously, our robot does not replace surgeon's procedures, but instead utilizes human-robot collaborative control. The robot basically releases all of the actuators so that the surgeon can manipulate the drill within the robot's working area with minimal restriction. When the drill reaches a forbidden area, the surgeon feels as if the drill hits a wall. When an engineer performed mastoidectomy using the robot for assistance, the facial nerve in the segmented region was always protected with a more than 2.5mm margin, which was almost the same as the pre-set safety margin of 3mm. Semi-manual drilling using human-robot collaborative control was feasible, and may hold a realistic prospect of clinical use in the near future. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Admittance Control for Robot Assisted Retinal Vein Micro-Cannulation under Human-Robot Collaborative Mode.

    Science.gov (United States)

    Zhang, He; Gonenc, Berk; Iordachita, Iulian

    2017-10-01

    Retinal vein occlusion is one of the most common retinovascular diseases. Retinal vein cannulation is a potentially effective treatment method for this condition that currently lies, however, at the limits of human capabilities. In this work, the aim is to use robotic systems and advanced instrumentation to alleviate these challenges, and assist the procedure via a human-robot collaborative mode based on our earlier work on the Steady-Hand Eye Robot and force-sensing instruments. An admittance control method is employed to stabilize the cannula relative to the vein and maintain it inside the lumen during the injection process. A pre-stress strategy is used to prevent the tip of the microneedle from getting out of the vein in prolonged infusions, and the performance is verified through simulations.

  1. Admittance Control for Robot Assisted Retinal Vein Micro-Cannulation under Human-Robot Collaborative Mode

    Science.gov (United States)

    Gonenc, Berk; Iordachita, Iulian

    2017-01-01

    Retinal vein occlusion is one of the most common retinovascular diseases. Retinal vein cannulation is a potentially effective treatment method for this condition that currently lies, however, at the limits of human capabilities. In this work, the aim is to use robotic systems and advanced instrumentation to alleviate these challenges, and assist the procedure via a human-robot collaborative mode based on our earlier work on the Steady-Hand Eye Robot and force-sensing instruments. An admittance control method is employed to stabilize the cannula relative to the vein and maintain it inside the lumen during the injection process. A pre-stress strategy is used to prevent the tip of the microneedle from getting out of the vein in prolonged infusions, and the performance is verified through simulations. PMID:29607442
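
    The admittance control method named in the two records above maps the force the surgeon applies at the tool to a commanded velocity through a virtual mass-damper. The one-dimensional sketch below uses assumed virtual parameters and omits the pre-stress strategy; it is not the Steady-Hand Eye Robot's actual control law.

```python
def admittance_step(f_handle, v_prev, dt=0.001,
                    virtual_mass=0.5, virtual_damping=20.0):
    """One step of a 1-D admittance law:  m * dv/dt + b * v = f_handle.

    The robot is stiff and non-backdrivable; instead it measures the force the
    surgeon applies at the tool handle and moves so that the tool feels light.
    virtual_mass and virtual_damping are assumed values, not the robot's gains.
    """
    dv = (f_handle - virtual_damping * v_prev) / virtual_mass
    return v_prev + dv * dt          # commanded tool velocity [m/s]

# Surgeon pushes with a constant 0.2 N; velocity converges to f/b = 0.01 m/s.
v = 0.0
for _ in range(5000):
    v = admittance_step(0.2, v)
print(round(v, 4))
```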

  2. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  3. Industrial Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Philipsen, Mark Philip; Rehm, Matthias; Moeslund, Thomas B.

    2018-01-01

    In the future, robots are envisioned to work side by side with humans in dynamic environments both in production contexts but also more and more in societal context like health care, education, or commerce. This will require robots to become socially accepted, to become able to analyze human...... intentions in meaningful ways, and to become proactive. It is our conviction that this can only be achieved on the basis of a tight combination of multimodal signal processing and AI techniques in real application context....

  4. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, ...... output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots....

  5. Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci.

    Science.gov (United States)

    Nyholm, Sven

    2017-07-18

    Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

  6. Modeling and Control of Collaborative Robot System using Haptic Feedback

    Directory of Open Access Journals (Sweden)

    Vivekananda Shanmuganatha

    2017-08-01

    Full Text Available When two robot systems can share understanding using any agreed knowledge, within the constraints of the system’s communication protocol, the approach may lead to a common improvement. This has motivated numerous new research inquiries in human-robot collaboration. We have built a framework capable of autonomous tracking and of performing table-top object manipulation with humans, and we have implemented two different activity models to trigger robot actions. The idea here is to explore collaborative systems and to build up a plan for them to work in a collaborative environment, which has many benefits over a single, more complex system. In the paper, two robots that cooperate among themselves are constructed. The cooperation linking the two robotic arms, the torque required and the parameters are analyzed. Thus the purpose of this paper is to demonstrate a modular robot system which can serve as a base for aspects of collaborative robotics using haptics.

  7. Social cobots: Anticipatory decision-making for collaborative robots incorporating unexpected human behaviors

    CSIR Research Space (South Africa)

    Can Görür, O

    2018-03-01

    Full Text Available We propose an architecture as a robot’s decision-making mechanism to anticipate a human’s state of mind, and so plan accordingly during a human-robot collaboration task. At the core of the architecture lies a novel stochastic decision...

  8. The Age of Human-Robot Collaboration: Deep Sea Exploration

    KAUST Repository

    Khatib, Oussama

    2018-01-18

    The promise of oceanic discovery has intrigued scientists and explorers for centuries, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. Reaching these depths is imperative since factors such as pollution and deep-sea trawling increasingly threaten ecology and archaeological sites. These needs demand a system deploying human-level expertise at the depths, and yet remotely operated vehicles (ROVs) are inadequate for the task. To meet the challenge of dexterous operation at oceanic depths, in collaboration with KAUST's Red Sea Research Center and MEKA Robotics, Oussama Khatib and the team developed Ocean One, a bimanual humanoid robot that brings immediate and intuitive haptic interaction to oceanic environments. Introducing Ocean One, the haptic robotic avatar. During this lecture, Oussama Khatib will talk about how, teaming with the French Ministry of Culture's Underwater Archaeology Research Department, they deployed Ocean One in an expedition in the Mediterranean to Louis XIV's flagship Lune, lying off the coast of Toulon at ninety-one meters. In the spring of 2016, Ocean One became the first robotic avatar to embody a human's presence at the seabed. Ocean One's journey in the Mediterranean marks a new level of marine exploration: Much as past technological innovations have impacted society, Ocean One's ability to distance humans physically from dangerous and unreachable work spaces while connecting their skills, intuition, and experience to the task promises to fundamentally alter remote work. Robotic avatars will search for and acquire materials, support equipment, build infrastructure, and perform disaster prevention and recovery operations - be it deep in oceans and mines, at mountain tops, or in space.

  9. Model-based systems engineering to design collaborative robotics applications

    NARCIS (Netherlands)

    Hernandez Corbato, Carlos; Fernandez-Sanchez, Jose Luis; Rassa, Bob; Carbone, Paolo

    2017-01-01

    Novel robot technologies are becoming available to automate more complex tasks, more flexibly, and collaborating with humans. Methods and tools are needed in the automation and robotics industry to develop and integrate this new breed of robotic systems. In this paper, the ISE&PPOOA

  10. Novel trends in the assembly process as the results of human – the industrial robot collaboration

    Directory of Open Access Journals (Sweden)

    Holubek Radovan

    2017-01-01

    Full Text Available The contribution is focused on the creation of a concept proposal and simulation of an assembly system based on cooperation between the human and the industrial robot. The aim of the research is to verify the feasibility of this cooperation between the human and the industrial robot on the basis of the created simulation of the assembly process. An important step in the design of this collaboration is the determination of rules for, and the safety of, the cooperation. The paper also presents the method of working with the selected software, its functionalities, and the sequence of steps in creating the simulation. The objective of the research is the evaluation of the concept proposal of the collaborative assembly system on the basis of the created simulation. The analysis and evaluation of the simulation confirm the feasibility and safety of the cooperation between the human and the robot, and also verify the feasibility of human-robot assembly with respect to the layout and dimensions of the proposed workplace.

  11. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  12. Velocity-curvature patterns limit human-robot physical interaction.

    Science.gov (United States)

    Maurice, Pauline; Huber, Meghan E; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration.
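
    The biological velocity profile referred to in the record is the two-thirds power law, commonly written as v = gamma * kappa^(-1/3): tangential speed drops in high-curvature sections of the path. The sketch below generates such a speed profile along an ellipse like the one traced by the robot end-effector; the ellipse dimensions and gain are assumed, not taken from the study.

```python
import numpy as np

def two_thirds_power_law_speed(a=0.15, b=0.10, gamma=0.05, n=500):
    """Speed profile along an ellipse implied by the two-thirds power law.

    v = gamma * kappa**(-1/3): the hand slows down where curvature is high.
    a, b  -- ellipse semi-axes [m];  gamma -- velocity gain constant (assumed).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Curvature of the ellipse x = a*cos(theta), y = b*sin(theta).
    kappa = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5
    return gamma * kappa ** (-1.0 / 3.0)

v = two_thirds_power_law_speed()
print(f"max/min speed ratio along the ellipse: {v.max() / v.min():.2f}")
```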

  13. Physical Human Robot Interaction for a Wall Mounting Robot - External Force Estimation

    DEFF Research Database (Denmark)

    Alonso García, Alejandro; Villarmarzo Arruñada, Noelia; Pedersen, Rasmus

    2018-01-01

    The use of collaborative robots enhances human capabilities, leading to better working conditions and increased productivity. In building construction, such robots are needed, among other tasks, to install large glass panels, where the robot takes care of the heavy lifting part of the job while...
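
    The record is truncated before describing the estimation method, so the sketch below shows only a generic, commonly used approach to external force estimation: subtracting model-predicted joint torques from measured ones and mapping the residual through the Jacobian-transpose relation. It is offered as background, not as the authors' method.

```python
import numpy as np

def external_force_estimate(tau_measured, tau_model, jacobian):
    """Estimate the Cartesian force applied at the end-effector.

    tau_measured -- joint torques reported by the robot [Nm]
    tau_model    -- torques predicted by the dynamic model (gravity, friction, ...)
    jacobian     -- geometric Jacobian J at the current configuration

    Uses the static relation tau_ext = J^T * f_ext, solved by least squares;
    the returned wrench has as many components as the Jacobian has rows.
    """
    tau_ext = np.asarray(tau_measured) - np.asarray(tau_model)
    f_ext, *_ = np.linalg.lstsq(np.asarray(jacobian).T, tau_ext, rcond=None)
    return f_ext

# Toy planar 2-joint example: a pure downward force of 10 N at the tool.
J = np.array([[0.0, 0.0],      # x-velocity row (assumed configuration)
              [1.0, 0.5]])     # y-velocity row
tau = J.T @ np.array([0.0, -10.0])          # torques such a force would create
print(external_force_estimate(tau, [0.0, 0.0], J))   # approx. [0, -10]
```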

  14. Human-Robot Interaction: Status and Challenges.

    Science.gov (United States)

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and what are the challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  15. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize the various auditory perception capabilities while enabling a remote human operator to involve in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated auditory perception capabilities and human-robot collaboration in sound event recognition.

  16. Additive Manufacturing Cloud via Peer-Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Yuan Yao

    2016-05-01

    Full Text Available When building a 3D printing cloud manufacturing platform, self-sensing and collaboration on manufacturing resources present challenging problems. This paper proposes a peer-robot collaboration framework to deal with these issues. Each robot combines heterogeneous additive manufacturing hardware and software, acting as an intelligent agent. Through collaboration with other robots, it forms a dynamic and scalable integration manufacturing system. The entire distributed system is managed by rules that employ an internal rule engine, which supports rule conversion and conflict resolution. Two additive manufacturing service scenarios are designed to analyse the efficiency and scalability of the framework. Experiments show that the presented method performs well in tasks requiring large-scale access to resources and collaboration.

  17. Human-Robot Planetary Exploration Teams

    Science.gov (United States)

    Tyree, Kimberly

    2004-01-01

    areas of our research are safety and crew time efficiency. For safety, our work involves enabling humans to reliably communicate with a robot while moving in the same workspace, and enabling robots to monitor and advise humans of potential problems. Voice, gesture, remote computer control, and enhanced robot intelligence are methods we are studying. For crew time efficiency, we are investigating the effects of assigning different roles to humans and robots in collaborative exploration scenarios.

  18. Developing Principles for Effective Human Collaboration with Free-Flying Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerial robots hold great promise in supporting human activities during space missions and terrestrial operations. For example, free-flying robots may automate...

  19. A Plug and Produce Framework for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2017-01-01

    Collaborative robots are today ever more interesting in response to the increasing need for agile manufacturing equipment. Contrary to traditional industrial robots, collaborative robots are intended for working in dynamic environments alongside the production staff. To cope with the dynamic...... environment and workflow, new configuration and control methods are needed compared to those of traditional industrial robots. The new methods should enable shop floor operators to reconfigure the robot. This article presents a plug and produce framework for industrial collaborative robots. The article...... focuses on the control framework enabling quick and easy exchange of hardware modules as an approach to achieving plug and produce. To solve this, an agent-based system is proposed building on top of the robot operating system. The framework enables robot operating system packages to be adapted...

  20. Mentoring console improves collaboration and teaching in surgical robotics.

    Science.gov (United States)

    Hanly, Eric J; Miller, Brian E; Kumar, Rajesh; Hasser, Christopher J; Coste-Maniere, Eve; Talamini, Mark A; Aurora, Alexander A; Schenkman, Noah S; Marohn, Michael R

    2006-10-01

    One of the most significant limitations of surgical robots has been their inability to allow multiple surgeons and surgeons-in-training to engage in collaborative control of robotic surgical instruments. We report the initial experience with a novel two-headed da Vinci surgical robot that has two collaborative modes: the "swap" mode allows two surgeons to simultaneously operate and actively swap control of the robot's four arms, and the "nudge" mode allows them to share control of two of the robot's arms. The utility of the mentoring console operating in its two collaborative modes was evaluated through a combination of dry laboratory exercises and animal laboratory surgery. The results from surgeon-resident collaborative performance of complex three-handed surgical tasks were compared to results from single-surgeon and single-resident performance. Statistical significance was determined using Student's t-test. Collaborative surgeon-resident swap control reduced the time to completion of complex three-handed surgical tasks by 25% compared to single-surgeon operation of a four-armed da Vinci (statistically significant by Student's t-test). The nudge mode was particularly useful for guiding a resident's hands during crucially precise steps of an operation (such as proper placement of stitches). The da Vinci mentoring console greatly facilitates surgeon collaboration during robotic surgery and improves the performance of complex surgical tasks. The mentoring console has the potential to improve resident participation in surgical robotics cases, enhance resident education in surgical training programs engaged in surgical robotics, and improve patient safety during robotic surgery.

  1. Easy Reconfiguration of Modular Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper

    2016-01-01

    the production staff collaborating to perform common tasks. This change of environment imposes a much more dynamic lifecycle for the robot which consequently requires new ways of interacting. This thesis investigates how the changeover to a new task on a collaborative robot can be performed by the shop floor... operators already working alongside the robot. To effectively perform this changeover, the operator must both reconfigure the hardware of the robot and reprogram the robot to match the new task. To enable shop floor operators to quickly and intuitively program the robot, this thesis proposes the use... of parametric, task-related robot skills with a manual parameterization method. Reconfiguring the hardware entails adding, removing, or modifying some of the robot’s components. This thesis investigates how software configurator tools can aid the operator in selecting appropriate hardware modules, and how agent...

  2. Towards the Verification of Human-Robot Teams

    Science.gov (United States)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  3. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach for collision avoidance for indoor mobile robots based on human-robot interaction is realized. The main contribution of this work is a new technique for collision avoidance by engaging the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of the robots in implementing their tasks, especially when the robots navigate in crowded and dynamic environments, which include humans. Traditional collision avoidance methods deal with the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot, and this causes the people and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human will collaborate in generating the collision avoidance via interaction. The person will notify the robot about the avoidance direction via interaction, and the robot will search for the optimal collision-free path in the selected direction. If no person interacts with the robot, it will select the navigation path autonomously, choosing the path that is closest to the goal location. The humans will interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify these gestures: the first model is a Back-Propagation Neural Network (BPNN), and the second model is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding the obstacles is implemented and integrated with the HRI system. The system is tested on the H20 robot from DrRobot Company (Canada), and a set of experiments were implemented to report the performance of the system in interacting with the human and avoiding
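
    The gesture classifiers named in the record, a back-propagation neural network and an SVM, can be prototyped in a few lines; the sketch below uses scikit-learn's MLPClassifier in the role of the BPNN and synthetic stand-in features instead of real Kinect skeleton data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for Kinect skeleton features of three avoidance gestures
# ("go left", "go right", "stop"): 20 joint-angle features per sample.
X = np.vstack([rng.normal(loc, 0.4, (60, 20)) for loc in (-1.0, 0.0, 1.0)])
y = np.repeat([0, 1, 2], 60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# MLPClassifier (a back-propagation-trained network) stands in for the BPNN.
bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

print("BPNN accuracy:", bpnn.score(X_te, y_te))
print("SVM accuracy: ", svm.score(X_te, y_te))
```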

  4. Collaborative Robots and Knowledge Management - A Short Review

    Science.gov (United States)

    Mușat, Flaviu-Constantin; Mihu, Florin-Constantin

    2017-12-01

    Because customer requirements for quality, quantity, and delivery times at the lowest possible cost keep growing, industry has had to develop automated solutions to meet them. Starting from the automated lines developed by Ford and Toyota, manufacturing has now evolved to automated and self-sustained working lines, which are possible nowadays using collaborative robots. By using knowledge management, the future development of this area of research can be improved. This paper shows the benefits of the smart use of robots performing manipulation activities, which improves workplace ergonomics and human-machine interaction by assisting with parallel tasks and lowering physical human effort.

  5. Effective Human-Robot Collaborative Work for Critical Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to improve human-robot interaction (HRI) in order to enhance the capability of NASA critical missions. This research will focus two...

  6. A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    Science.gov (United States)

    Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert

    2006-01-01

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

  7. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells

    Science.gov (United States)

    Marvel, Jeremy A.; Norcross, Rick

    2016-01-01

    We provide an overview and guidance for the Speed and Separation Monitoring (SSM) methodology as presented in the International Organization for Standardization's technical specification 15066 on collaborative robot safety. Such functionality is provided by external, intelligent observer systems integrated into a robotic workcell. The SSM minimum protective distance function equation is discussed in detail, with consideration for the input values, implementation specifications, and performance expectations. We provide analytical analyses and test results of the current equation, discuss considerations for implementing SSM in human-occupied environments, and provide directions for technological advancements toward standardization. PMID:27885312
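
    For reference, the protective separation distance in ISO/TS 15066 combines the human and robot motion during the reaction and stopping phases with the stopping distance, the intrusion distance, and position uncertainties. The sketch below implements the simplified constant-velocity form; the numeric values are illustrative assumptions, and an actual SSM implementation must use measured speeds and the robot's certified stopping performance.

```python
def protective_separation_distance(v_human, v_robot, stop_dist,
                                   t_reaction, t_stop,
                                   intrusion=0.10, z_human=0.05, z_robot=0.01):
    """Simplified constant-velocity form of the SSM protective distance.

    S_p = v_h*(T_r + T_s) + v_r*T_r + S_s + C + Z_d + Z_r
    v_human    -- directed human approach speed [m/s] (1.6 m/s is a common assumption)
    v_robot    -- robot speed toward the human [m/s]
    stop_dist  -- robot stopping distance S_s at that speed [m]
    t_reaction -- sensing and processing reaction time T_r [s]
    t_stop     -- robot stopping time T_s [s]
    intrusion, z_human, z_robot -- intrusion distance C and position uncertainties [m]
    """
    s_human = v_human * (t_reaction + t_stop)   # human travel while reacting and stopping
    s_robot = v_robot * t_reaction              # robot travel before braking starts
    return s_human + s_robot + stop_dist + intrusion + z_human + z_robot

# If the measured separation drops below S_p, the robot must slow down or stop.
print(f"{protective_separation_distance(1.6, 0.5, 0.15, 0.1, 0.3):.2f} m")
```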

  8. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honor´e

    2009-01-01

    In many applications multiple robots in Mobile Ad-hoc Networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size...

  9. The Creation of a Multi-Human, Multi-Robot Interactive Jam Session

    OpenAIRE

    Weinberg, Gil; Blosser, Brian; Mallikarjuna, Trishul; Raman, Aparna

    2009-01-01

    This paper presents an interactive and improvisational jam session, including human players and two robotic musicians. The project was developed in an effort to create novel and inspiring music through human-robot collaboration. The jam session incorporates Shimon, a newly-developed socially-interactive robotic marimba player, and Haile, a perceptual robotic percussionist developed in previous work. The paper gives an overview of the musical perception modules, adaptive improvisation modes an...

  10. 1st AAU Workshop on Human-Centered Robotics

    DEFF Research Database (Denmark)

    The 2012 AAU Workshop on Human-Centered Robotics took place on 15 Nov. 2012, at Aalborg University, Aalborg. The workshop provides a platform for robotics researchers, including professors and PhD and Master students, to exchange their ideas and latest results. The objective is to foster closer... interaction among researchers from multiple relevant disciplines in human-centered robotics, and consequently, to promote collaborations across departments of all faculties towards making our center a center of excellence in robotics. The workshop was a great success, with 13 presentations, attracting... more than 45 participants from AAU, SDU, DTI and industrial companies as well. The proceedings contain 7 full papers, selected from the full papers submitted afterwards on the basis of the workshop abstracts. The papers represent major research developments in robotics at AAU, including medical robots...

  11. Collaborative Tools for Mixed Teams of Humans and Robots

    National Research Council Canada - National Science Library

    Bruemmer, David J; Walton, Miles C

    2003-01-01

    .... Our approach has been to consider the air vehicles, ground robots and humans as team members with different levels of authority, different communication, processing, power and mobility capabilities...

  12. Safe Human-Robot Cooperation in an Industrial Environment

    Directory of Open Access Journals (Sweden)

    Nicola Pedrocchi

    2013-01-01

    Full Text Available The standard EN ISO10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human-robot hybrid systems could have a big economic benefit in small and medium-sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among the various requirements for collaborative workspaces, safety assurance involves two different application layers: the algorithms enabling safe space-sharing between humans and robots, and the enabling technologies allowing data acquisition from sensor fusion and environmental data analysis. This paper addresses both problems: a collision avoidance strategy allowing on-line re-planning of robot motion, and a safe network of unsafe devices as a suggested infrastructure for achieving functional safety.

  13. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is, that this transitioning should be done by the shop floor operators, not highly specialized...

  14. Skill Based Instruction of Collaborative Robots in Industrial Settings

    DEFF Research Database (Denmark)

    Schou, Casper; Andersen, Rasmus Skovgaard; Chrysostomou, Dimitrios

    2018-01-01

    During the past decades increasing need for more flexible and agile manufacturing equipment has spawned a growing interest in collaborative robots. Contrary to traditional industrial robots, collaborative robots are intended for operating alongside the production personnel in dynamic or semi...... several user studies, the usability of SBS and the task level programming approach has been demonstrated. SBS has been utilized in several international research projects where SBS has been deployed and tested in three real manufacturing settings. Collectively, the industrial exploitations have...

  15. Centralised versus Decentralised Control Reconfiguration for Collaborating Underwater Robots

    DEFF Research Database (Denmark)

    Furno, Lidia; Nielsen, Mikkel Cornelius; Blanke, Mogens

    2015-01-01

    The present paper introduces an approach to fault-tolerant reconfiguration for collaborating underwater robots. Fault-tolerant reconfiguration is obtained using the virtual actuator approach, Steen (2005). The paper investigates properties of a centralised versus a decentralised implementation an...... an underwater drill needs to be transported and positioned by three collaborating robots as part of an underwater autonomous operation....

  16. Collaboration Layer for Robots in Mobile Ad-hoc Networks

    DEFF Research Database (Denmark)

    Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honoré

    2009-01-01

    networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services...... are provided to the application running on the robot. The services are generic because they can be used by many different applications, independent of the task to be solved. Likewise, specific services are requested from the underlying Virtual Machine, such as broadcast, multicast, and reliable unicast....... A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group...

  17. Designing, developing, and deploying systems to support human-robot teams in disaster response

    NARCIS (Netherlands)

    Kruijff, G.J.M.; Kruijff-Korbayová, I.; Keshavdas, S.; Larochelle, B.; Janíček, M.; Colas, F.; Liu, M.; Pomerleau, F.; Siegwart, R.; Neerincx, M.A.; Looije, R.; Smets, N.J.J.M.; Mioch, T.; Diggelen, J. van; Pirri, F.; Gianni, M.; Ferri, F.; Menna, M.; Worst, R.; Linder, T.; Tretyakov, V.; Surmann, H.; Svoboda, T.; Reinštein, M.; Zimmermann, K.; Petříček, T.; Hlaváč, V.

    2014-01-01

    This paper describes our experience in designing, developing and deploying systems for supporting human-robot teams during disaster response. It is based on R&D performed in the EU-funded project NIFTi. NIFTi aimed at building intelligent, collaborative robots that could work together with humans in

  18. Sharing skills: using augmented reality for human-robot collaboration

    Science.gov (United States)

    Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger

    2004-05-01

    Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: Which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct and enhance them interactively. After a satisfactory result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.

  19. Safe human-robot cooperation in an industrial environment

    OpenAIRE

    Pedrocchi N.; Vicentini F.; Matteo M.; Tosatti L.M.

    2013-01-01

    The standard EN ISO10218 is fostering the implementation of hybrid production systems, i.e., production systems characterized by a close relationship among human operators and robots in cooperative tasks. Human‐robot hybrid systems could have a big economic benefit in small and medium sized production, even if this new paradigm introduces mandatory, challenging safety aspects. Among various requirements for collaborative workspaces, safety‐assurance involves two different application layers; ...

  20. Next generation light robotic

    DEFF Research Database (Denmark)

    Villangca, Mark Jayson; Palima, Darwin; Banas, Andrew Rafael

    2017-01-01

    Conventional robotics provides machines and robots that can replace and surpass human performance in repetitive, difficult, and even dangerous tasks at industrial assembly lines, hazardous environments, or even at remote planets. A new class of robotic systems no longer aims to replace humans with so-called automatons but, rather, to create robots that can work alongside human operators. These new robots are intended to collaborate with humans—extending their abilities—from assisting workers on the factory floor to rehabilitating patients in their homes. In medical robotics, robot-assisted surgery imbibes surgeons with superhuman abilities and gives the expression “surgical precision” a whole new meaning. Still in its infancy, much remains to be done to improve human-robot collaboration, both in realizing robots that can operate safely with humans and in training personnel that can work......

  1. Robotic and Human-Tended Collaborative Drilling Automation for Subsurface Exploration

    Science.gov (United States)

    Glass, Brian; Cannon, Howard; Stoker, Carol; Davis, Kiel

    2005-01-01

    Future in-situ lunar/martian resource utilization and characterization, as well as the scientific search for life on Mars, will require access to the subsurface and hence drilling. Drilling on Earth is hard - an art form more than an engineering discipline. Human operators listen and feel drill string vibrations coming from kilometers underground. Abundant mass and energy make it possible for terrestrial drilling to employ brute-force approaches to failure recovery and system performance issues. Space drilling will require intelligent and autonomous systems for robotic exploration and to support human exploration. Eventual in-situ resource utilization will require deep drilling with probable human-tended operation of large-bore drills, but initial lunar subsurface exploration and near-term ISRU will be accomplished with lightweight, rover-deployable or standalone drills capable of penetrating a few tens of meters in depth. These lightweight exploration drills have a direct counterpart in terrestrial prospecting and ore-body location, and will be designed to operate either human-tended or automated. NASA and industry now are acquiring experience in developing and building low-mass automated planetary prototype drills to design and build a pre-flight lunar prototype targeted for 2011-12 flight opportunities. A successful system will include development of drilling hardware, and automated control software to operate it safely and effectively. This includes control of the drilling hardware, state estimation of both the hardware and the lithology being drilled and the state of the hole, and potentially planning and scheduling software suitable for uncertain situations such as drilling. Given that humans on the Moon or Mars are unlikely to be able to spend protracted EVA periods at a drill site, both human-tended and robotic access to planetary subsurfaces will require some degree of standalone, autonomous drilling capability. Human-robotic coordination will be important

  2. Study of the acceptability conditions of human-robot collaboration using virtual reality

    OpenAIRE

    Weistroffer , Vincent

    2014-01-01

    Whether in the context of industry or of everyday life, robots are becoming more and more present in our environment and are nowadays able to interact with humans. In industrial environments, robots now assist operators on the assembly lines for difficult and dangerous tasks. Then, robots and operators need to share the same physical space (copresence) and to manage common tasks (collaboration). On the one hand, the safety of humans working near robots has to be guaranteed at all times....

  3. A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Directory of Open Access Journals (Sweden)

    Andrea Cirillo

    2017-01-01

    Full Text Available Safety of human-robot physical interaction is enabled not only by suitable robot control strategies but also by suitable sensing technologies. For example, if distributed tactile sensors were available on the robot, they could be used not only to detect unintentional collisions, but also as a human-machine interface by enabling a new mode of social interaction with the machine. Starting from their previous works, the authors developed a conformable distributed tactile sensor that can be easily conformed to the different parts of the robot body. Its ability to estimate contact force components and to provide a tactile map with an accurate spatial resolution enables the robot to handle both unintentional collisions in safe human-robot collaboration tasks and intentional touches where the sensor is used as a human-machine interface. In this paper, the authors present the characterization of the proposed tactile sensor and show how it can also be exploited to recognize haptic tactile gestures, by tailoring recognition algorithms, well known in the image processing field, to the case of tactile images. In particular, a set of haptic gestures has been defined to test three recognition algorithms on a group of 20 users. The paper demonstrates how the same sensor originally designed to manage unintentional collisions can be successfully used also as a human-machine interface.
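    As an illustration of how image-processing-style recognition can be tailored to tactile maps, the sketch below classifies a touch by nearest-template matching on a normalized tactile image. It is only an assumption-laden stand-in for the three algorithms evaluated in the paper; recognize_touch and the 8x8 templates are hypothetical:

```python
import numpy as np

def recognize_touch(tactile_map, templates):
    """Return the gesture whose template best correlates with the flattened,
    normalized tactile image. Real recognizers would add filtering and temporal cues."""
    x = np.asarray(tactile_map, dtype=float).ravel()
    x = x / (np.linalg.norm(x) + 1e-9)
    scores = {name: float(x @ (np.asarray(t, dtype=float).ravel()
                               / (np.linalg.norm(t) + 1e-9)))
              for name, t in templates.items()}
    return max(scores, key=scores.get)

# Usage sketch with two hypothetical 8x8 pressure templates.
templates = {"poke": np.eye(8), "swipe": np.tril(np.ones((8, 8)))}
print(recognize_touch(np.eye(8) + 0.05, templates))   # -> "poke"
```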

  4. Multiagent Modeling and Simulation in Human-Robot Mission Operations Work System Design

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; Sims, Michael H.; Shafto, Michael (Technical Monitor)

    2001-01-01

    This paper describes a collaborative multiagent modeling and simulation approach for designing work systems. The Brahms environment is used to model mission operations for a semi-autonomous robot mission to the Moon at the work practice level. It shows the impact of human-decision making on the activities and energy consumption of a robot. A collaborative work systems design methodology is described that allows informal models, created with users and stakeholders, to be used as input to the development of formal computational models.

  5. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    Science.gov (United States)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  6. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  7. The future African workplace: The use of collaborative robots in manufacturing

    Directory of Open Access Journals (Sweden)

    Andre P. Calitz

    2017-07-01

    Full Text Available Orientation: Industry 4.0 promotes technological innovations and human–robot collaboration (HRC. Human–robot interaction (HRI and HRC on the manufacturing assembly line have been implemented in numerous advanced production environments worldwide. Collaborative robots (Cobots are increasingly being used as collaborators with humans in factory production and assembly environments. Research purpose: The purpose of the research is to investigate the current use and future implementation of Cobots worldwide and its specific impact on the African workforce. Motivation for the study: Exploring the gap that exists between the international implementation of Cobots and the potential implementation and impact on the African manufacturing and assembly environment and specifically on the African workforce. Research design, approach and method: The study features a qualitative research design. An open-ended question survey was conducted amongst leading manufacturing companies in South Africa in order to determine the status and future implementation of Cobot practices. Thematic analysis and content analysis were conducted using AtlasTi. Main findings: The findings indicate that the African businesses were aware of the international business trends, regarding Cobot implementation, and the possible impact of Cobots on the African work force. Factors specifically highlighted in this study are fear of retrenchment, human–Cobot trust and the African culture. Practical implications and value-add: This study provides valuable background on the international status of Cobot implementation and the possible impact on the African workforce. The study highlights the importance of building employee trust, providing the relevant training and addressing the fear of retrenchment amongst employees.

  8. Generating human-like movements on an anthropomorphic robot using an interior point method

    Science.gov (United States)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  9. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns in the data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns thus represented with reasonable accuracy.
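    A hedged sketch of the pipeline the abstract outlines — quantize a fingertip trajectory into direction symbols (a simplified stand-in for the modified chain-code) and score it against per-gesture discrete HMMs — is shown below; the quantizer, the forward-algorithm scorer and the model dictionary are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def chain_code(traj, n_dirs=8):
    """Quantize a 2-D trajectory (N x 2 points) into discrete direction symbols."""
    d = np.diff(np.asarray(traj, dtype=float), axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])
    return ((angles + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs

def log_likelihood(obs, pi, A, B):
    """Forward algorithm (log domain) for a discrete HMM with start probabilities pi,
    transition matrix A and emission matrix B."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

def classify_gesture(traj, gesture_models):
    """Pick the gesture whose HMM assigns the highest likelihood to the chain-code.
    gesture_models maps a gesture name to its (pi, A, B) parameters."""
    obs = chain_code(traj)
    return max(gesture_models, key=lambda g: log_likelihood(obs, *gesture_models[g]))
```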

  10. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

    Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration in which the operator delegates tasks to the drone for exploration and visual recognition and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy to detect anomalies on the surface.
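    The operator-assistance pattern described above (act autonomously when the recognizer is confident, delegate to the human under uncertainty) can be caricatured in a few lines. The thresholds and the ask_operator callback are hypothetical and are not taken from Aerostack:

```python
def handle_detection(label, confidence, ask_operator,
                     accept_threshold=0.85, reject_threshold=0.35):
    """Toy delegation rule for semi-automatic inspection."""
    if confidence >= accept_threshold:
        return label                          # confident: accept autonomously
    if confidence <= reject_threshold:
        return None                           # confident it is a false alarm: discard
    return ask_operator(label, confidence)    # ambiguous: request operator assistance
```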

  11. Human-Robot Interaction and Human Self-Realization

    DEFF Research Database (Denmark)

    Nørskov, Marco

    2014-01-01

    is to test the basis for this type of discrimination when it comes to human-robot interaction. Furthermore, the paper will take Heidegger's warning concerning technology as a vantage point and explore the possibility of human-robot interaction forming a praxis that might help humans to be with robots beyond...

  12. Movement coordination in applied human-human and human-robot interaction

    DEFF Research Database (Denmark)

    Schubö, Anna; Vesper, Cordula; Wiesbeck, Mathey

    2007-01-01

    The present paper describes a scenario for examining mechanisms of movement coordination in humans and robots. It is assumed that coordination can best be achieved when behavioral rules that shape movement execution in humans are also considered for human-robot interaction. Investigating and describing human-human interaction in terms of goal-oriented movement coordination is considered an important and necessary step for designing and describing human-robot interaction. In the present scenario, trajectories of hand and finger movements were recorded while two human participants performed...... coordination were affected. Implications for human-robot interaction are discussed....

  13. From Human-Computer Interaction to Human-Robot Social Interaction

    OpenAIRE

    Toumi, Tarek; Zidani, Abdelmadjid

    2014-01-01

    Human-Robot Social Interaction has become one of the active research fields in which researchers from different areas propose solutions and directives leading robots to improve their interactions with humans. In this paper we propose to introduce works in both human-robot interaction and human-computer interaction and to make a bridge between them, i.e. to integrate the robot's emotions and capabilities concepts into the human-computer model so that it becomes adequate for human-robot interaction, and to discuss chall...

  14. Visual and tactile interfaces for bi-directional human robot communication

    Science.gov (United States)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the requirement of line-of-sight needed by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates them having the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provides an integration example including two robotic platforms.

  15. Social robots from a human perspective

    CERN Document Server

    Taipale, Sakari; Sapio, Bartolomeo; Lugano, Giuseppe; Fortunati, Leopoldina

    2015-01-01

    Addressing several issues that explore the human side of social robots, this book asks from a social and human scientific perspective what a social robot is and how we might come to think about social robots in the different areas of everyday life. Organized around three sections that deal with Perceptions and Attitudes to Social Robots, Human Interaction with Social Robots, and Social Robots in Everyday Life, the book explores the idea that even if technical problems related to robot technologies can be continuously solved from a machine perspective, what kind of machine do we want to have and use in our daily lives? Experiences from previously widely adopted technologies, such as smartphones, hint that robot technologies could potentially be absorbed into the everyday lives of humans in such a way that it is the human that determines the human-machine interaction. In a similar way to how today’s information and communication technologies were first designed for professional/industrial use, but which soon wer...

  16. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  17. Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.

    Science.gov (United States)

    Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos

    2017-06-05

    Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required for both safety and increased control performance of the device with a more comfortable experience for the user. In order to provide an efficient interaction feedback between the user and rehabilitation device, high performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated for measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest having also movable parts in order to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.

  18. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  19. Emotion based human-robot interaction

    Directory of Open Access Journals (Sweden)

    Berns Karsten

    2018-01-01

    Full Text Available Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm and body gestures or mimics can improve the understanding of the intention of the robot. On the other hand, by perceiving such mechanisms of a human in a typical interaction scenario, the humanoid robot can adapt its interaction skills in a better way. In this work, the perception system of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, is presented in the context of human-robot interaction.

  20. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    Science.gov (United States)

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains as yet unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with patients' ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties to exploit facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for comprehension of social deficits in this mental disorder.

  1. Humans and Robots. Educational Brief.

    Science.gov (United States)

    National Aeronautics and Space Administration, Washington, DC.

    This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)

  2. Robot, human and communication; Robotto/ningen/comyunikeshon

    Energy Technology Data Exchange (ETDEWEB)

    Suehiro, T.

    1996-04-10

    Recently, interest has been growing in robots that work in the same environment as human beings and live alongside them. Such robots require greater environmental adaptability and more robust systems than conventional robots. Above all, communication between human beings and robots during their cooperation is becoming a new problem. Hitherto, for industrial robots, cooperation between human beings and the robot was limited to its programming. While this suits repeated execution of the same motion, the applicable work was limited to comparatively simple tasks in the factory, and it was difficult to change part of the task content or apply the robot to other work. Furthermore, for remote-controlled intelligent work robots, represented by critical-work robots, cooperation between human beings and the robot is conducted through operation from a remote location. In this paper, communication for robots that live with human beings is examined. 17 refs., 1 fig.

  3. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  4. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    Abstract—This paper introduces a new method to determine a person’s pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and timeextended interaction between a mobile robot and a human. The robot......’s pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the persons social spaces. The method is tested in experiments...
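    A minimal sketch of a person-centred potential function of the kind the abstract refers to is given below, assuming the robot may approach more closely from the front once a person has signalled interest; the cost shape, parameters and names are illustrative, not the published formulation:

```python
import numpy as np

def social_cost(robot_xy, person_xy, person_heading, interested,
                sigma_near=0.6, sigma_far=1.4):
    """Repulsive cost centred on the person. When the person is interested and the
    robot is in front of them, the cost decays faster (sigma_near), so a gradient-
    descending planner may approach more closely from the front."""
    rel = np.asarray(robot_xy, dtype=float) - np.asarray(person_xy, dtype=float)
    dist = np.linalg.norm(rel)
    bearing = np.arctan2(rel[1], rel[0]) - person_heading
    in_front = np.cos(bearing) > 0.0
    sigma = sigma_near if (interested and in_front) else sigma_far
    return float(np.exp(-dist**2 / (2.0 * sigma**2)))

# The robot would combine this cost with goal attraction and descend its gradient.
```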

  5. Human-Robot Collaboration Dynamic Impact Testing and Calibration Instrument for Disposable Robot Safety Artifacts.

    Science.gov (United States)

    Dagalakis, Nicholas G; Yoo, Jae Myung; Oeste, Thomas

    2016-01-01

    The Dynamic Impact Testing and Calibration Instrument (DITCI) is a simple instrument with a significant data collection and analysis capability that is used for the testing and calibration of biosimulant human tissue artifacts. These artifacts may be used to measure the severity of injuries caused in the case of a robot impact with a human. In this paper we describe the DITCI adjustable impact and flexible foundation mechanism, which allows the selection of a variety of impact force levels and foundation stiffness. The instrument can accommodate arrays of a variety of sensors and impact tools, simulating both real manufacturing tools and the testing requirements of standards setting organizations. A computer data acquisition system may collect a variety of impact motion, force, and torque data, which are used to develop a variety of mathematical model representations of the artifacts. Finally, we describe the fabrication and testing of human abdomen soft tissue artifacts, used to display the magnitude of impact tissue deformation. Impact tests were performed at various maximum impact force and average pressure levels.

  6. Anthropomorphism in Human-Robot Co-evolution.

    Science.gov (United States)

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

    Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents - social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots "social presence" and "social behaviors" that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of 'applied anthropomorphism' as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a "cheating" technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn "anthropomorphism-based" social robots. To address the relevant ethical issues, we promote a critical experimentally based ethical approach to social robotics, "synthetic ethics," which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

  7. Robots and humans: synergy in planetary exploration

    Science.gov (United States)

    Landis, Geoffrey A.

    2004-01-01

    How will humans and robots cooperate in future planetary exploration? Are humans and robots fundamentally separate modes of exploration, or can humans and robots work together to synergistically explore the solar system? It is proposed that humans and robots can work together in exploring the planets by use of telerobotic operation to expand the function and usefulness of human explorers, and to extend the range of human exploration to hostile environments. Published by Elsevier Ltd.

  8. Humanoid Robots and Human Society

    OpenAIRE

    Bahishti, Adam A

    2017-01-01

    Almost every aspect of modern human life starting from the smartphone to the smart houses you live in has been influenced by science and technology. The field of science and technology has advanced throughout the last few decades. Among those advancements, robots have become significant by managing most of our day-to-day tasks and trying to get close to human lives. As robotics and autonomous systems flourish, human-robot relationships are becoming increasingly important. Recently humanoid ro...

  9. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    Science.gov (United States)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with the human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed which helps the users pinpoint source information or helps the operator with the goals of the mission. The paper presents a preliminary Human Factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel Augmented Reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.

  10. Robots for Astrobiology!

    Science.gov (United States)

    Boston, Penelope J.

    2016-01-01

    The search for life and its study is known as astrobiology. Conducting that search on other planets in our Solar System is a major goal of NASA and other space agencies, and a driving passion of the community of scientists and engineers around the world. We practice for that search in many ways, from exploring and studying extreme environments on Earth, to developing robots to go to other planets and help us look for any possible life that may be there or may have been there in the past. The unique challenges of space exploration make collaborations between robots and humans essential. The products of those collaborations will be novel and driven by the features of wholly new environments. For space and planetary environments that are intolerable for humans or where humans present an unacceptable risk to possible biologically sensitive sites, autonomous robots or telepresence offer excellent choices. The search for life signs on Mars fits within this category, especially in advance of human landed missions there, but also as assistants and tools once humans reach the Red Planet. For planetary destinations where we do not envision humans ever going in person, like bitterly cold icy moons, or ocean worlds with thick ice roofs that essentially make them planetary-sized ice caves, we will rely on robots alone to visit those environments for us and enable us to explore and understand any life that we may find there. Current generation robots are not quite ready for some of the tasks that we need them to do, so there are many opportunities for roboticists of the future to advance novel types of mobility, autonomy, and bio-inspired robotic designs to help us accomplish our astrobiological goals. We see an exciting partnership between robotics and astrobiology continually strengthening as we jointly pursue the quest to find extraterrestrial life.

  11. Framework to Implement Collaborative Robots in Manual Assembly: A Lean Automation Approach

    DEFF Research Database (Denmark)

    Malik, Ali Ahmad; Bilberg, Arne

    The recent proliferation of smart manufacturing technologies has given rise to the concept of hybrid automation for assembly systems, utilizing the best of humans and robots in combination. Based on the ability to work alongside human workers, the next generation of industrial robots (or robotics 2...... of virtual simulations is discussed for validation and optimization of the human-robot work environment....

  12. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  13. Human futures amongst robot teachers?

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Bhroin, Niamh Ni; Ess, Charles Melvin

    2017-01-01

    In 2009 the world’s first robot teacher, Saya, was introduced into a classroom. Saya could express six basic emotions and shout orders like 'be quiet'. Since 2009, instructional robot technologies have emerged around the world and it is estimated that robot teachers may become a regular...... technological feature in the classroom and even 'take over' from human teachers within the next ten to fifteen years.   The paper set out to examine some of the possible ethical implications for human futures in relation to the immanent rise of robot teachers. This is done through combining perspectives...... on technology coming from design, science and technology, education, and philosophy (McCarthy & Wright, 2004; Jasanoff, 2016; Selwyn 2016; Verbeek, 2011). The framework calls attention to how particular robot teachers institute certain educational, experiential and existential terrains within which human...

  14. Advanced Technologies for Robotic Exploration Leading to Human Exploration: Results from the SpaceOps 2015 Workshop

    Science.gov (United States)

    Lupisella, Mark L.; Mueller, Thomas

    2016-01-01

    This paper will provide a summary and analysis of the SpaceOps 2015 Workshop all-day session on "Advanced Technologies for Robotic Exploration, Leading to Human Exploration", held at Fucino Space Center, Italy on June 12th, 2015. The session was primarily intended to explore how robotic missions and robotics technologies more generally can help lead to human exploration missions. The session included a wide range of presentations that were roughly grouped into (1) broader background, conceptual, and high-level operations concepts presentations such as the International Space Exploration Coordination Group Roadmap, followed by (2) more detailed narrower presentations such as rover autonomy and communications. The broader presentations helped to provide context and specific technical hooks, and helped lay a foundation for the narrower presentations on more specific challenges and technologies, as well as for the discussion that followed. The discussion that followed the presentations touched on key questions, themes, actions and potential international collaboration opportunities. Some of the themes that were touched on were (1) multi-agent systems, (2) decentralized command and control, (3) autonomy, (4) low-latency teleoperations, (5) science operations, (6) communications, (7) technology pull vs. technology push, and (8) the roles and challenges of operations in early human architecture and mission concept formulation. A number of potential action items resulted from the workshop session, including: (1) using CCSDS as a further collaboration mechanism for human mission operations, (2) making further contact with subject matter experts, (3) initiating informal collaborative efforts to allow for rapid and efficient implementation, and (4) exploring how SpaceOps can support collaboration and information exchange with human exploration efforts. This paper will summarize the session and provide an overview of the above subjects as they emerged from the SpaceOps 2015

  15. Effect of cognitive biases on human-robot interaction: a case study of robot's misattribution

    OpenAIRE

    Biswas, Mriganka; Murray, John

    2014-01-01

    This paper presents a model for developing long-term human-robot interactions and social relationships based on the principle of 'human' cognitive biases applied to a robot. The aim of this work is to study how a robot influenced with human ‘misattribution’ helps to build better human-robot interactions than unbiased robots. The results presented in this paper suggest that it is important to know the effect of cognitive biases in human characteristics and interactions in order to better u...

  16. Feasibility study on use of virtual collaborator for remote NPP control

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Lee, Seung Jun; Seong, Poong Hyun

    2001-01-01

    In this paper, we study the feasibility of the Virtual Collaborator for remote NPP control as a long-term research theme, and we present similar and related research carried out at the I and C laboratory of the nuclear department of KAIST. Yoshikawa's laboratory at Kyoto University in Japan is developing the 'virtual collaborator', an agent robot realized in virtual reality. The Virtual Collaborator is a new type of human-machine interface which works as an 'intelligent interface agent' to help operators manipulate large-scale machine systems such as power plants. It is a sort of 'virtual robot' which behaves like an intelligent agent robot in virtual space and can communicate naturally with humans, as humans do with each other

  17. Human-Robot Interaction in High Vulnerability Domains

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

    Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. The complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots; from direct manual control, shared human-robotic control, or no active human control (i.e. human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of the technologies on the system performance and the on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.

  18. User localization during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, F; Gorostiza, Javi F; Malfaz, María; Salichs, Miguel A

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites to obtain a natural interaction between human-human and human-robot is an adequate spatial situation between the interlocutors, that is, to be orientated and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable allocation of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study has been made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented.

  19. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Michael Jae-Yoon Chung

    Full Text Available A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
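    The goal-inference step can be written compactly as a Bayesian posterior over goals given observed actions, with the likelihoods supplied by the robot's self-learned action models. The sketch below is schematic, with a hand-made likelihood table; infer_goal and the two-goal tabletop example are hypothetical and are not the authors' learned models:

```python
import numpy as np

def infer_goal(observed_actions, goals, action_likelihood, prior=None):
    """P(goal | actions) is proportional to P(goal) * prod_t P(action_t | goal)."""
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    log_post = {g: np.log(prior[g]) for g in goals}
    for a in observed_actions:
        for g in goals:
            log_post[g] += np.log(action_likelihood(a, g))
    z = np.logaddexp.reduce(list(log_post.values()))
    return {g: float(np.exp(lp - z)) for g, lp in log_post.items()}

# Hypothetical two-goal example: likelihood of each observed action under each goal.
table = {("reach_left", "stack"): 0.7, ("reach_left", "sort"): 0.2,
         ("grasp_red", "stack"): 0.6, ("grasp_red", "sort"): 0.4}
posterior = infer_goal(["reach_left", "grasp_red"], ["stack", "sort"],
                       lambda a, g: table[(a, g)])
print(posterior)   # most of the probability mass falls on "stack"
```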

  20. Socially Impaired Robots: Human Social Disorders and Robots' Socio-Emotional Intelligence

    OpenAIRE

    Vitale, Jonathan; Williams, Mary-Anne; Johnston, Benjamin

    2016-01-01

    Social robots need intelligence in order to safely coexist and interact with humans. Robots without functional abilities in understanding others and unable to empathise might be a societal risk and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting...

  1. The ethics of human-robot relationships

    NARCIS (Netherlands)

    de Graaf, M.M.A.

    2015-01-01

    Currently, human-robot interactions are constructed according to the rules of human-human interactions inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this

  2. Collaborative gaming and competition for CS-STEM education using SPHERES Zero Robotics

    Science.gov (United States)

    Nag, Sreeja; Katz, Jacob G.; Saenz-Otero, Alvar

    2013-02-01

    There is widespread investment of resources in the fields of Computer Science, Science, Technology, Engineering, Mathematics (CS-STEM) education to improve STEM interests and skills. This paper addresses the goal of revolutionizing student education using collaborative gaming and competition, both in virtual simulation environments and on real hardware in space. The concept is demonstrated using the SPHERES Zero Robotics (ZR) Program which is a robotics programming competition. The robots are miniature satellites called SPHERES—an experimental test bed developed by the MIT SSL on the International Space Station (ISS) to test navigation, formation flight and control algorithms in microgravity. The participants compete to win a technically challenging game by programming their strategies into the SPHERES satellites, completely from a web browser. The programs are demonstrated in simulation, on ground hardware and then in a final competition when an astronaut runs the student software aboard the ISS. ZR had a pilot event in 2009 with 10 High School (HS) students, a nationwide pilot tournament in 2010 with over 200 HS students from 19 US states, a summer tournament in 2010 with ˜150 middle school students and an open-registration tournament in 2011 with over 1000 HS students from USA and Europe. The influence of collaboration was investigated by (1) building new web infrastructure and an Integrated Development Environment where intensive inter-participant collaboration is possible, (2) designing and programming a game to solve a relevant formation flight problem, collaborative in nature—and (3) structuring a tournament such that inter-team collaboration is mandated. This paper introduces the ZR web tools, assesses the educational value delivered by the program using space and games and evaluates the utility of collaborative gaming within this framework. There were three types of collaborations as variables—within matches (to achieve game objectives), inter

  3. Mobile Robots in Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    ......, lawn mowers, toy pets, or as assisting technologies for care giving. If we want robots to be an even larger and more integrated part of our everyday environments, they need to become more intelligent, and behave safe and natural to the humans in the environment. This thesis deals with making intelligent mobile robotic devices capable of being a more natural and sociable actor in a human environment. More specifically, the emphasis is on safe and natural motion and navigation issues. The first part of the work focuses on developing a robotic system which estimates human interest in interacting...... as being able to navigate safely around one person, the robots must also be able to navigate in environments with more people. This can be environments such as pedestrian streets, hospital corridors, train stations or airports. The developed human-aware navigation strategy is enhanced to formulate...

  4. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  5. Human-machine Interface for Presentation Robot

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Ondroušek, V.

    2012-01-01

    Roč. 6, č. 2 (2012), s. 17-21 ISSN 1897-8649 Institutional research plan: CEZ:AV0Z20760514 Keywords : human-robot interface * mobile robot * presentation robot Subject RIV: JD - Computer Applications, Robotics

  6. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    Science.gov (United States)

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
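
    As an illustration of the outer-loop idea described above, the sketch below is a minimal, assumption-laden example rather than the authors' implementation: it picks prescribed impedance parameters by solving a small LQR problem. Where the paper avoids needing a human model by using integral reinforcement learning, this sketch simply solves the Riccati equation directly, with assumed error dynamics and cost weights.

```python
# Hypothetical sketch: LQR choice of prescribed impedance parameters.
# The double-integrator error model and the weights Q, R are assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

# State x = [tracking error, error rate]; input u = interaction force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([10.0, 1.0])   # penalty on tracking error (assumed)
R = np.array([[0.1]])      # penalty on human effort (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P        # optimal state-feedback gain

# The gains can be read as a prescribed impedance f = k*e + d*e_dot.
k_stiffness, d_damping = K[0, 0], K[0, 1]
print(f"stiffness={k_stiffness:.2f}, damping={d_damping:.2f}")
```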

  7. Toward a framework for levels of robot autonomy in human-robot interaction.

    Science.gov (United States)

    Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A

    2014-07-01

    A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence - and are influenced by - robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.

  8. Accelerating Robot Development through Integral Analysis of Human-Robot Interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2007-01-01

    The development of interactive robots is a complicated process, involving a plethora of psychological, technical, and contextual influences. To design a robot capable of operating "intelligently" in everyday situations, one needs a profound understanding of human-robot interaction (HRI). We propose

  9. Modeling Leadership Styles in Human-Robot Team Dynamics

    Science.gov (United States)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios-a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  10. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

    Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed

  11. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Aimi S. Ghazali

    2018-06-01

    Full Text Available The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest robots that are intended to influence human behavior should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.

  12. A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.

    Science.gov (United States)

    Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash

    2017-07-01

    This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve an enhanced physical human-robot interaction performance and deliver appropriate level of assistance to the human operator.
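
    A toy sketch of the general idea of adapting robot assistance to an online estimate of the human's state follows; the mapping, gains, and the EMG-style effort signal are illustrative assumptions, not the controller proposed in the paper.

```python
# Hypothetical sketch: scale assistive stiffness with an estimate of human effort.
import numpy as np

def assistive_stiffness(human_effort, k_min=100.0, k_max=800.0):
    """Map a normalized human-effort estimate (e.g., from EMG) in [0, 1] to a robot
    stiffness: the robot stiffens and takes over more load as the human's effort
    drops (one plausible assistance policy, assumed here for illustration)."""
    effort = np.clip(human_effort, 0.0, 1.0)
    return k_max - effort * (k_max - k_min)

def impedance_force(k, x_desired, x, d=50.0, x_dot=0.0):
    """Assistive force from a simple spring-damper impedance model."""
    return k * (x_desired - x) - d * x_dot

# Example: low human effort -> high robot stiffness -> larger assistive force.
print(impedance_force(assistive_stiffness(0.3), x_desired=0.5, x=0.45))
```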

  13. Human-Robot Teaming: From Space Robotics to Self-Driving Cars

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    In this talk, I describe how NASA Ames has been developing and testing robots for space exploration. In our research, we have focused on studying how human-robot teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of manual control and autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars, enabling humans to support autonomous vehicles operating in unpredictable and difficult situations.

  14. Human-Robot Teams for Unknown and Uncertain Environments

    Science.gov (United States)

    Fong, Terry

    2015-01-01

    Human-robot interaction is the study of interactions between humans and robots, often referred to as HRI by researchers. It is a multidisciplinary field with contributions from fields such as human-computer interaction and artificial intelligence.

  15. External Environment Sensing by a Module on Self-reconfiguration Robot

    Science.gov (United States)

    Goto, Tomotsugu; Uchida, Masafumi; Onogaki, Hitoshi

    When a robot and a human work together collaboratively, they share one working environment and each interferes with the other. The boundary of each partner's complex, dynamic occupation area changes during the connection movements that make up collaborative work. The main constraint governing the robustness of such connection movements is each partner's physical characteristics, that is, the embodiment. A robot body is variable, whereas the embodiment of a human is essentially fixed. Safe and robust connection movements therefore result when the robot has a body that is well suited to the embodiment of its human partner. The purpose of this research is to realize collaborative work between a self-reconfiguration robot and a human. To achieve this purpose, the external-environment sensing function of a module, a component of the self-reconfiguration robot, was examined. The robot body vibrates when a module actively actuates an arm, and this vibration is observed using acceleration sensors. The measured data reflect differences in the objects in contact with the robot body. This paper proposes an external-environment sensing technique that identifies these differences using a neural network.
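
    A minimal sketch of the kind of pipeline this sensing scheme implies is shown below: hand-crafted features of an acceleration window fed to a small neural-network classifier. The data, features, and labels are synthetic placeholders, not the authors' setup.

```python
# Hypothetical sketch: classify contact type from vibration (accelerometer) windows.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def features(window):
    """Simple hand-crafted features of one acceleration window."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

# Placeholder dataset: windows recorded while the body touches two different objects.
X = np.array([features(rng.normal(0, s, 200)) for s in (0.5,) * 50 + (1.5,) * 50])
y = np.array([0] * 50 + [1] * 50)   # 0 = soft contact, 1 = hard contact (assumed labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```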

  16. A cadaver study of mastoidectomy using an image-guided human-robot collaborative control system.

    Science.gov (United States)

    Yoo, Myung Hoon; Lee, Hwan Seo; Yang, Chan Joo; Lee, Seung Hwan; Lim, Hoon; Lee, Seongpung; Yi, Byung-Ju; Chung, Jong Woo

    2017-10-01

    Surgical precision would be better achieved with the development of an anatomical monitoring and controlling robot system than by traditional surgery techniques alone. We evaluated the feasibility of robot-assisted mastoidectomy in terms of duration, precision, and safety. Human cadaveric study. We developed a multi-degree-of-freedom robot system for a surgical drill with a balancing arm. The drill system is manipulated by the surgeon, the motion of the drill burr is monitored by the image-guided system, and the brake is controlled by the robotic system. The system also includes an alarm as well as the brake to help avoid unexpected damage to vital structures. Experimental mastoidectomy was performed in 11 temporal bones of six cadavers. Parameters including duration and safety were assessed, as well as intraoperative damage, which was judged via pre- and post-operative computed tomography. The duration of mastoidectomy in our study was comparable with that required for chronic otitis media patients. Although minor damage, such as dura exposure without tearing, was noted, no critical damage to the facial nerve or other important structures was observed. When the brake system was set to 1 mm from the facial nerve, the average postoperative bone thickness over the facial nerve was 1.39, 1.41, 1.22, 1.41, and 1.55 mm in the lateral, posterior pyramidal and anterior, lateral, and posterior mastoid portions, respectively. Mastoidectomy can be successfully performed using our robot-assisted system while maintaining a pre-set limit of 1 mm in most cases. This system may thus be useful for more inexperienced surgeons. NA.
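
    The sketch below illustrates the kind of distance-based brake/alarm logic such an image-guided system needs, using the pre-set 1 mm margin mentioned above; the point-cloud representation of the nerve, the 3 mm alarm margin, and the function names are assumptions, not the authors' software.

```python
# Hypothetical sketch: brake/alarm check against a segmented facial-nerve point set.
import numpy as np

BRAKE_MARGIN_MM = 1.0
ALARM_MARGIN_MM = 3.0

def check_drill_safety(tip_mm, nerve_points_mm):
    """Return 'brake', 'alarm' or 'ok' based on distance to the nearest nerve point."""
    d = np.min(np.linalg.norm(nerve_points_mm - tip_mm, axis=1))
    if d <= BRAKE_MARGIN_MM:
        return "brake", d
    if d <= ALARM_MARGIN_MM:
        return "alarm", d
    return "ok", d

# Example: nerve segmented as a short polyline of points, drill tip 2.2 mm away.
nerve = np.array([[0.0, 0.0, z] for z in np.linspace(0, 10, 50)])
state, dist = check_drill_safety(np.array([2.2, 0.0, 5.0]), nerve)
print(state, round(dist, 2))
```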

  17. Human-Automation Allocations for Current Robotic Space Operations

    Science.gov (United States)

    Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.

    2018-01-01

    Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance due to ineffective user interfaces, system designs and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI, is the gap related to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocations determine the human-system performance as it identifies the functions and performance levels required by the automation/robotic system, and in turn, what work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account each of the human, automation, and robotic systems capabilities and limitations. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will significantly change in future exploration missions, particularly as crew becomes more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocations, we must first benchmark the allocations and allocation methods that have been used. We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) To

  18. The Relationship between Robot's Nonverbal Behaviour and Human's Likability Based on Human's Personality.

    Science.gov (United States)

    Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro

    2018-05-30

    At current state, although robotics technology has been immensely developed, the uncertainty to completely engage in human-robot interaction is still growing among people. Many current studies then started to concern about human factors that might influence human's likability like human's personality, and found that compatibility between human's and robot's personality (expressions of personality characteristics) can enhance human's likability. However, it is still unclear whether specific means and strategy of robot's nonverbal behaviours enhances likability from human with different personality traits and whether there is a relationship between robot's nonverbal behaviours and human's likability based on human's personality. In this study, we investigated and focused on the interaction via gaze and head nodding behaviours (mutual gaze convergence and head nodding synchrony) between introvert/extravert participants and robot in two communication strategies (Backchanneling and Turn-taking). Our findings reveal that the introvert participants are positively affected by backchanneling in robot's head nodding behaviour, which results in substantial head nodding synchrony whereas the extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between robot's nonverbal behaviour and human's likability based on human's personality.

  19. Human-Robot Interaction Directed Research Project

    Science.gov (United States)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects type of symbology (CG and SG) has on operator tasks performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first study by evaluating the effects of the type of

  20. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  1. Self-Reconfiguration Planning of Robot Embodiment for Inherent Safe Performance

    Science.gov (United States)

    Uchida, Masafumi; Nozawa, Akio; Asano, Hirotoshi; Onogaki, Hitoshi; Mizuno, Tota; Park, Young-Il; Ide, Hideto; Yokoyama, Shuichi

    When a robot and a human work together collaboratively, they share one working environment and each interferes with the other. In other words, physical contact and the exchange of forces between robot and human cannot be avoided. The boundary of each partner's complex, dynamic occupation area changes during the connection movements that make up collaborative work. The main constraint governing the robustness of such connection movements is each partner's physical characteristics, that is, the embodiment. A robot body is variable, whereas the embodiment of a human is essentially fixed. Safe and robust connection movements therefore result when the robot has a body that is well suited to the embodiment of its human partner. The purpose of this research is to realize collaborative work between a self-reconfiguration robot and a human. To achieve this purpose, a self-reconfiguration algorithm based on indexes that evaluate the robot body from a macroscopic point of view was examined on a modular robot system with a 2-D lattice structure. In this paper, we specifically investigate the effect of limiting each individual module's learning objective to cooperative behavior with its adjoining modules, as assessed by the macroscopic evaluation index.

  2. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

    This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. A new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation is presented. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed. .

  3. Forming Human-Robot Teams Across Time and Space

    Science.gov (United States)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot

  4. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    Science.gov (United States)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  5. Human exploration and settlement of Mars - The roles of humans and robots

    Science.gov (United States)

    Duke, Michael B.

    1991-01-01

    The scientific objectives and strategies for human settlement on Mars are examined in the context of the Space Exploration Initiative (SEI). An integrated strategy for humans and robots in the exploration and settlement of Mars is examined. Such an effort would feature robotic, telerobotic, and human-supervised robotic phases.

  6. Ethorobotics: A New Approach to Human-Robot Relationship

    Directory of Open Access Journals (Sweden)

    Ádám Miklósi

    2017-06-01

    Full Text Available Here we aim to lay the theoretical foundations of human-robot relationship drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with appropriate form of social competence then humans will eventually interact with them independent of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications.

  7. Ethorobotics: A New Approach to Human-Robot Relationship

    Science.gov (United States)

    Miklósi, Ádám; Korondi, Péter; Matellán, Vicente; Gácsi, Márta

    2017-01-01

    Here we aim to lay the theoretical foundations of human-robot relationship drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with appropriate form of social competence then humans will eventually interact with them independent of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications. PMID:28649213

  8. Human motion behavior while interacting with an industrial robot.

    Science.gov (United States)

    Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus

    2012-01-01

    Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which leads to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions. Avoiding such collisions is a central safety requirement. It can be realized with sundry sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times. Optimized path planning algorithms have to be developed to avoid that. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when the encounter occurs. The participants' approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities is observed.

  9. Presence of Life-Like Robot Expressions Influences Children’s Enjoyment of Human-Robot Interactions in the Field

    NARCIS (Netherlands)

    Cameron, David; Fernando, Samuel; Collins, Emily; Millings, Abigail; Moore, Roger; Sharkey, Amanda; Evers, Vanessa; Prescott, Tony

    Emotions, and emotional expression, have a broad influence on the interactions we have with others and are thus a key factor to consider in developing social robots. As part of a collaborative EU project, this study examined the impact of lifelike affective facial expressions, in the humanoid robot

  10. Robot assistant versus human or another robot assistant in patients undergoing laparoscopic cholecystectomy.

    Science.gov (United States)

    Gurusamy, Kurinchi Selvan; Samraj, Kumarakrishnan; Fusai, Giuseppe; Davidson, Brian R

    2012-09-12

    The role of a robotic assistant in laparoscopic cholecystectomy is controversial. While some trials have shown distinct advantages of a robotic assistant over a human assistant others have not, and it is unclear which robotic assistant is best. The aims of this review are to assess the benefits and harms of a robot assistant versus human assistant or versus another robot assistant in laparoscopic cholecystectomy, and to assess whether the robot can substitute the human assistant. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and Science Citation Index Expanded (until February 2012) for identifying the randomised clinical trials. Only randomised clinical trials (irrespective of language, blinding, or publication status) comparing robot assistants versus human assistants in laparoscopic cholecystectomy were considered for the review. Randomised clinical trials comparing different types of robot assistants were also considered for the review. Two authors independently identified the trials for inclusion and independently extracted the data. We calculated the risk ratio (RR) or mean difference (MD) with 95% confidence interval (CI) using the fixed-effect and the random-effects models based on intention-to-treat analysis, when possible, using Review Manager 5. We included six trials with 560 patients. One trial involving 129 patients did not state the number of patients randomised to the two groups. In the remaining five trials 431 patients were randomised, 212 to the robot assistant group and 219 to the human assistant group. All the trials were at high risk of bias. Mortality and morbidity were reported in only one trial with 40 patients. There was no mortality or morbidity in either group. Mortality and morbidity were not reported in the remaining trials. Quality of life or the proportion of patients who were discharged as day-patient laparoscopic cholecystectomy patients were not reported in any

  11. Smooth leader or sharp follower? Playing the mirror game with a robot.

    Science.gov (United States)

    Kashi, Shir; Levy-Tzedek, Shelly

    2018-01-01

    The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.

  12. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available The Robot Operating System (ROS) is an important platform for developing robot applications. One application area is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot using a differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot's navigation system by implementing Simultaneous Localization and Mapping (SLAM).
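
    For illustration, here is a minimal rospy node in the spirit of a human-follower robot built on ROS; the topic names, message type, frame convention, and gains are assumptions, not the HFTR's actual code.

```python
# Hypothetical sketch: simple proportional person-following node using rospy.
import rospy
from geometry_msgs.msg import Twist, PointStamped

class HumanFollower:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/tracked_person", PointStamped, self.on_person)

    def on_person(self, msg):
        cmd = Twist()
        # Keep roughly 1 m behind the person: proportional control on range and bearing.
        cmd.linear.x = 0.5 * (msg.point.x - 1.0)   # forward error (assumed robot frame)
        cmd.angular.z = 1.0 * msg.point.y          # lateral offset -> turn toward person
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("human_follower")
    HumanFollower()
    rospy.spin()
```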

  13. Robots and Humans in Planetary Exploration: Working Together?

    Science.gov (United States)

    Landis, Geoffrey A.; Lyons, Valerie (Technical Monitor)

    2002-01-01

    Today's approach to human-robotic cooperation in planetary exploration focuses on using robotic probes as precursors to human exploration. A large portion of current NASA planetary surface exploration is focused on Mars, and robotic probes are seen as precursors to human exploration in: learning about operation and mobility on Mars; learning about the environment of Mars; mapping the planet and selecting landing sites for human missions; demonstrating critical technology; and manufacturing fuel before human presence and emplacing elements of human-support infrastructure

  14. Approaching human performance the functionality-driven Awiwi robot hand

    CERN Document Server

    Grebenstein, Markus

    2014-01-01

    Humanoid robotics have made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: robustness against impacts, fast dynamics, and human-like grasping and manipulation performance. Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated in the high-performance anthropomorphic Awiwi Hand presented in this book. ...

  15. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Directory of Open Access Journals (Sweden)

    Juan A. Corrales

    2011-10-01

    Full Text Available Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach.
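
    A highly simplified sketch of the two-phase logic with a human-proximity gate that the abstract describes (approach under visual servoing, then tactile-driven manipulation) follows; the thresholds, gains, and signal names are illustrative assumptions, not the paper's controller.

```python
# Hypothetical sketch: phase-switching control step with a human-proximity safety gate.
def control_command(human_dist_m, visual_error_m, grip_force_n,
                    safe_dist_m=1.0, approach_tol_m=0.01, grip_setpoint_n=2.0):
    """Return a (mode, command) pair for one control cycle."""
    if human_dist_m < safe_dist_m:
        return "hold", 0.0                        # a human is too close: stop the arm
    if visual_error_m > approach_tol_m:
        return "servo", -0.8 * visual_error_m     # phase 1: visual servoing toward the object
    return "grasp", 0.5 * (grip_setpoint_n - grip_force_n)  # phase 2: regulate grip force

# Example cycle: no human nearby, object not yet reached -> keep servoing.
print(control_command(human_dist_m=2.0, visual_error_m=0.15, grip_force_n=0.0))
```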

  16. Human - Robot Proximity

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    The media and political/managerial levels focus on the opportunities to re-perform Denmark through digitization. Feeding assistive robotics is a welfare technology, relevant to citizens with low or no function in their arms. Despite national dissemination strategies, it proves difficult to recruit...... the study that took place as multi-sited ethnography at different locations in Denmark and Sweden. Based on desk research, observation of meals and interviews I examine socio-technological imaginaries and their practical implications. Human - robotics interaction demands engagement and understanding...

  17. Applications of artificial intelligence in safe human-robot interactions.

    Science.gov (United States)

    Najmaei, Nima; Kermani, Mehrdad R

    2011-04-01

    The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information about the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interference between the human's and the robot's operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
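
    To make the danger-assessment step concrete, here is a toy scalar danger index and speed-scaling rule; the functional form and constants are assumptions for illustration, not the criterion used in the paper.

```python
# Hypothetical sketch: danger index from distance and approach speed, used to slow the robot.
import numpy as np

def danger_index(distance_m, approach_speed_mps, d0=2.0, v0=1.0):
    """0 = no danger, 1 = imminent. d0 and v0 are assumed normalization constants."""
    d_term = np.clip(1.0 - distance_m / d0, 0.0, 1.0)
    v_term = np.clip(approach_speed_mps / v0, 0.0, 1.0)
    return np.clip(0.6 * d_term + 0.4 * v_term, 0.0, 1.0)

def scaled_speed(nominal_speed, danger):
    return nominal_speed * (1.0 - danger)     # slow down, stop when danger reaches 1

# Example: human 0.8 m away, approaching at 0.5 m/s.
print(scaled_speed(0.5, danger_index(0.8, 0.5)))
```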

  18. The Architecture of Children's Use of Language and Tools When Problem Solving Collaboratively with Robotics

    Science.gov (United States)

    Mills, Kathy A.; Chandra, Vinesh; Park, Ji Yong

    2013-01-01

    This paper demonstrates, following Vygotsky, that language and tool use has a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children's collaborative problem solving with robotics programming…

  19. Human-Robot Teamwork in USAR Environments: The TRADR Project

    NARCIS (Netherlands)

    Greeff, J. de; Hindriks, K.; Neerincx, M.A.; Kruijff-Korbayova, I.

    2015-01-01

    The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search and rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled,

  20. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    Directory of Open Access Journals (Sweden)

    Felipe Cid

    2014-04-01

    Full Text Available This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.

  1. When in Rome: the role of culture & context in adherence to robot recommendations

    NARCIS (Netherlands)

    Wang, L.; Rau, P.-L.P.; Evers, V.; Robinson, B.K.; Hinds, P.

    2010-01-01

    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of

  2. Robot Control Overview: An Industrial Perspective

    Directory of Open Access Journals (Sweden)

    T. Brogårdh

    2009-07-01

    Full Text Available One key competence for robot manufacturers is robot control, defined as all the technologies needed to control the electromechanical system of an industrial robot. By means of modeling, identification, optimization, and model-based control it is possible to reduce robot cost, increase robot performance, and solve requirements from new automation concepts and new application processes. Model-based control, including kinematics error compensation, optimal servo reference- and feed-forward generation, and servo design, tuning, and scheduling, has meant a breakthrough for the use of robots in industry. Relying on this breakthrough, new automation concepts such as high-performance multi-robot collaboration and human-robot collaboration can be introduced. Robot manufacturers can build robots with more compliant components and mechanical structures without losing performance, and robots can also be used in applications with very high performance requirements, e.g., in assembly, machining, and laser cutting. In the future it is expected that the importance of sensor control will increase, both with respect to sensors in the robot structure to increase the control performance of the robot itself and sensors outside the robot related to the applications and the automation systems. In this connection sensor fusion and learning functionalities will be needed together with the robot control for easy and intuitive installation, programming, and maintenance of industrial robots.
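
    As a concrete, deliberately simplified example of the model-based feed-forward generation mentioned above, the sketch below computes the feed-forward torque along a sampled trajectory for a single rotary joint modeled as a point mass on a massless link; all dynamic parameters are assumed.

```python
# Hypothetical sketch: computed-torque feed-forward for a 1-DOF link.
import numpy as np

def feedforward_torque(q, qd, qdd, m=5.0, l=0.4, b=0.1, g=9.81):
    """tau = inertia + viscous friction + gravity for a point mass m at radius l."""
    inertia = m * l**2 * qdd
    friction = b * qd
    gravity = m * g * l * np.cos(q)
    return inertia + friction + gravity

# Feed-forward along a sampled joint trajectory, to be added to the servo's feedback action.
t = np.linspace(0, 1, 100)
q = 0.5 * np.sin(2 * np.pi * t)
qd = np.gradient(q, t)
qdd = np.gradient(qd, t)
tau_ff = feedforward_torque(q, qd, qdd)
print(tau_ff[:3])
```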

  3. Robot learning from human teachers

    CERN Document Server

    Chernova, Sonia

    2014-01-01

    Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn f

  4. Safe physical human robot interaction- past, present and future

    International Nuclear Information System (INIS)

    Pervez, Aslam; Ryu, Jeha

    2008-01-01

    When a robot physically interacts with a human user, the requirements change drastically. The most important requirement is the safety of the human user, in the sense that the robot should not harm the human in any situation. During the last few years, research has been focused on various aspects of safe physical human-robot interaction. This paper provides a review of the work on safe physical interaction of robotic systems sharing their workspace with human users (especially elderly people). Three distinct areas of research are identified: interaction safety assessment, interaction safety through design, and interaction safety through planning and control. The paper then highlights the current challenges and available technologies and points out future research directions for realization of a safe and dependable robotic system for human users

  5. Next Generation Simulation Framework for Robotic and Human Space Missions

    Science.gov (United States)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  6. Measuring empathy for human and robot hand pain using electroencephalography.

    Science.gov (United States)

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not painful robot stimuli, though the ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots during late top-down processing similarly to how we empathize with other humans. However, the beginning of the top-down process of empathy is weaker for robots than for humans.
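
    For readers unfamiliar with the measure, the sketch below computes a mean ERP amplitude in an assumed P3 window on synthetic epochs, the kind of quantity compared between the painful and non-painful conditions above; the sampling rate, window, and effect sizes are invented.

```python
# Hypothetical sketch: mean amplitude in a P3 window, synthetic epochs only.
import numpy as np

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch from -200 ms to 800 ms
win = (t >= 0.3) & (t <= 0.6)              # assumed P3 analysis window

rng = np.random.default_rng(1)
painful = rng.normal(0, 1, (40, t.size)) + 3.0 * np.exp(-((t - 0.45) / 0.1) ** 2)
neutral = rng.normal(0, 1, (40, t.size)) + 1.5 * np.exp(-((t - 0.45) / 0.1) ** 2)

p3_painful = painful[:, win].mean()
p3_neutral = neutral[:, win].mean()
print(f"mean P3: painful={p3_painful:.2f} uV, non-painful={p3_neutral:.2f} uV")
```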

  7. Human Factors and Robotics: Current Status and Future Prospects.

    Science.gov (United States)

    Parsons, H. McIlvaine; Kearsley, Greg P.

    The principal human factors engineering issue in robotics is the division of labor between automation (robots) and human beings. This issue reflects a prime human factors engineering consideration in systems design--what equipment should do and what operators and maintainers should do. Understanding of capabilities and limitations of robots and…

  8. Remote radiation mapping and preliminary intervention using collaborating (European and Russian) mobile robots

    International Nuclear Information System (INIS)

    Piotrowski, L.; Trouville, B.; Halbach, M.; Sidorkin, N.

    1996-12-01

    The primary objective of the IMPACT project is to develop a light-weight and inexpensive mobile robot that can be used for rapid inspection missions within nuclear power plants. These interventions are to cover normal, incident and accident situations and aim at primary reconnaissance (or 'data collecting') missions. The IMPACT robot was demonstrated (April 1996) in a realistic mission at the Russian nuclear plant SMOLENSK. The demonstration, composed of 2 independent but consecutive missions, was held in a radioactive zone near turbine No. 4 of Unit 2: remote radiation mapping with localisation of radioactive sources by the IMPACT robot equipped with a (Russian) gamma-radiation sensor; deployment of a Russian intervention robot for the construction of a protective lead shield around one of the identified sources and verification that the ambient radiation level has been reduced. This mission was executed remotely by 2 mobile robots working in collaboration: a NIKIMT robot equipped with a manipulator arm and carrying lead bricks, and the IMPACT robot of mission I (radiation measurements and 'side-observer'). This manuscript describes (a) the technical characteristics of the IMPACT reconnaissance robot (3-segmented, caterpillar-tracked body; 6 video cameras placed around the mobile platform with simultaneous presentation of up to 4 video images at the control post; ability to detach remotely one of the robot's segments (i.e. the robot can divide itself into 2 separate mobile robots)) and (b) the SMOLENSK demonstration. (author)

  9. Mobile app for human-interaction with sitter robots

    Science.gov (United States)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, which makes the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could benefit the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation of robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and the human each perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feeds and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that

  10. From robot to human grasping simulation

    CERN Document Server

    León, Beatriz; Sancho-Bru, Joaquin

    2013-01-01

    The human hand and its dexterity in grasping and manipulating objects are some of the hallmarks of the human species. For years, anatomic and biomechanical studies have deepened the understanding of the human hand’s functioning and, in parallel, the robotics community has been working on the design of robotic hands capable of manipulating objects with a performance similar to that of the human hand. However, although many researchers have partially studied various aspects, to date there has been no comprehensive characterization of the human hand’s function for grasping and manipulation of

  11. Peer-to-Peer Human-Robot Interaction for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2004-01-01

    NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.

  12. Human centric object perception for service robots

    NARCIS (Netherlands)

    Alargarsamy Balasubramanian, A.C.

    2016-01-01

    The research interests and applicability of robotics have diversified and seen tremendous growth in recent years. There has been a shift from industrial robots operating in constrained settings to consumer robots working in dynamic environments associated closely with everyday human

  13. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Full Text Available Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The “gaze object sequence” was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that
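
    The recognition step described above can be illustrated with a short sketch. The following Python fragment implements a generic dynamic-time-warping (DTW) distance and a nearest-template classifier over gaze-object sequences; the integer object encoding, the toy templates and all function names are hypothetical stand-ins rather than the authors' implementation, which additionally builds its templates with barycentric averaging.

      import numpy as np

      def dtw_distance(seq_a, seq_b, dist=lambda a, b: np.linalg.norm(a - b)):
          """Classic dynamic-time-warping distance between two sequences."""
          n, m = len(seq_a), len(seq_b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = dist(seq_a[i - 1], seq_b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                       cost[i, j - 1],      # deletion
                                       cost[i - 1, j - 1])  # match
          return cost[n, m]

      def recognize_action(query, templates):
          """Label a query gaze-object sequence with the nearest template's action."""
          return min(templates, key=lambda label: dtw_distance(query, templates[label]))

      # Toy usage: objects encoded as integer IDs, one averaged template per subtask.
      templates = {"pour": np.array([[1.0], [1.0], [2.0], [2.0]]),
                   "stir": np.array([[2.0], [3.0], [3.0], [2.0]])}
      query = np.array([[1.0], [2.0], [2.0]])
      print(recognize_action(query, templates))   # -> "pour"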

  14. Extending NGOMSL Model for Human-Humanoid Robot Interaction in the Soccer Robotics Domain

    Directory of Open Access Journals (Sweden)

    Rajesh Elara Mohan

    2008-01-01

    Full Text Available In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have been exhibiting a stronger presence in commercial markets, yet very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s. Also, the interaction design was able to detect the fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior version of humanoid robots in the RoboCup 2007 humanoid soccer league.

  15. Robot Tracking of Human Subjects in Field Environments

    Science.gov (United States)

    Graham, Jeffrey; Shillcutt, Kimberly

    2003-01-01

    Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.

  16. Human-like Compliance for Dexterous Robot Hands

    Science.gov (United States)

    Jau, Bruno M.

    1995-01-01

    This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the secondary function of the human muscle, which is to control the joint's stiffness: AEC is implemented by servo controlling the stiffness of the joint drive train. The control strategy for compliant joints in teleoperation is described. It enables automatic hybrid position and force control by utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable to autonomous robot control as well. Active compliance enables dual-arm manipulation and human-like soft grasping by the robot hand, and opens the way to many new robotics applications.
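
    A minimal sketch of a programmable joint stiffness is given below; it is a generic impedance-style joint servo, not the actual AEC drive-train implementation, and all names and gains are illustrative assumptions.

      def compliant_joint_torque(q, dq, q_des, stiffness, damping, tau_ext=0.0):
          """Impedance-style joint command: behave like an adjustable spring-damper.
          q, dq     : measured joint position [rad] and velocity [rad/s]
          q_des     : commanded position setpoint [rad]
          stiffness : programmable joint stiffness [Nm/rad] (the 'compliance' knob)
          damping   : damping gain [Nm s/rad]
          tau_ext   : torque estimated from the compliance/force sensor [Nm]
          """
          # Soft grasp: low stiffness lets external forces deflect the joint,
          # high stiffness makes it track the setpoint rigidly.
          return stiffness * (q_des - q) - damping * dq + tau_ext

      # Example: stiff positioning vs. soft grasping with the same setpoint error.
      print(compliant_joint_torque(q=0.9, dq=0.0, q_des=1.0, stiffness=200.0, damping=5.0))
      print(compliant_joint_torque(q=0.9, dq=0.0, q_des=1.0, stiffness=10.0,  damping=1.0))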

  17. Cognitive neuroscience robotics A: Synthetic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume A describes how human cognitive functions can be replicated in artificial systems such as robots, and investigates how artificial systems could acquire intelligent behaviors through interaction with others and their environment.

  18. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  19. Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.

    Science.gov (United States)

    Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J

    2017-07-01

    Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. In this paper, we present metrics that can be used to identify, from kinematics, whether a physical interaction occurred between two people. We present a simple Activity of Daily Living (ADL) task that involves such an interaction. We show that these metrics can be used to successfully identify interactions.

  20. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  1. Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.

    Science.gov (United States)

    Hongbo Wang; Kosuge, K

    2012-01-01

    Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment, and the reasons for the failure are explained; the latter controller is validated by the experimental results.
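
    A generic extended Kalman filter cycle of the kind referred to above is sketched below; the state, process and measurement models in the toy usage are placeholders and do not reproduce the paper's identified inverted pendulum dynamics.

      import numpy as np

      def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
          """One predict/update cycle of an extended Kalman filter.
          x, P : state estimate and covariance
          u, z : control input and measurement (e.g., laser range to the leader)
          f, h : process and measurement models; F_jac, H_jac their Jacobians
          Q, R : process and measurement noise covariances
          """
          # Predict
          x_pred = f(x, u)
          F = F_jac(x, u)
          P_pred = F @ P @ F.T + Q
          # Update
          H = H_jac(x_pred)
          y = z - h(x_pred)                        # innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy usage: 1-D leader position/velocity with a range-only measurement.
      dt = 0.02
      f     = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u])
      F_jac = lambda x, u: np.array([[1.0, dt], [0.0, 1.0]])
      h     = lambda x: np.array([x[0]])
      H_jac = lambda x: np.array([[1.0, 0.0]])
      x, P = np.zeros(2), np.eye(2)
      x, P = ekf_step(x, P, u=0.1, z=np.array([0.05]), f=f, F_jac=F_jac, h=h,
                      H_jac=H_jac, Q=0.01 * np.eye(2), R=np.array([[0.05]]))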

  2. Group Tasks, Activities, Dynamics, and Interactions in Collaborative Robotics Projects with Elementary and Middle School Children

    Science.gov (United States)

    Yuen, Timothy T.; Boecking, Melanie; Stone, Jennifer; Tiger, Erin Price; Gomez, Alvaro; Guillen, Adrienne; Arreguin, Analisa

    2014-01-01

    Robotics provides the opportunity for students to bring their individual interests, perspectives and areas of expertise together in order to work collaboratively on real-world science, technology, engineering and mathematics (STEM) problems. This paper examines the nature of collaboration that manifests in groups of elementary and middle school…

  3. Sex Robots: Between Human and Artificial

    OpenAIRE

    Richardson, Kathleen

    2017-01-01

    Despite a surplus of human beings in the world (new estimates total seven and a half billion), we appear to be at the start of an attachment crisis - a crisis in how human beings make intimate relationships. Enter the sex robots, built out of the bodies of sex dolls to help humans, particularly males, escape their inability to connect. What does the rise of sex robots tell us about the way that women and girls are imagined: are they persons or property? And to what extent is porn, prostitution and ...

  4. Cognitive Tools for Humanoid Robots in Space

    National Research Council Canada - National Science Library

    Sofge, Donald; Perzanowski, Dennis; Skubic, Marjorie; Bugajska, Magdalena; Trafton, J. G; Cassimatis, Nicholas; Brock, Derek; Adams, William; Schultz, Alan

    2004-01-01

    ...) to collaborate with a human. The capabilities required of the robot include voice recognition, natural language understanding, gesture recognition, spatial reasoning, and cognitive modeling with perspective-taking...

  5. Adaptive Human-Aware Robot Navigation in Close Proximity to Humans

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2011-01-01

    For robots to be able to coexist with people in future everyday human environments, they must be able to act in a safe, natural and comfortable way. This work addresses the motion of a mobile robot in an environment where humans potentially want to interact with it. The designed system consists...... system that uses a potential field to derive motion that respects the person's social zones and perceived interest in interaction. The operation of the system is evaluated in a controlled scenario in an open hall environment. It is demonstrated that the robot is able to learn to estimate if a person...... wishes to interact, and that the system is capable of adapting to changing behaviours of the humans in the environment....
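
    As a rough illustration of a person-centred potential field of the kind mentioned above, the sketch below combines a repulsive term around the person with an attraction that grows with the estimated interest in interaction; the functional forms, gains and names are assumptions, not the published adaptive system.

      import numpy as np

      def social_force(robot_xy, person_xy, person_heading, interest, k_rep=2.0, k_att=1.0):
          """Gradient of a simple person-centred potential field (a sketch only).
          A repulsive term keeps the robot out of the person's personal zone; an
          attractive term toward a point in front of the person grows with the
          estimated interest in interaction (0..1)."""
          to_robot = robot_xy - person_xy
          dist = np.linalg.norm(to_robot) + 1e-6
          repulsion = k_rep * to_robot / dist**3            # falls off with distance
          approach_point = person_xy + 1.2 * np.array([np.cos(person_heading),
                                                       np.sin(person_heading)])
          attraction = k_att * interest * (approach_point - robot_xy)
          return repulsion + attraction                     # desired velocity direction

      # Person at origin facing +x; a robot 3 m away approaches only if interest is high.
      print(social_force(np.array([3.0, 0.0]), np.zeros(2), 0.0, interest=0.9))
      print(social_force(np.array([3.0, 0.0]), np.zeros(2), 0.0, interest=0.0))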

  6. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    OpenAIRE

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relatio...

  7. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    Science.gov (United States)

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots should balance tactile engagement against emotional manipulation, model intimacy on bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.

  8. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  9. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    Science.gov (United States)

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: (1) substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than, but still largely similar to, agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  10. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  11. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent and of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-lookin...

  12. Using Empathy to Improve Human-Robot Relationships

    Science.gov (United States)

    Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana

    For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion, expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, being empathic toward one of them and neutral toward the other. The results of a pilot study suggest that users to whom the robot was empathic perceived the robot more as a friend.

  13. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

    “The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state-of-the-art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.

  14. Investigation of the Impedance Characteristic of Human Arm for Development of Robots to Cooperate with Humans

    Science.gov (United States)

    Rahman, Md. Mozasser; Ikeura, Ryojun; Mizutani, Kazuki

    In the near future many aspects of our lives will be encompassed by tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, etc. will be indispensable. As a result, robots need to be made human-friendly and to execute tasks in cooperation with humans. Control systems for such robots should be designed to work by imitating human characteristics. In this study, we have tried to achieve these goals by controlling a simple one degree-of-freedom cooperative robot. Firstly, the impedance characteristic of the human arm in a cooperative task is investigated. Then, this characteristic is implemented to control a robot in order to perform a cooperative task with humans. A human followed the motion of an object, which was moved through desired trajectories. The motion was actuated by the linear motor of the one degree-of-freedom robot system. The trajectories used in the experiments were minimum-jerk trajectories (jerk being the rate of change of acceleration), which were observed during human-human cooperative tasks and are optimal for muscle movement. As the muscle is mechanically analogous to a spring-damper system, a simple second-order equation is used as a model of the arm dynamics. In the model, we considered mass, stiffness and damping. The impedance parameters are calculated from the position and force data obtained in the experiments, based on the estimation of a parametric model. The investigated impedance characteristic of the human arm is then implemented to control a robot, which performed a cooperative task with a human. It is observed that the proposed control methodology gives the robot human-like movements when cooperating with a human.
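
    The parametric estimation step can be illustrated with a small least-squares fit of the second-order arm model; the function below and its synthetic data are only a sketch of the general approach, not the authors' estimation procedure, and all names are illustrative.

      import numpy as np

      def estimate_arm_impedance(x, dx, ddx, f):
          """Least-squares fit of the second-order arm model  f = M*ddx + B*dx + K*x
          from sampled position, velocity, acceleration and interaction force.
          Returns (M, B, K)."""
          A = np.column_stack([ddx, dx, x])          # regressor matrix
          params, *_ = np.linalg.lstsq(A, f, rcond=None)
          return tuple(params)

      # Synthetic check: data generated from known parameters is recovered closely.
      t = np.linspace(0, 2, 400)
      x = 0.05 * np.sin(2 * np.pi * t)               # smooth, minimum-jerk-like stand-in
      dx = np.gradient(x, t)
      ddx = np.gradient(dx, t)
      M_true, B_true, K_true = 1.5, 8.0, 120.0
      f = M_true * ddx + B_true * dx + K_true * x
      print(estimate_arm_impedance(x, dx, ddx, f))   # approximately (1.5, 8.0, 120.0)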

  15. Interaction debugging: an integral approach to analyze human-robot interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2006-01-01

    Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction for evaluation purposes fosters the development of improved systems and the

  16. Robots as Imagined in the Television Series Humans.

    Science.gov (United States)

    Wicclair, Mark R

    2018-07-01

    Humans is a science fiction television series set in what appears to be present-day London. What makes it science fiction is that in London and worldwide, there are robots that look like humans and can mimic human behavior. The series raises several important ethical and philosophical questions about artificial intelligence and robotics, which should be of interest to bioethicists.

  17. A Review of Extra-Terrestrial Mining Robot Concepts

    Science.gov (United States)

    Mueller, Robert P.; Van Susante, Paul J.

    2011-01-01

    Outer space contains a vast amount of resources that offer virtually unlimited wealth to the humans that can access and use them for commercial purposes. One of the key technologies for harvesting these resources is robotic mining of regolith, minerals, ices and metals. The harsh environment and vast distances create challenges that are handled best by robotic machines working in collaboration with human explorers. Humans will benefit from the resources that will be mined by robots. They will visit outposts and mining camps as required for exploration, commerce and scientific research, but a continuous presence is most likely to be provided by robotic mining machines that are remotely controlled by humans. There have been a variety of extra-terrestrial robotic mining concepts proposed over the last 100 years and this paper will attempt to summarize and review concepts in the public domain (government, industry and academia) to serve as an informational resource for future mining robot developers and operators. The challenges associated with these concepts will be discussed and feasibility will be assessed. Future needs associated with commercial efforts will also be investigated.

  18. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  19. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  20. Human-Like Room Segmentation for Domestic Cleaning Robots

    Directory of Open Access Journals (Sweden)

    David Fleer

    2017-11-01

    Full Text Available Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room–border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
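
    The two-step idea of classifying border edges and then partitioning the map can be sketched as follows; for brevity, the paper's graph-clustering step is replaced here by cutting high-scoring edges and taking connected components, and the toy 'classifier' is a stand-in for the sensor-based model trained on user annotations.

      from collections import defaultdict, deque

      def segment_rooms(nodes, edges, border_score, threshold=0.5):
          """Split a topo-metric map graph into rooms.
          nodes        : iterable of node ids
          edges        : list of (u, v) pairs
          border_score : callable (u, v) -> probability the edge crosses a room border
          """
          adjacency = defaultdict(list)
          for u, v in edges:
              if border_score(u, v) < threshold:     # keep only within-room edges
                  adjacency[u].append(v)
                  adjacency[v].append(u)
          rooms, seen = [], set()
          for start in nodes:                        # BFS over the cut graph
              if start in seen:
                  continue
              component, queue = set(), deque([start])
              while queue:
                  node = queue.popleft()
                  if node in component:
                      continue
                  component.add(node)
                  queue.extend(adjacency[node])
              seen |= component
              rooms.append(component)
          return rooms

      # Toy map: nodes 0-3, with the 1-2 edge flagged as a doorway by the 'classifier'.
      print(segment_rooms(range(4), [(0, 1), (1, 2), (2, 3)],
                          border_score=lambda u, v: 0.9 if (u, v) == (1, 2) else 0.1))
      # -> [{0, 1}, {2, 3}]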

  1. Human-robot interaction assessment using dynamic engagement profiles

    DEFF Research Database (Denmark)

    Drimus, Alin; Poltorak, Nicole

    2017-01-01

    This paper addresses the use of convolutional neural networks for image analysis, resulting in an engagement metric that can be used to assess the quality of human-robot interactions. We propose a method based on a pretrained convolutional network able to map emotions onto a continuous [0-1] interval, where 0 represents disengaged and 1 fully engaged. The network shows good accuracy at recognizing the engagement state of humans given positive emotions. A time-based analysis of interaction experiments between small humanoid robots and humans provides time series of engagement estimates, which...... and is applicable to humanoid robotics as well as other related contexts.
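
    A very rough sketch of mapping per-frame emotion estimates to a smoothed engagement profile in [0, 1] is given below; the emotion classes, weights and smoothing are illustrative assumptions, since the paper's pretrained network and its mapping are not reproduced here.

      import numpy as np

      # Hypothetical positive/negative weights per emotion class (assumption, not the paper's mapping).
      EMOTION_WEIGHTS = {"happy": 1.0, "surprised": 0.8, "neutral": 0.5,
                         "sad": 0.2, "angry": 0.1, "disgusted": 0.0}

      def engagement_score(emotion_probs):
          """Map a dict of per-frame emotion probabilities to an engagement value in [0, 1]."""
          total = sum(emotion_probs.values()) or 1.0
          return sum(EMOTION_WEIGHTS.get(e, 0.5) * p for e, p in emotion_probs.items()) / total

      def engagement_profile(per_frame_probs, window=5):
          """Time series of engagement estimates, smoothed with a moving average."""
          raw = np.array([engagement_score(p) for p in per_frame_probs])
          kernel = np.ones(window) / window
          return np.convolve(raw, kernel, mode="same")

      frames = [{"happy": 0.7, "neutral": 0.3}, {"neutral": 0.9, "sad": 0.1}] * 5
      print(engagement_profile(frames))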

  2. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to simply teaching at random. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  3. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Joachim de Greeff

    Full Text Available Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to simply teaching at random. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  4. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.

    Science.gov (United States)

    Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi

    2018-03-26

    For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instructions from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent obtains intrinsic-motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement. Copyright © 2018 Elsevier Ltd. All rights reserved.
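
    The core of such a framework, an intrinsic reward derived from the prediction error of an action-conditional forward model, can be sketched as follows; the model, names and reward scaling are generic placeholders rather than the authors' architecture.

      import numpy as np

      def intrinsic_reward(predict_next, state, action, next_state, scale=1.0):
          """Curiosity-style reward from an action-conditional predictive model:
          the worse the model predicts the observed next state, the larger the bonus."""
          error = np.linalg.norm(next_state - predict_next(state, action))
          return scale * error

      # Toy usage with a linear 'forward model' that ignores the action.
      predict_next = lambda s, a: 0.9 * s
      s, a, s_next = np.array([1.0, 0.0]), 1, np.array([0.8, 0.3])
      r_int = intrinsic_reward(predict_next, s, a, s_next)
      # The learning signal may mix intrinsic and (sparse) extrinsic reward.
      r_total = r_int + 0.0
      print(r_total)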

  5. Artificial companions: empathy and vulnerability mirroring in human-robot relations

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2010-01-01

    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends (among other things) on the robot’s capacity to be a recipient of human empathy, and

  6. Modeling Mixed Groups of Humans and Robots with Reflexive Game Theory

    Science.gov (United States)

    Tarasenko, Sergey

    The Reflexive Game Theory is based on decision-making principles similar to the ones used by humans. This theory considers groups of subjects and allows one to predict which action from a given set each subject in the group will choose. It is possible to influence a subject's decision so that he or she makes a particular choice. The purpose of this study is to illustrate how robots can keep humans from risky actions. To determine the risky actions, Asimov's Three Laws of robotics are employed. By fusing the RGT's power to convince humans on the mental level with the safety of Asimov's Laws, we illustrate how robots in mixed groups of humans and robots can influence human subjects in order to keep them from risky actions. We suggest that this fusion has the potential to produce robots with human-like motor behavior and appearance driven by human-like decision-making algorithms.

  7. Cognitive neuroscience robotics B: Analytic approaches to human understanding

    CERN Document Server

    Ishiguro, Hiroshi; Asada, Minoru; Osaka, Mariko; Fujikado, Takashi

    2016-01-01

    Cognitive Neuroscience Robotics is the first introductory book on this new interdisciplinary area. This book consists of two volumes, the first of which, Synthetic Approaches to Human Understanding, advances human understanding from a robotics or engineering point of view. The second, Analytic Approaches to Human Understanding, addresses related subjects in cognitive science and neuroscience. These two volumes are intended to complement each other in order to more comprehensively investigate human cognitive functions, to develop human-friendly information and robot technology (IRT) systems, and to understand what kind of beings we humans are. Volume B describes to what extent cognitive science and neuroscience have revealed the underlying mechanism of human cognition, and investigates how development of neural engineering and advances in other disciplines could lead to deep understanding of human cognition.

  8. Towards multi-platform software architecture for Collaborative Teleoperation

    Science.gov (United States)

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik

    2009-03-01

    Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer and easier with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible for them to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different Mobile platforms to control one or more robots.

  9. Towards multi-platform software architecture for Collaborative Teleoperation

    International Nuclear Information System (INIS)

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik

    2009-01-01

    Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer and easier with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible for them to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different Mobile platforms to control one or more robots.

  10. Human guidance of mobile robots in complex 3D environments using smart glasses

    Science.gov (United States)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass∗, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.

  11. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

    Full Text Available We conducted a user study for which we purposefully programmed faulty behavior into a robot’s routine. It was our aim to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors where we detected typical error situations and the resulting social signals of our participants during social human–robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the herein reported user study, we purposefully elicited robot errors to further explore the human interaction partners’ social signals following a robot error. Our participants interacted with a human-like NAO, and the robot performed either faultily or free from error. First, the robot asked the participants a set of predefined questions and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot’s anthropomorphism, likability, and perceived intelligence. We also interviewed the participants on their opinion about the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people’s ratings of the robot’s anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot’s mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior.

  12. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    Science.gov (United States)

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help in future to decide among the candidate concepts and to improve the design of humanoid robots and robotic rehabilitation devices.
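
    The decoupling idea can be illustrated with textbook modal analysis: eigenvectors of the linearized dynamics define coordinates in which the coupled hip-ankle equations separate into independent scalar equations. The matrices and names below are illustrative assumptions and do not reproduce the robot's identified dynamics or the published controller.

      import numpy as np

      def eigenmovement_basis(M, K):
          """Modal (eigenmovement-like) decomposition of linearized two-link dynamics
          M*ddq + K*q = u.  Eigenvectors of M^-1 K give coordinates in which the
          coupled equations become independent scalar equations."""
          vals, vecs = np.linalg.eig(np.linalg.inv(M) @ K)
          return vals, vecs      # vals: squared modal frequencies, vecs: eigenmovements

      # Toy mass and stiffness matrices with hip-ankle coupling in the off-diagonals.
      M = np.array([[2.0, 0.3], [0.3, 1.0]])
      K = np.array([[30.0, 5.0], [5.0, 20.0]])
      vals, vecs = eigenmovement_basis(M, K)
      # Controlling each modal coordinate z_i (with q = vecs @ z) avoids the coupling terms.
      print(vals)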

  13. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  14. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    OpenAIRE

    BIROUAS Flaviu Ionut; NILGESZ Arnold

    2016-01-01

    This paper presents research with application in the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  15. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.; Demircan, E.; De Sapio, V.; Sentis, L.; Besier, T.; Delp, S.

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
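
    A common way to drive a simulated model toward captured markers, loosely related to the reconstruction described above, is a task-space spring-damper mapped to joint torques through the marker Jacobian; the sketch below is generic and much simpler than the paper's operational space formulation, and all names and gains are illustrative.

      import numpy as np

      def marker_tracking_torque(q, dq, jacobian, model_markers, captured_markers,
                                 kp=400.0, kd=40.0):
          """Jacobian-transpose servo pulling the model's marker positions toward
          the captured marker trajectory at one frame.
          jacobian(q)       : stacked marker Jacobian, shape (3*n_markers, n_joints)
          model_markers(q)  : forward kinematics to marker positions, shape (3*n_markers,)
          captured_markers  : measured marker positions at this frame, same shape
          """
          J = jacobian(q)
          err = captured_markers - model_markers(q)
          # Task-space spring-damper force mapped to joint torques through J^T.
          force = kp * err - kd * (J @ dq)
          return J.T @ force

      # Toy usage: one marker, two joints, trivial kinematics.
      tau = marker_tracking_torque(q=np.zeros(2), dq=np.zeros(2),
                                   jacobian=lambda q: np.eye(3)[:, :2],
                                   model_markers=lambda q: np.zeros(3),
                                   captured_markers=np.array([0.01, 0.0, 0.0]))
      print(tau)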

  16. Robotics-based synthesis of human motion

    KAUST Repository

    Khatib, O.

    2009-05-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  17. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in recent years by our own university and many other national and international research institutions and addresses how increasingly human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  18. Robotic Billiards: Understanding Humans in Order to Counter Them.

    Science.gov (United States)

    Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra

    2016-08-01

    Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robotic-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human behavior and include it in its own planning framework. Then it can predict the next human move and counter it accordingly, thus not only achieving overall better performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework where not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of the human decision-making based on studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style into its tactical considerations when planning a stroke. The results are validated in simulations and real-life experiments with an anthropomorphic robot playing pool against a human.

  19. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons across the measured performance metrics (e.g., human-robot collision), and the differences in the lexicons are analyzed.
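
    As a rough illustration of the lexicon-selection step described above, the sketch below scores a handful of candidate multimodal lexicons by a weighted sum of performance metrics and picks the best one. The candidate lexicons, metric values, and weights are invented placeholders; the optimization in the paper is considerably richer and draws its metrics from simulation and real-world experiments.

      # Toy sketch of choosing a multimodal lexicon that maximizes several metrics.
      # All lexicons, metric values, and weights below are invented placeholders.
      candidates = {
          ("speech",): {"speed": 0.6, "accuracy": 0.80, "safety": 0.90},
          ("gesture",): {"speed": 0.7, "accuracy": 0.60, "safety": 0.80},
          ("speech", "gesture"): {"speed": 0.8, "accuracy": 0.85, "safety": 0.85},
          ("speech", "gesture", "gaze"): {"speed": 0.75, "accuracy": 0.90, "safety": 0.90},
      }
      weights = {"speed": 0.3, "accuracy": 0.4, "safety": 0.3}

      def score(metrics):
          # Weighted-sum aggregation of the per-lexicon performance metrics.
          return sum(weights[m] * v for m, v in metrics.items())

      best = max(candidates, key=lambda lexicon: score(candidates[lexicon]))
      print(best, round(score(candidates[best]), 3))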

  20. Simplified Human-Robot Interaction: Modeling and Evaluation

    Directory of Open Access Journals (Sweden)

    Balazs Daniel

    2013-10-01

    Full Text Available In this paper a novel concept of human-robot interaction (HRI) modeling is proposed. Including factors like trust in automation, situational awareness, expertise, and expectations, a new user experience framework is formed for industrial robots. Service Oriented Robot Operation, proposed in a previous paper, creates an abstraction level in HRI and is also included in the framework. This concept is evaluated with exhaustive tests. Results prove that a significant improvement in task execution may be achieved and that the new system is more usable for operators with less robotics experience, i.e., the personnel typical of small and medium enterprises (SMEs).

  1. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2017-04-01

    Full Text Available Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help in the future to decide among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  2. HUMAN HAND STUDY FOR ROBOTIC EXOSKELETON DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    BIROUAS Flaviu Ionut

    2016-11-01

    Full Text Available This paper presents research with application to the rehabilitation of hand motor functions with the aid of robotics. The focus is on the dimensional parameters of the biological human hand from which the robotic system will be developed. The term used for such measurements is known as anthropometrics. The anthropometric parameters studied and presented in this paper are mainly related to the angular limitations of the finger joints of the human hand.

  3. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot aimed at human daily work activities in factories or under extreme environments is required to work in usual human work environments, so a humanoid robot with size, shape and functions similar to those of a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with human beings, can walk on two legs by autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (real-time remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  4. Advancing the Strategic Messages Affecting Robot Trust Effect: The Dynamic of User- and Robot-Generated Content on Human-Robot Trust and Interaction Outcomes.

    Science.gov (United States)

    Liang, Yuhua Jake; Lee, Seungcheol Austin

    2016-09-01

    Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, the trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people received before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by affording the persuasive influences from user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.

  5. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children’s social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a “mental model” of the robot, tailoring the tutoring to the robot’s performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot’s bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better-quality learning input to artificial systems, resulting in improved learning performance. PMID:26422143

  6. ROBOT LEARNING OF OBJECT MANIPULATION TASK ACTIONS FROM HUMAN DEMONSTRATIONS

    Directory of Open Access Journals (Sweden)

    Maria Kyrarini

    2017-08-01

    Full Text Available Robot learning from demonstration is a method which enables robots to learn in a similar way to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories necessary to be followed by the robot to perform the object manipulation task. The multiple human demonstrations are recorded and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), the model of the demonstrated trajectories is learned. The learned trajectory is generated by Gaussian mixture regression (GMR) from the learned Gaussian mixture model. In the online working phase, the sequence of actions is identified and experimental results show that the robot performs the learned task successfully.
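
    The GMM/GMR step described above can be sketched compactly. Assuming the demonstrations have already been time-aligned (the DTW step), a Gaussian mixture is fitted on joint (time, position) samples and the reproduced trajectory is obtained by regressing position on time. The component count, synthetic data, and variable names are illustrative assumptions, not the paper's implementation.

      # Sketch of GMM/GMR trajectory learning from multiple (time-aligned) demonstrations.
      # DTW alignment is assumed to have been applied already; the data here are synthetic.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic "demonstrations": noisy copies of a reference 1-D trajectory.
      t = np.linspace(0.0, 1.0, 100)
      demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(5)]
      data = np.column_stack([np.tile(t, len(demos)),
                              np.concatenate(demos)])     # rows: [time, position]

      gmm = GaussianMixture(n_components=6, covariance_type='full').fit(data)

      def gmr(t_query):
          """Gaussian mixture regression: E[position | time = t_query]."""
          means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
          h = np.zeros(len(weights))
          cond = np.zeros(len(weights))
          for k in range(len(weights)):
              mu_t, mu_x = means[k]
              s_tt, s_tx = covs[k][0, 0], covs[k][0, 1]
              # responsibility of component k for this time instant
              h[k] = weights[k] * np.exp(-0.5 * (t_query - mu_t) ** 2 / s_tt) / np.sqrt(2 * np.pi * s_tt)
              # conditional mean of position given time, for component k
              cond[k] = mu_x + s_tx / s_tt * (t_query - mu_t)
          h /= h.sum()
          return float(h @ cond)

      learned_trajectory = [gmr(ti) for ti in t]   # trajectory reproduced for the robot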

  7. Dynamic perceptions of human-likeness while interacting with a social robot

    NARCIS (Netherlands)

    Ruijten, P.A.M.; Cuijpers, R.H.

    2017-01-01

    In human-robot interaction research, much attention is given to the development of socially assistive robots that can have natural interactions with their users. One crucial aspect of such natural interactions is that the robot is perceived as human-like. Much research already exists that

  8. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
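
    A gaze-contingent response policy of the kind used in this system can be sketched as a simple dwell-time rule: once the tracker reports that the user has fixated the robot's face for long enough, the robot looks back to establish mutual gaze, and otherwise returns its gaze to the shared task. The dwell threshold, frame rate, and scripted tracker output below are assumptions, not the authors' implementation.

      # Minimal sketch of a gaze-contingent look-back rule driven by per-frame gaze input.
      DWELL_FRAMES = 6    # ~0.2 s at 30 Hz (assumed)

      class GazeContingentController:
          def __init__(self):
              self.fixation_frames = 0

          def update(self, user_looks_at_robot_face):
              """Called once per tracker frame; returns the robot's gaze target."""
              if user_looks_at_robot_face:
                  self.fixation_frames += 1
                  if self.fixation_frames >= DWELL_FRAMES:
                      return "user_face"       # establish mutual gaze / eye contact
              else:
                  self.fixation_frames = 0
              return "task_object"             # default: attend to the joint task

      controller = GazeContingentController()
      scripted_gaze = [False] * 5 + [True] * 10 + [False] * 5   # synthetic tracker output
      print([controller.update(g) for g in scripted_gaze])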

  9. Human-like robots for space and hazardous environments

    Science.gov (United States)

    1994-01-01

    The three-year goal for the Kansas State USRA/NASA Senior Design team is to design and build a walking autonomous robotic rover. The rover should be capable of crossing rough terrain, traversing human-made obstacles (such as stairs and doors), and moving through human- and robot-occupied spaces without collision. The rover is also to demonstrate considerable decision-making ability, navigation, and path-planning skills.

  10. Human Exploration using Real-Time Robotic Operations (HERRO): A space exploration strategy for the 21st century

    Science.gov (United States)

    Schmidt, George R.; Landis, Geoffrey A.; Oleson, Steven R.

    2012-11-01

    This paper presents an exploration strategy for human missions beyond Low Earth Orbit (LEO) and the Moon that combines the best features of human and robotic spaceflight. This "Human Exploration using Real-time Robotic Operations" (HERRO) strategy refrains from placing humans on the surfaces of the Moon and Mars in the near-term. Rather, it focuses on sending piloted spacecraft and crews into orbit around Mars and other exploration targets of interest, and conducting astronaut exploration of the surfaces using telerobots and remotely-controlled systems. By eliminating the significant communications delay or "latency" with Earth due to the speed of light limit, teleoperation provides scientists real-time control of rovers and other sophisticated instruments. This in effect gives them a "virtual presence" on planetary surfaces, and thus expands the scientific return at these destinations. HERRO mitigates several of the major issues that have hindered the progress of human spaceflight beyond Low Earth Orbit (LEO) by: (1) broadening the range of destinations for near-term human missions; (2) reducing cost and risk through less complexity and fewer man-rated elements; (3) offering benefits of human-equivalent in-situ cognition, decision-making and field-work on planetary bodies; (4) providing a simpler approach to returning samples from Mars and planetary surfaces; and (5) facilitating opportunities for international collaboration through contribution of diverse robotic systems. HERRO provides a firm justification for human spaceflight—one that expands the near-term capabilities of scientific exploration while providing the space transportation infrastructure needed for eventual human landings in the future.

  11. Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback

    Directory of Open Access Journals (Sweden)

    Cetin Mericli

    2011-06-01

    Full Text Available A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder and gets more tedious and error prone as the complexity of the task increases. In this paper, we present a new learning from demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over the control to correct the behavior when the robot selects a wrong action to be executed. Corrections are captured as new state-action pairs and the default controller output is replaced by the demonstrated corrections during autonomous execution when the current state of the robot is decided to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball dribbling task performed against stationary defender robots in a robot soccer scenario, where physical Aldebaran Nao humanoid robots are used. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
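
    The complementary corrective feedback mechanism can be sketched as a nearest-neighbor lookup in a correction database that overrides a hand-coded default policy whenever the current state is sufficiently close to a previously corrected state. The state features, distance threshold, and action labels below are illustrative assumptions, not the paper's implementation.

      # Sketch of complementing a hand-coded controller with corrective demonstrations:
      # if the current state is close to a previously corrected state, replay the
      # demonstrated action; otherwise fall back to the default controller.
      import numpy as np

      corrections = []                       # list of (state, corrected_action) pairs

      def hand_coded_controller(state):
          return "dribble_forward"           # placeholder default policy (assumed)

      def record_correction(state, action):
          corrections.append((np.asarray(state, dtype=float), action))

      def select_action(state, threshold=0.2):
          state = np.asarray(state, dtype=float)
          if corrections:
              dists = [np.linalg.norm(state - s) for s, _ in corrections]
              i = int(np.argmin(dists))
              if dists[i] < threshold:       # similar to a corrected state -> use the demo
                  return corrections[i][1]
          return hand_coded_controller(state)

      # The teacher corrects the behavior near an opponent on the left side.
      record_correction([0.5, -0.3], "turn_right")
      print(select_action([0.48, -0.28]))    # -> "turn_right"
      print(select_action([2.0, 1.0]))       # -> "dribble_forward"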

  12. Human-Robot Site Survey and Sampling for Space Exploration

    Science.gov (United States)

    Fong, Terrence; Bualat, Maria; Edwards, Laurence; Flueckiger, Lorenzo; Kunz, Clayton; Lee, Susan Y.; Park, Eric; To, Vinh; Utz, Hans; Ackner, Nir

    2006-01-01

    NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.

  13. Investigation of human-robot interface performance in household environments

    Science.gov (United States)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  14. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    Science.gov (United States)

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  15. [Human-robot global Simulink modeling and analysis for an end-effector upper limb rehabilitation robot].

    Science.gov (United States)

    Liu, Yali; Ji, Linhong

    2018-02-01

    Robot rehabilitation has been a primary therapy method for the urgent rehabilitation demands of paralyzed patients after a stroke. The parameters in rehabilitation training, such as the range of the training, which should be adjustable according to each participant's functional ability, are the key factors influencing the effectiveness of rehabilitation therapy. Therapists design rehabilitation projects based on semiquantitative functional assessment scales and their experience. But these therapies based on therapists' experience cannot be implemented in robot rehabilitation therapy. This paper modeled the global human-robot system in Simulink in order to analyze the relationship between the parameters in robot rehabilitation therapy and the patients' movement functional abilities. We compared the shoulder and elbow angles calculated by simulation with the angles recorded by a motion capture system while the healthy subjects completed the simulated action. Results showed there was a remarkable correlation between the simulation data and the experiment data, which verified the validity of the human-robot global Simulink model. Besides, the relationship between the circle radius in the drawing tasks in robot rehabilitation training and the active movement degrees of the shoulder as well as the elbow was also fitted by a linear relation, which had a remarkable fitting coefficient. The fitted linear relation can serve as a quantitative reference for the robot rehabilitation training parameters.
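
    The reported linear relation between the drawing-task circle radius and the active range of motion can in principle be reproduced with an ordinary least-squares fit, as in the sketch below. The radii and range-of-motion values are synthetic placeholders; only the fitting procedure is illustrated.

      # Sketch of fitting the linear relation between the drawing-task circle radius
      # and the active range of shoulder/elbow motion. Data are synthetic placeholders.
      import numpy as np

      radius_cm = np.array([5, 10, 15, 20, 25], dtype=float)        # assumed task radii
      active_rom_deg = np.array([18, 33, 49, 66, 80], dtype=float)  # assumed measured ROM

      slope, intercept = np.polyfit(radius_cm, active_rom_deg, deg=1)
      predicted = slope * radius_cm + intercept
      r = np.corrcoef(active_rom_deg, predicted)[0, 1]              # fitting coefficient

      print(f"ROM ~ {slope:.2f} * radius + {intercept:.2f}, r = {r:.3f}")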

  16. Modeling and Simulation for Exploring Human-Robot Team Interaction Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dudenhoeffer, Donald Dean; Bruemmer, David Jonathon; Davis, Midge Lee

    2001-12-01

    Small-sized and micro-robots will soon be available for deployment in large-scale forces. Consequently, the ability of a human operator to coordinate and interact with large-scale robotic forces is of great interest. This paper describes the ways in which modeling and simulation have been used to explore new possibilities for human-robot interaction. The paper also discusses how these explorations have fed the implementation of a unified set of command and control concepts for robotic force deployment. Modeling and simulation can play a major role in fielding robot teams in actual missions. While live testing is preferred, limitations in terms of technology, cost, and time often prohibit extensive experimentation with physical multi-robot systems. Simulation provides insight, focuses efforts, eliminates large areas of the possible solution space, and increases the quality of actual testing.

  17. A meta-analysis of factors affecting trust in human-robot interaction.

    Science.gov (United States)

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

    We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with a particular evaluation of the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
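
    To relate the two kinds of effect sizes reported above, a standard conversion between a correlation r and Cohen's d can be used, as sketched below. This is shown only to give a feel for the magnitudes; the paper's d = +0.71 was computed from a separate set of experimental studies, so it is not expected to match the converted value.

      # Standard textbook conversions between a correlational effect size r and Cohen's d,
      # for orientation only; not the meta-analytic procedure used in the paper.
      import math

      def r_to_d(r):
          return 2 * r / math.sqrt(1 - r ** 2)

      def d_to_r(d):
          return d / math.sqrt(d ** 2 + 4)

      print(round(r_to_d(0.26), 2))   # ~0.54
      print(round(d_to_r(0.71), 2))   # ~0.33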

  18. Human-Robot Teams Informed by Human Performance Moderator Functions

    Science.gov (United States)

    2012-08-29

    performance factors that affect the ability of a human to drive at night, which includes the eyesight of the driver, the fatigue level of the driver...where human factors are factors that affect the performance of an individual. 7 for human interaction. For instance, they explain the various human... affecting trust in human-robot interaction. Human Factors 53(5), 517-527 (2001) 35. Hart, S. G. and Staveland, L. E. Development of NASA-TLX (Task

  19. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2010-01-01

    This paper presents a trajectory planning algorithm for a robot operating in dynamic human environments, such as pedestrian streets, hospital corridors and train stations. We formulate the problem as planning a minimal cost trajectory through a potential field, defined from...... is enhanced to direct the search and account for the kinodynamic robot constraints. Compared to standard RRT, the algorithm proposed here finds the robot control input that will drive the robot towards a new sampled point in the configuration space. The effect of the input is simulated, to add a reachable...
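
    Because the record above is truncated, only a generic sketch can be given of an RRT whose growth is biased by a potential-field cost around a person. The field shape, parameters, and the holonomic 2-D model below are assumptions and do not reflect the kinodynamic constraints handled in the paper.

      # Generic sketch of an RRT planner whose node selection is penalized by a
      # potential field around a person; not the algorithm from the paper.
      import math
      import random

      person = (2.0, 2.0)
      goal = (4.0, 4.0)
      step = 0.25

      def potential(p):
          # Higher cost close to the person (simple inverse-distance field, assumed).
          d = math.hypot(p[0] - person[0], p[1] - person[1])
          return 1.0 / max(d, 0.2)

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      nodes = [(0.0, 0.0)]
      parent = {0: None}

      for _ in range(3000):
          sample = (random.uniform(0, 5), random.uniform(0, 5))
          # Pick the existing node that trades off distance to the sample
          # against the potential-field cost of the region it sits in.
          scores = [dist(n, sample) + potential(n) for n in nodes]
          i = min(range(len(nodes)), key=lambda k: scores[k])
          near = nodes[i]
          d = dist(near, sample)
          new = (near[0] + step * (sample[0] - near[0]) / d,
                 near[1] + step * (sample[1] - near[1]) / d) if d > step else sample
          nodes.append(new)
          parent[len(nodes) - 1] = i
          if dist(new, goal) < step:
              break

      # Reconstruct the path from the last added node back to the start.
      path, j = [], len(nodes) - 1
      while j is not None:
          path.append(nodes[j])
          j = parent[j]
      print("path length (nodes):", len(path))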

  1. New trends in medical and service robots human centered analysis, control and design

    CERN Document Server

    Chevallereau, Christine; Pisla, Doina; Bleuler, Hannes; Rodić, Aleksandar

    2016-01-01

    Medical and service robotics integrates several disciplines and technologies such as mechanisms, mechatronics, biomechanics, humanoid robotics, exoskeletons, and anthropomorphic hands. This book presents the most recent advances in medical and service robotics, with a stress on human aspects. It collects the selected peer-reviewed papers of the Fourth International Workshop on Medical and Service Robots, held in Nantes, France in 2015, covering topics on: exoskeletons, anthropomorphic hands, therapeutic robots and rehabilitation, cognitive robots, humanoid and service robots, assistive robots and elderly assistance, surgical robots, human-robot interfaces, BMI and BCI, haptic devices and design for medical and assistive robotics. This book offers a valuable addition to existing literature.

  2. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with the human, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop an advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.

  3. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    Science.gov (United States)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  4. HUMAN FOLLOWING ON ROS FRAMEWORK A MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    Gigih Priyandoko

    2018-06-01

    Full Text Available Service mobile robots are playing an increasingly critical role in today's society as more people, such as persons with disabilities or the elderly, are in need of mobile robot assistance. An autonomous person-following ability is of great importance to the overall role of a service mobile robot in assisting humans. This paper focuses on developing a robot that follows a person. The robot is equipped with the necessary sensors, such as a Microsoft Kinect sensor and a Hokuyo laser sensor. Four suitable tracking methods are introduced in this project, which are implemented and tested in the person-following algorithm. The tracking methods implemented are face detection, leg detection, color detection and person blob detection. All of the algorithm implementations in this project are performed using the Robot Operating System (ROS). The results showed that the mobile robot could track and follow the target person based on the person's movement.
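
    A minimal ROS node for the person-following behavior might look like the sketch below: it subscribes to the tracked person's position and publishes velocity commands that keep the person centered at a fixed following distance. The topic names, the assumption that the position is expressed in the robot frame, and the gains are illustrative, not taken from the paper.

      #!/usr/bin/env python
      # Minimal rospy sketch of a person-following behavior.
      import rospy
      from geometry_msgs.msg import Twist, PointStamped

      FOLLOW_DIST = 1.0         # desired distance to the person [m] (assumed)
      K_LIN, K_ANG = 0.5, 1.2   # proportional gains (assumed)

      class PersonFollower(object):
          def __init__(self):
              self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
              rospy.Subscriber('/tracked_person', PointStamped, self.on_person)

          def on_person(self, msg):
              # Person position expressed in the robot frame (assumption).
              x, y = msg.point.x, msg.point.y
              cmd = Twist()
              cmd.linear.x = K_LIN * (x - FOLLOW_DIST)   # close the distance gap
              cmd.angular.z = K_ANG * y                  # turn to keep the person centered
              self.cmd_pub.publish(cmd)

      if __name__ == '__main__':
          rospy.init_node('person_follower')
          PersonFollower()
          rospy.spin()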

  5. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast both in industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot, and entirely new markets and use cases are becoming relevant for ...

  6. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  7. Evolving robot empathy towards humans with motor disabilities through artificial pain generation

    Directory of Open Access Journals (Sweden)

    Muh Anshar

    2018-01-01

    Full Text Available In contact assistive robots, a prolonged physical engagement between robots and humans with motor disabilities due to shoulder injuries, for instance, may at times lead humans to experience pain. In this situation, robots will require sophisticated capabilities, such as the ability to recognize human pain in advance and generate counter-responses as follow up emphatic action. Hence, it is important for robots to acquire an appropriate pain concept that allows them to develop these capabilities. This paper conceptualizes empathy generation through the realization of synthetic pain classes integrated into a robot’s self-awareness framework, and the implementation of fault detection on the robot body serves as a primary source of pain activation. Projection of human shoulder motion into the robot arm motion acts as a fusion process, which is used as a medium to gather information for analyses then to generate corresponding synthetic pain and emphatic responses. An experiment is designed to mirror a human peer’s shoulder motion into an observer robot. The results demonstrate that the fusion takes place accurately whenever unified internal states are achieved, allowing accurate classification of synthetic pain categories and generation of empathy responses in a timely fashion. Future works will consider a pain activation mechanism development.

  8. An Exoskeleton Robot for Human Forearm and Wrist Motion Assist

    Science.gov (United States)

    Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo

    The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human amplifier, haptic interface, rehabilitation device and assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as the elderly or slightly disabled, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for the forearm pronation/supination motion, wrist flexion/extension motion and ulnar/radial deviation. The paper describes the wrist anatomy toward the development of the exoskeleton robot, the hardware design of the exoskeleton robot and the EMG-based control method. The skin surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin surface EMG signals as the main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assist. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.
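
    The EMG-driven fuzzy assist described above can be sketched in three steps: extract an activation envelope from the raw EMG, fuzzify it with simple membership functions, and defuzzify the rule outputs into an assist torque. The filter constant, membership breakpoints, and torque levels are illustrative assumptions, not the W-EXOS controller.

      # Sketch of an EMG-driven assist torque with a simple fuzzy rule base.
      import numpy as np

      def emg_envelope(raw, alpha=0.05):
          """Full-wave rectification followed by a first-order low-pass filter."""
          env, y = [], 0.0
          for sample in np.abs(raw):
              y = (1 - alpha) * y + alpha * sample
              env.append(y)
          return np.array(env)

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def assist_torque(activation):
          # Fuzzy sets over normalized muscle activation (assumed breakpoints).
          low = tri(activation, -0.2, 0.0, 0.4)
          med = tri(activation, 0.2, 0.5, 0.8)
          high = tri(activation, 0.6, 1.0, 1.4)
          torques = np.array([0.0, 1.5, 3.0])    # assist torque per rule [Nm] (assumed)
          weights = np.array([low, med, high])
          # Weighted-average defuzzification of the rule outputs.
          return float(weights @ torques / max(weights.sum(), 1e-6))

      raw_emg = 0.3 * np.random.randn(500) * np.hanning(500)      # synthetic EMG burst
      activation = emg_envelope(raw_emg).max() / 0.3              # crude normalization (assumed)
      print("assist torque [Nm]:", round(assist_torque(activation), 2))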

  9. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. The cloud provides unlimited computation power, memory, storage and, especially, collaboration opportunities. Cloud-enabled robots are divided into two categories, standalone and networked robots. This article surveys cloud robotic platforms and standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM) and monitoring.

  10. Physical human-robot interaction of an active pelvis orthosis: toward ergonomic assessment of wearable robots.

    Science.gov (United States)

    d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola

    2017-04-14

    In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the experimental conditions; and (ii) small relative displacements between the APO orthotic shells and the corresponding body segments. These findings support the suitability of the proposed methodology for the ergonomics assessment of wearable robots.
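
    The first evaluation metric, the deviation of lower-limb joint kinematics with and without the device, can for instance be summarized as an RMS difference over the gait cycle, as in the sketch below. The joint-angle profiles are synthetic placeholders; only the computation of such a deviation metric is illustrated.

      # Sketch of an RMS joint-angle deviation metric (with-device vs. free walking).
      import numpy as np

      cycle = np.linspace(0, 100, 101)                            # % gait cycle
      hip_free = 30 * np.sin(2 * np.pi * cycle / 100)             # assumed hip angle, no device
      hip_apo = hip_free + np.random.normal(0, 1.0, cycle.size)   # assumed hip angle with device

      rms_deviation = np.sqrt(np.mean((hip_apo - hip_free) ** 2))
      print(f"RMS hip-angle deviation: {rms_deviation:.2f} deg")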

  11. Motor contagion during human-human and human-robot interaction.

    Directory of Open Access Journals (Sweden)

    Ambra Bisio

    Full Text Available Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, both in the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  12. Motor contagion during human-human and human-robot interaction.

    Science.gov (United States)

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, both in the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested to either reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.

  13. Robot 2015 : Second Iberian Robotics Conference : Advances in Robotics

    CERN Document Server

    Moreira, António; Lima, Pedro; Montano, Luis; Muñoz-Martinez, Victor

    2016-01-01

    This book contains a selection of papers accepted for presentation and discussion at ROBOT 2015: Second Iberian Robotics Conference, held in Lisbon, Portugal, November 19th-21th, 2015. ROBOT 2015 is part of a series of conferences that are a joint organization of SPR – “Sociedade Portuguesa de Robótica/ Portuguese Society for Robotics”, SEIDROB – Sociedad Española para la Investigación y Desarrollo de la Robótica/ Spanish Society for Research and Development in Robotics and CEA-GTRob – Grupo Temático de Robótica/ Robotics Thematic Group. The conference organization had also the collaboration of several universities and research institutes, including: University of Minho, University of Porto, University of Lisbon, Polytechnic Institute of Porto, University of Aveiro, University of Zaragoza, University of Malaga, LIACC, INESC-TEC and LARSyS. Robot 2015 was focussed on the Robotics scientific and technological activities in the Iberian Peninsula, although open to research and delegates from other...

  14. Everyday robotic action: Lessons from human action control

    Directory of Open Access Journals (Sweden)

    Roy eDe Kleijn

    2014-03-01

    Full Text Available Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally-extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations.

  15. Prospects of robotics in food industry

    Directory of Open Access Journals (Sweden)

    Jamshed IQBAL

    Full Text Available Technological advancements in various domains have broadened the application horizon of robotics to an incredible extent. Highlighting a very recent application area, this paper presents a comprehensive review of robotics applications in the food industry. Robots essentially have the potential to transform the processes in food processing and handling, palletizing and packing, and food serving. Therefore, recent years have witnessed a tremendously increased trend of robot deployment in the food sector. Consequently, the aspects related to robot kinematics, dynamics, hygiene, economic efficiency, human-robot interaction, safety and protection, and operation and maintenance are of critical importance and are discussed in the present review. A comparison of actual robots being used in the industry is also presented. The review reveals that the food serving sector is the new potential area in which ample research opportunities exist by integrating advancements from various technology domains. It is anticipated that wider dissemination of research developments in ‘robo-food’ will stimulate more collaborations among the research community and contribute to further developments.

  16. Exploring cultural factors in human-robot interaction : A matter of personality?

    NARCIS (Netherlands)

    Weiss, Astrid; Evers, Vanessa

    2011-01-01

    This paper proposes an experimental study to investigate task-dependence and cultural-background dependence of the personality trait attribution on humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits towards intelligent

  17. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    Directory of Open Access Journals (Sweden)

    Shanee Honig

    2018-06-01

    Full Text Available While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP), which guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a

  18. Collaborative Research in the Digital Humanities

    CERN Document Server

    Deegan, Marilyn

    2012-01-01

    Collaboration within digital humanities is both a pertinent and a pressing topic as the traditional mode of the humanist, working alone in his or her study, is supplemented by explicitly co-operative, interdependent and collaborative research. This is particularly true where computational methods are employed in large-scale digital humanities projects. This book, which celebrates the contributions of Harold Short to this field, presents fourteen essays by leading authors in the digital humanities. It addresses several issues of collaboration, from the multiple perspectives of institutions, pro

  19. Educational Robotics as Mindtools

    Science.gov (United States)

    Mikropoulos, Tassos A.; Bellou, Ioanna

    2013-01-01

    Although there are many studies on the constructionist use of educational robotics, they have certain limitations. Some of them refer to robotics education, rather than educational robotics. Others follow a constructionist approach, but give emphasis only to design skills, creativity and collaboration. Some studies use robotics as an educational…

  20. A Social Cognitive Neuroscience Stance on Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Chaminade Thierry

    2011-12-01

    Full Text Available Robotic devices, thanks to the controlled variations in their appearance and behaviors, provide useful tools to test hypotheses pertaining to social interactions. These agents were used to investigate one theoretical framework, resonance, which is defined, at the behavioral and neural levels, as an overlap between first- and third-person representations of mental states such as motor intentions or emotions. Behaviorally, we found a reduced, but significant, resonance towards a humanoid robot displaying biological motion, compared to a human. Using neuroimaging, we have reported that while perceptual processes in the human occipital and temporal lobe are more strongly engaged when perceiving a humanoid robot than a human action, activity in areas involved in motor resonance depends on attentional modulation for artificial agents more strongly than for human agents. Altogether, these studies using artificial agents offer valuable insights into the interaction of bottom-up and top-down processes in the perception of artificial agents.

  1. Human-like robots as platforms for electroactive polymers (EAP)

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2008-03-01

    Human-like robots, which have been science fiction for many years, are increasingly becoming an engineering reality thanks to many technology advances in recent years. Humans have always sought to imitate human appearance, functions and intelligence, and as the capability progresses such robots may become our household appliances or even companions. Biomimetic technologies are increasingly becoming common tools to support the development of such robots. As artificial muscles, electroactive polymers (EAP) offer important actuation capability for making such machines lifelike. The current limitations of EAP are hampering the capabilities that can be adopted in such robots, but progress is continually being made. As opposed to other human-made machines and devices, this technology raises various questions and concerns that need to be addressed. These include the need to prevent accidents, deliberate harm, or their use in crimes. In this paper the state of the art and the challenges are reviewed.

  2. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  3. Bio-inspired motion planning algorithms for autonomous robots facilitating greater plasticity for security applications

    Science.gov (United States)

    Guo, Yi; Hohil, Myron; Desai, Sachi V.

    2007-10-01

    Proposed are techniques toward using collaborative robots for infrastructure security applications by utilizing them as mobile sensor suites. A vast number of critical facilities/technologies must be protected against unauthorized intruders. Employing a team of mobile robots working cooperatively can free up valuable human resources. Addressed are the technical challenges for multi-robot teams in security applications and the implementation of a multi-robot motion planning algorithm based on a patrolling and threat-response scenario. A neural-network-based methodology is exploited to plan a patrolling path with complete coverage. Also described is a proof-of-principle experimental setup with a group of Pioneer 3-AT and Centibot robots. A block diagram of the system integration of sensing and planning illustrates the robot-to-robot interaction needed to operate as a collaborative unit. The proposed approach's singular goal is to overcome the limits of previous approaches to robots in security applications and to enable systems to be deployed for autonomous operation in an unaltered environment, providing access to an all-encompassing sensor suite.

  4. An Integrated Human System Interaction (HSI) Framework for Human-Agent Team Collaboration, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As space missions become more complex and as mission demands increase, robots, human-robot mixed initiative teams and software autonomy applications are needed to...

  5. A Taxonomy of Human-Agent Team Collaborations

    NARCIS (Netherlands)

    Neef, R.M.

    2006-01-01

    Future command teams will be heavily supported by artificial actors. This paper introduces a taxonomy of collaboration types in human–agent teams. Using two classifying dimensions, coordination type and collaboration type, eight different classes of human–agent collaborations transpire. These

  6. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  7. Surface Support Systems for Co-Operative and Integrated Human/Robotic Lunar Exploration

    Science.gov (United States)

    Mueller, Robert P.

    2006-01-01

    Human and robotic partnerships to realize space goals can enhance space missions and provide increases in human productivity while decreasing the hazards that the humans are exposed to. For lunar exploration, the harsh environment of the moon and the repetitive nature of the tasks involved with lunar outpost construction, maintenance and operation as well as production tasks associated with in-situ resource utilization, make it highly desirable to use robotic systems in co-operation with human activity. A human lunar outpost is functionally examined and concepts for selected human/robotic tasks are discussed in the context of a lunar outpost which will enable the presence of humans on the moon for extended periods of time.

  8. Collaborative Assistive Robot for Mobility Enhancement (CARMEN) The bare necessities assisted wheelchair navigation and beyond

    CERN Document Server

    Urdiales, Cristina

    2012-01-01

    In today's aging society, many people require mobility assistance. Sometimes, assistive devices need a certain degree of autonomy when users' disabilities make manual control difficult. However, clinicians report that excessive assistance may lead to loss of residual skills and frustration. Shared control focuses on deciding when users need help and providing it. Collaborative control aims at giving just the right amount of help in a transparent, seamless way. This book presents the collaborative control paradigm. User performance may be indicative of physical/cognitive condition, so it is used to decide how much help is needed. Besides, collaborative control integrates machine and user commands so that people contribute to self-motion at all times. Collaborative control was extensively tested for 3 years using a robotized wheelchair at a rehabilitation hospital in Rome with volunteer inpatients presenting different disabilities, ranging from mild to severe. We also present a taxonomy of common metrics for wheelc...

  9. Abstract robots with an attitude : applying interpersonal relation models to human-robot interaction

    NARCIS (Netherlands)

    Hiah, J.L.; Beursgens, L.; Haex, R.; Perez Romero, L.M.; Teh, Y.; Bhomer, ten M.; Berkel, van R.E.A.; Barakova, E.I.

    2013-01-01

    This paper explores new possibilities for social interaction between a human user and a robot with an abstract shape. The social interaction takes place by simulating behaviors such as submissiveness and dominance and analyzing the corresponding human reactions. We used an object that has no

  10. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  11. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    Science.gov (United States)

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

    Reinforcement learning (RL) enables robots to learn their optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (rewards) for RL. Initially we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
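    A minimal sketch of the idea described above, not the authors' implementation: a binary, ErrP-derived signal is turned into a reward that updates a gesture-to-action mapping. The gesture and action labels, the learning rate, and the simulated decoder accuracy are all assumptions made only for illustration.

```python
# Hypothetical sketch: learning a gesture-to-action mapping from implicit EEG feedback.
# A detected ErrP after a robot action is treated as a negative reward; its absence as
# a positive one. Labels and parameters are illustrative, not from the paper.
import random
from collections import defaultdict

GESTURES = ["wave", "point", "stop"]         # hypothetical gesture labels
ACTIONS = ["approach", "hand_over", "halt"]  # hypothetical robot actions

q = defaultdict(float)   # value of each (gesture, action) pair
alpha, epsilon = 0.3, 0.1

def choose_action(gesture):
    """Epsilon-greedy selection over the robot's candidate actions."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(gesture, a)])

def update(gesture, action, errp_detected):
    """Map the implicit feedback to a scalar reward and update the value estimate."""
    reward = -1.0 if errp_detected else 1.0
    q[(gesture, action)] += alpha * (reward - q[(gesture, action)])

# Toy loop with a simulated ground-truth mapping and a 90%-accurate ErrP decoder.
truth = dict(zip(GESTURES, ACTIONS))
for _ in range(300):
    g = random.choice(GESTURES)
    a = choose_action(g)
    wrong = a != truth[g]
    errp = wrong if random.random() < 0.9 else not wrong  # decoder noise
    update(g, a, errp)

print({g: max(ACTIONS, key=lambda a: q[(g, a)]) for g in GESTURES})
```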

  12. Goal inferences about robot behavior : goal inferences and human response behaviors

    NARCIS (Netherlands)

    Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.

    2014-01-01

    This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.

  13. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain the neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because the XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can be easily used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, using the XABSL programming platform, the experiment showed that a cube-stacking task was easily programmed by human demonstration.
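    As a rough illustration of the recognition module described above, the sketch below builds a small CNN that maps skin-filtered hand images onto the four therblig classes (TE, TL, G, RL). The input resolution and layer sizes are assumptions chosen for brevity, and the GMM skin-filtering step is omitted; the paper's exact architecture is not reproduced.

```python
# Illustrative four-class gesture CNN (TE, TL, G, RL); architecture details are assumed.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) skin-filtered hand crops
        return self.classifier(self.features(x).flatten(1))

model = GestureCNN()
logits = model(torch.randn(8, 1, 64, 64))  # dummy batch of hand images
print(logits.argmax(dim=1))                # indices into [TE, TL, G, RL]
```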

  14. Companies' human capital required for collaboration

    DEFF Research Database (Denmark)

    Albats, Ekaterina; Bogers, Marcel; Podmetina, Daria

    Universities are widely acknowledged as an important source of knowledge for corporate innovation, and collaboration with universities plays an important role in companies' open innovation strategy. However, little is known about the human capital components required for collaboration with universities. Analysing the results of the survey among over 500 company managers, we define the universal employee skills required for a company's successful collaborations with external stakeholders. Then, through analysing qualitative interview data, we distinguish between these skills and capabilities ... building, relationship building, IPR management and negotiation in the context of collaboration with universities. Our research has revealed the importance of expectation management skills for the university-industry collaboration (UIC) context. We found that human capital for UIC is to be continuously ...

  15. A human-oriented framework for developing assistive service robots.

    Science.gov (United States)

    McGinn, Conor; Cullinan, Michael F; Culleton, Mark; Kelly, Kevin

    2018-04-01

    Multipurpose robots that can perform a range of useful tasks have the potential to increase the quality of life for many people living with disabilities. Owing to factors such as high system complexity, as-yet unresolved research questions and current technology limitations, there is a need for effective strategies to coordinate the development process. Integrating established methodologies based on human-centred design and universal design, a framework was formulated to coordinate the robot design process over successive iterations of prototype development. An account is given of how the framework was practically applied to the problem of developing a personal service robot. Application of the framework led to the formation of several design goals which addressed a wide range of identified user needs. The resultant prototype solution, which consisted of several component elements, succeeded in demonstrating the performance stipulated by all of the proposed metrics. Application of the framework resulted in the development of a complex prototype that addressed many aspects of the functional and usability requirements of a personal service robot. Following the process led to several important insights which directly benefit the development of subsequent prototypes. Implications for Rehabilitation: This research shows how universal design might be used to formulate usability requirements for assistive service robots. A framework is presented that guides the process of designing service robots in a human-centred way. Through practical application of the framework, a prototype robot system that addressed a range of identified user needs was developed.

  16. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

    DEFF Research Database (Denmark)

    Liu, Hin-Yan; Zawieska, Karolina

    2017-01-01

    ... to act responsibly. This subsists within a larger phenomenon where the difference between humans and non-humans, be it animals or artificial systems, appears to be increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies. The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side ...

  17. Robotic situational awareness of actions in human teaming

    Science.gov (United States)

    Tahmoush, Dave

    2015-06-01

    When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. Twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
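    A minimal sketch of the final classification step mentioned above, with the skeleton extraction and physics-based velocity correlation replaced by a random feature placeholder; the feature dimensionality and dataset size are assumptions.

```python
# Nearest-neighbour action recognition on (placeholder) joint-motion features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_clips, n_features, n_actions = 200, 60, 20   # assumed sizes; 20 action classes
X = rng.normal(size=(n_clips, n_features))     # stand-in for velocity-correlation features
y = rng.integers(0, n_actions, size=n_clips)   # stand-in action labels

clf = KNeighborsClassifier(n_neighbors=1).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```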

  18. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

    Science.gov (United States)

    2011-10-01

    ... directly affects the willingness of people to accept robot-produced information, follow robots' suggestions, and thus benefit from the advantages inherent ... perceived complexity of operation). Consequently, if the perceived risk of using the robot exceeds its perceived benefit, practical operators almost ... necessary presence of a human caregiver (Graf, Hans, & Schraft, 2004). Other robotic devices, such as wheelchairs (Yanco, 2001) and exoskeletons (e.g

  19. Singularity now: using the ventricular assist device as a model for future human-robotic physiology.

    Science.gov (United States)

    Martin, Archer K

    2016-04-01

    In our 21st-century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices (VADs).

  20. Compliance control based on PSO algorithm to improve the feeling during physical human-robot interaction.

    Science.gov (United States)

    Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei

    2016-01-01

    Robots play increasingly important roles in daily life and bring us a lot of convenience. But when people work with robots, there remain significant differences between human-human interaction and human-robot interaction. It is our goal to make robots behave in an even more human-like way. We design a controller which can sense the force acting on any point of a robot and ensure the robot moves according to that force. First, a spring-mass-dashpot system is used to describe the physical model, and this second-order system is the kernel of the controller. From it, we establish the state-space equations of the system. In addition, a particle swarm optimization algorithm is used to obtain the system parameters. In order to test the stability of the system, a root-locus diagram is shown in the paper. Ultimately, experiments were carried out on the robotic spinal surgery system developed by our team, and the results show that the new controller performs better during human-robot interaction.
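    A sketch of the second-order spring-mass-dashpot relation at the heart of such a controller, m·x'' + d·x' + k·x = f_ext, integrated with a simple Euler step. The parameter values below are placeholders; in the paper the parameters are obtained by particle swarm optimization, which is omitted here.

```python
# Compliant (admittance-style) reference motion under an external force; parameters assumed.
def admittance_step(x, v, f_ext, m=2.0, d=15.0, k=50.0, dt=0.002):
    """One Euler step of m*x'' + d*x' + k*x = f_ext."""
    a = (f_ext - d * v - k * x) / m
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

x, v = 0.0, 0.0
trajectory = []
for step in range(1000):
    f = 10.0 if step < 500 else 0.0   # a 10 N push for the first second
    x, v = admittance_step(x, v, f)
    trajectory.append(x)
print("peak displacement under load:", max(trajectory))
```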

  1. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  2. Robotic hip arthroscopy in human anatomy.

    Science.gov (United States)

    Kather, Jens; Hagen, Monika E; Morel, Philippe; Fasel, Jean; Markar, Sheraz; Schueler, Michael

    2010-09-01

    Robotic technology offers technical advantages that might offer new solutions for hip arthroscopy. Two hip arthroscopies were performed in human cadavers using the da Vinci surgical system. During both surgeries, a robotic camera and 5 or 8 mm da Vinci trocars with instruments were inserted into the hip joint for manipulation. Introduction of cameras and working instruments, docking of the robotic system and instrument manipulation was successful in both cases. The long articulating area of 5 mm instruments limited movements inside the joint; an 8 mm instrument with a shorter area of articulation offered an improved range of motion. Hip arthroscopy using the da Vinci standard system appears a feasible alternative to standard arthroscopy. Instruments and method of application must be modified and improved before routine clinical application but further research in this area seems justified, considering the clinical value of such an approach. Copyright 2010 John Wiley & Sons, Ltd.

  3. Analytical basis for evaluating the effect of unplanned interventions on the effectiveness of a human-robot system

    International Nuclear Information System (INIS)

    Shah, Julie A.; Saleh, Joseph H.; Hoffman, Jeffrey A.

    2008-01-01

    Increasing prevalence of human-robot systems in a variety of applications raises the question of how to design these systems to best leverage the capabilities of humans and robots. In this paper, we address the relationships between reliability, productivity, and risk to humans from human-robot systems operating in a hostile environment. Objectives for maximizing the effectiveness of a human-robot system are presented, which capture these coupled relationships, and reliability parameters are proposed to characterize unplanned interventions between a human and robot. The reliability metrics defined here take on an expanded meaning in which the underlying concept of failure in traditional reliability analysis is replaced by the notion of intervention. In the context of human-robotic systems, an intervention is not only driven by component failures, but includes many other factors that can lead a robotic agent to request, or a human agent to provide, intervention, as we argue in this paper. The effect of unplanned interventions on the effectiveness of human-robot systems is then investigated analytically using traditional reliability analysis. Finally, we discuss the implications of these analytical trends on the design and evaluation of human-robot systems.

  4. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    Science.gov (United States)

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554

  5. Intelligent Interaction for Human-Friendly Service Robot in Smart House Environment

    Directory of Open Access Journals (Sweden)

    Z. Zenn Bien

    2008-01-01

    Full Text Available The smart house under consideration is a service-integrated complex system to assist older persons and/or people with disabilities. The primary goal of the system is to achieve independent living through various robotic devices and systems. Such a system is treated as a human-in-the-loop system in which human-robot interaction takes place intensely and frequently. Based on our experiences of having designed and implemented a smart house environment, called Intelligent Sweet Home (ISH), we present a framework for realizing a human-friendly HRI (human-robot interaction) module with various effective techniques of computational intelligence. More specifically, we partition the robotic tasks of the HRI module into three groups in consideration of the level of specificity, fuzziness or uncertainty of the context of the system, and present an effective interaction method for each case. We first show a task planning algorithm and its architecture to deal with well-structured tasks autonomously through a simplified set of user commands instead of inconvenient manual operations. To provide the capability of interacting in a human-friendly way in a fuzzy context, it is proposed that the robot make use of human bio-signals as input to the HRI module, as shown in a hand gesture recognition system called a soft remote control system. Finally we discuss a probabilistic fuzzy rule-based life-long learning system, equipped with intention reading capability by learning human behavioral patterns, which is introduced as a solution in uncertain and time-varying situations.

  6. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, "communicative" patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual "eye-to-eye" exchange of looks between human and robot. Further, we provide an example of "eye mouse" superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and the

  7. Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.

    Science.gov (United States)

    Borenstein, Jason; Arkin, Ron

    2016-02-01

    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way to influence human behavior is already well-established but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.

  8. Ontological Reasoning for Human-Robot Teaming in Search and Rescue Missions

    NARCIS (Netherlands)

    Bagosi, T.; Hindriks, k.V.; Neerincx, M.A.

    2016-01-01

    In search and rescue missions robots are used to help rescue workers in exploring the disaster site. Our research focuses on how multiple robots and rescuers act as a team, and build up situation awareness. We propose a multi-agent system where each agent supports one member, either human or robot.

  9. Integrated Human-Robotic Missions to the Moon and Mars: Mission Operations Design Implications

    Science.gov (United States)

    Mishkin, Andrew; Lee, Young; Korth, David; LeBlanc, Troy

    2007-01-01

    For most of the history of space exploration, human and robotic programs have been independent, and have responded to distinct requirements. The NASA Vision for Space Exploration calls for the return of humans to the Moon, and the eventual human exploration of Mars; the complexity of this range of missions will require an unprecedented use of automation and robotics in support of human crews. The challenges of human Mars missions, including roundtrip communications time delays of 6 to 40 minutes, interplanetary transit times of many months, and the need to manage lifecycle costs, will require the evolution of a new mission operations paradigm far less dependent on real-time monitoring and response by an Earthbound operations team. Robotic systems and automation will augment human capability, increase human safety by providing means to perform many tasks without requiring immediate human presence, and enable the transfer of traditional mission control tasks from the ground to crews. Developing and validating the new paradigm and its associated infrastructure may place requirements on operations design for nearer-term lunar missions. The authors, representing both the human and robotic mission operations communities, assess human lunar and Mars mission challenges, and consider how human-robot operations may be integrated to enable efficient joint operations, with the eventual emergence of a unified exploration operations culture.

  10. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed

    Directory of Open Access Journals (Sweden)

    Anne Reuten

    2018-05-01

    Full Text Available Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and of comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.

  11. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    Directory of Open Access Journals (Sweden)

    Hooman Samani

    2013-12-01

    Full Text Available In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. According to the importance of the embodiment of robots in the sense of presence, the influence of robots in communication culture is anticipated. The sustainability of robotics culture based on diversity for cultural communities for various acceptance modalities is explored in order to anticipate the creation of different attributes of culture between robots and humans in the future.

  12. Automation and Robotics for Human Mars Exploration (AROMA)

    Science.gov (United States)

    Hofmann, Peter; von Richter, Andreas

    2003-01-01

    Automation and Robotics (A&R) systems are a key technology for Mars exploration. All over the world, initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. From December 2000 to February 2002, Kayser-Threde GmbH, Munich, Germany led a study called AROMA (Automation and Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals of this effort is to initiate new developments and to maintain the competitiveness of European industry within this field. © 2003 Published by Elsevier Science Ltd.

  13. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    Science.gov (United States)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in the shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends on both the level of the robots' autonomy and the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchies of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
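    A purely hypothetical illustration of mapping gesture-based operator input onto commands at different group levels of autonomy (G-LOA). The command strings below are invented for this sketch; the paper's actual UGVL syntax is not given in the abstract.

```python
# Hypothetical gesture-to-command translation at three G-LOA scopes.
GESTURE_TO_COMMAND = {
    ("point_forward", "single"): "MOVE robot_{id} TO waypoint",
    ("point_forward", "leader"): "MOVE leader TO waypoint; FOLLOWERS CLONE leader",
    ("point_forward", "group"):  "MOVE group TO waypoint IN formation",
    ("halt", "group"):           "STOP group",
}

def translate(gesture, scope, robot_id=None):
    """Resolve a recognized gesture plus a G-LOA scope into a command string."""
    template = GESTURE_TO_COMMAND[(gesture, scope)]
    return template.format(id=robot_id) if robot_id is not None else template

print(translate("point_forward", "single", robot_id=2))
print(translate("point_forward", "group"))
```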

  14. Human Assisted Robotic Vehicle Studies - A conceptual end-to-end mission architecture

    Science.gov (United States)

    Lehner, B. A. E.; Mazzotta, D. G.; Teeney, L.; Spina, F.; Filosa, A.; Pou, A. Canals; Schlechten, J.; Campbell, S.; Soriano, P. López

    2017-11-01

    With current space exploration roadmaps indicating the Moon as a proving ground on the way to human exploration of Mars, it is clear that human-robotic partnerships will play a key role for successful future human space missions. This paper details a conceptual end-to-end architecture for an exploration mission in cis-lunar space with a focus on human-robot interactions, called Human Assisted Robotic Vehicle Studies (HARVeSt). HARVeSt will build on knowledge of plant growth in space gained from experiments on-board the ISS and test the first growth of plants on the Moon. A planned deep space habitat will be utilised as the base of operations for human-robotic elements of the mission. The mission will serve as a technology demonstrator not only for autonomous tele-operations in cis-lunar space but also for key enabling technologies for future human surface missions. The successful approach of the ISS will be built on in this mission with international cooperation. Mission assets such as a modular rover will allow for an extendable mission and to scout and prepare the area for the start of an international Moon Village.

  15. Physiological and subjective evaluation of a human-robot object hand-over task.

    Science.gov (United States)

    Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël

    2011-11-01

    In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer by explicitly taking into account the legibility, the safety and the physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable to compare our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we have defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate them on a three dimensional subjective scale. It has appeared that the subjective data were in favor of our reference motion. Eventually the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.

  16. Ninety-six hours to build a prototype robot showing human emotions

    CERN Multimedia

    Stefania Pandolfi

    2016-01-01

    Thirty-five Master's students in the fields of business, design and engineering participated in an intensive five-day project-based introduction to programming and advanced electronics. The goal of the initiative was to build a fully functional prototype robot able to communicate and show at least four basic human emotions.    A group of students is presenting a prototype robot showing human emotions at IdeaSquare. With no previous experience in electronics or coding, groups of students from Portugal, Italy, Norway and Estonia were introduced to the basics of sensors, integrated circuits and actuators, and after just 96 hours they presented their functioning robots at IdeaSquare on Friday, 15 January. These robots, mostly built around Arduino boards and recycled materials, were able to display different human emotions as a response to external environmental inputs. The five-day workshop, called öBot, was organised by the IdeaSquare te...

  17. Becoming Earth Independent: Human-Automation-Robotics Integration Challenges for Future Space Exploration

    Science.gov (United States)

    Marquez, Jessica J.

    2016-01-01

    Future exploration missions will require NASA to integrate more automation and robotics in order to accomplish mission objectives. This presentation will describe the future challenges facing human operators (astronauts, ground controllers) as we increase the amount of automation and robotics in spaceflight operations. It will describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. This presentation will outline future human-automation-robotic integration challenges.

  18. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that by incorporating the depth information, together with the use of modern techniques in new ways, we are able to create an
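    A rough sketch of the first preprocessing step described above: removing ground and ceiling points by height thresholding and clustering what remains into human candidates. The thresholds and the use of DBSCAN are assumptions; the paper's depth-of-interest selection and detector cascade are not reproduced.

```python
# Ground/ceiling removal and candidate clustering on a (dummy) color-depth point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

def human_candidates(points, ground_z=0.05, ceiling_z=2.5):
    """points: (N, 3) array in metres with z up; returns candidate point clusters."""
    kept = points[(points[:, 2] > ground_z) & (points[:, 2] < ceiling_z)]
    if len(kept) == 0:
        return []
    labels = DBSCAN(eps=0.15, min_samples=30).fit_predict(kept)
    return [kept[labels == c] for c in set(labels) if c != -1]

cloud = np.random.uniform([-3, -3, 0], [3, 3, 3], size=(5000, 3))  # dummy cloud
print(len(human_candidates(cloud)), "candidate clusters")
```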

  19. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  20. Utilization of Human-Like Pelvic Rotation for Running Robot

    Directory of Open Access Journals (Sweden)

    Takuya eOtani

    2015-07-01

    Full Text Available The spring loaded inverted pendulum (SLIP) is used to model human running. It is based on a characteristic feature of human running, in which the linear-spring-like motion of the standing leg is produced by the joint stiffness of the knee and ankle. Although this model is widely used in robotics, it does not include human-like pelvic motion. In this study, we show that the pelvis actually contributes to the increase in jumping force and the absorption of landing impact. On the basis of this finding, we propose a new model, SLIP2 (spring loaded inverted pendulum with pelvis), to improve running in humanoid robots. The model is composed of a body mass, a pelvis, and leg springs, and it can control its springs while running by use of pelvic movement in the frontal plane. To achieve running motions, we developed a running control system that includes a pelvic oscillation controller to attain control over jumping power and a landing placement controller to adjust the running speed. We also developed a new running robot by using the SLIP2 model and performed hopping and running experiments to evaluate the model. The developed robot could accomplish hopping motions by pelvic movement alone. The results also established that the difference between the pelvic rotational phase and the oscillation phase of the vertical mass displacement affects the jumping force. In addition, the robot demonstrated the ability to run with a foot placement controller depending on the reference running speed.
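    An illustrative sketch of the basic SLIP stance dynamics referenced above (without the pelvis extension of SLIP2): a point mass on a massless leg spring under gravity. All parameter values are assumptions chosen only to make the example run.

```python
# Stance-phase SLIP dynamics; the spring only pushes while compressed (l < l0).
import numpy as np
from scipy.integrate import solve_ivp

m, k, l0, g = 70.0, 15000.0, 1.0, 9.81   # mass, leg stiffness, rest leg length, gravity
foot = np.array([0.0, 0.0])              # stance foot position

def slip_stance(t, s):
    x, z, vx, vz = s
    leg = np.array([x, z]) - foot
    l = np.linalg.norm(leg)
    f = k * max(l0 - l, 0.0) * leg / l   # spring force along the leg, compression only
    return [vx, vz, f[0] / m, f[1] / m - g]

s0 = [-0.1, 0.97, 1.2, -0.3]             # touchdown state: position and velocity
sol = solve_ivp(slip_stance, (0.0, 0.35), s0, max_step=1e-3)
print("vertical velocity at end of stance window:", sol.y[3, -1])
```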

  1. Robots, Disability, and Good Human Life

    Directory of Open Access Journals (Sweden)

    Antonio Carnevale

    2015-02-01

    Full Text Available In this paper, I want to show the role that emerging robotic technologies could play in the future daily life of disabled people. When I talk about disability, I mean any temporary or permanent limitation due to a chronic disease or deficit, as well as socially disadvantaged conditions, which imply functional and emotional restrictions experienced at any age. All these limitations can be characterized by a specific mental or physical impairment or, more often, by a cluster of medical impairments and social barriers. To this end, the academic literature has generally differentiated between two disability models: 'medical' versus 'social'. The main aim of this paper is to show how the development of robotic technologies — particularly in assistive and healthcare fields — could allow us to go beyond this outdated dichotomy, contributing to create new philosophical premises to rethink the universality of the human condition, that is, the sense of what we intend by a 'good human life'.

  2. Exploring child-robot engagement in a collaborative task

    NARCIS (Netherlands)

    Zaga, Cristina; Truong, Khiet Phuong; Lohse, M.; Evers, Vanessa

    Imagine a room with toys scattered on the floor and a robot that is motivating a small group of children to tidy up. This scenario poses real-world challenges for the robot, e.g., the robot needs to navigate autonomously in a cluttered environment, it needs to classify and grasp objects, and it

  3. Strategies for human-driven robot comprehension of spatial descriptions by older adults in a robot fetch task.

    Science.gov (United States)

    Carlson, Laura; Skubic, Marjorie; Miller, Jared; Huo, Zhiyu; Alexenko, Tatiana

    2014-07-01

    This contribution presents a corpus of spatial descriptions and describes the development of a human-driven spatial language robot system for their comprehension. The domain of application is an eldercare setting in which an assistive robot is asked to "fetch" an object for an elderly resident based on a natural language spatial description given by the resident. In Part One, we describe a corpus of naturally occurring descriptions elicited from a group of older adults within a virtual 3D home that simulates the eldercare setting. We contrast descriptions elicited when participants offered descriptions to a human versus robot avatar, and under instructions to tell the addressee how to find the target versus where the target is. We summarize the key features of the spatial descriptions, including their dynamic versus static nature and the perspective adopted by the speaker. In Part Two, we discuss critical cognitive and perceptual processing capabilities necessary for the robot to establish a common ground with the human user and perform the "fetch" task. Based on the collected corpus, we focus here on resolving the perspective ambiguity and recognizing furniture items used as landmarks in the descriptions. Taken together, the work presented here offers the key building blocks of a robust system that takes as input natural spatial language descriptions and produces commands that drive the robot to successfully fetch objects within our eldercare scenario. Copyright © 2014 Cognitive Science Society, Inc.

  4. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements that is useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis was carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours among the involved subjects, thus providing constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method was developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints; it is characterized by a reduced computational cost. The grasp stability was tested by introducing a quality index that satisfies the form-closure property. The grasping strategy was validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results show the effectiveness of the extracted indicators in reducing the complexity of the non-linear optimization problem and in leading to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results also highlight the limitations of the adopted robotic platform (mainly due to its mechanical structure) in achieving the optimal grasp configuration.
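    A hedged sketch of the optimization step described above: the Nelder-Mead simplex method applied to a toy grasp-quality objective. The real objective encodes the invariant-posture indicators extracted from the human motion data and a form-closure quality index, neither of which is reproduced here.

```python
# Nelder-Mead over a made-up grasp cost; reference angles and weights are assumptions.
import numpy as np
from scipy.optimize import minimize

target_flexion = np.array([0.6, 0.9, 0.8, 0.8, 0.7])   # assumed reference joint angles (rad)

def grasp_cost(joint_angles):
    """Toy cost: deviation from a reference posture plus a joint-limit penalty."""
    deviation = np.sum((joint_angles - target_flexion) ** 2)
    limit_penalty = np.sum(np.clip(joint_angles - 1.5, 0.0, None) ** 2)
    return deviation + 10.0 * limit_penalty

res = minimize(grasp_cost, x0=np.zeros(5), method="Nelder-Mead", options={"xatol": 1e-4})
print("optimal joint angles:", np.round(res.x, 3))
```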

  5. Robotics Algorithms Provide Nutritional Guidelines

    Science.gov (United States)

    2009-01-01

    On July 5, 1997, a small robot emerged from its lander like an insect from an egg, crawling out onto the rocky surface of Mars. About the size of a child's wagon, NASA's Sojourner robot was the first successful rover mission to the Red Planet. For 83 sols (Martian days, typically about 40 minutes longer than Earth days), Sojourner - largely remote controlled by NASA operators on Earth - transmitted photos and data unlike any previously collected. Sojourner was perhaps the crowning achievement of the NASA Space Telerobotics Program, an Agency initiative designed to push the limits of robotics in space. Telerobotics - devices that merge the autonomy of robotics with the direct human control of teleoperators - was already a part of NASA's efforts; probes like the Viking landers that preceded Sojourner on Mars, for example, were telerobotic applications. The Space Telerobotics Program, a collaboration between Ames Research Center, Johnson Space Center, Jet Propulsion Laboratory (JPL), and multiple universities, focused on developing remote-controlled robotics for three main purposes: on-orbit assembly and servicing, science payload tending, and planetary surface robotics. The overarching goal was to create robots that could be guided to build structures in space, monitor scientific experiments, and, like Sojourner, scout distant planets in advance of human explorers. While telerobotics remains a significant aspect of NASA's efforts - as evidenced by the currently operating Spirit and Opportunity Mars rovers, the Hubble Space Telescope, and many others - the Space Telerobotics Program was dissolved and redistributed within the Agency the same year as Sojourner's success. The program produced a host of remarkable technologies and surprising inspirations, including one that is changing the way people eat.

  6. Socially intelligent robots that understand and respond to human touch

    NARCIS (Netherlands)

    Jung, Merel Madeleine

    Touch is an important nonverbal form of interpersonal interaction which is used to communicate emotions and other social messages. As interactions with social robots are likely to become more common in the near future, these robots should also be able to engage in tactile interaction with humans.

  7. Enhancing the effectiveness of human-robot teaming with a closed-loop system.

    Science.gov (United States)

    Teo, Grace; Reinerman-Jones, Lauren; Matthews, Gerald; Szalma, James; Jentsch, Florian; Hancock, Peter

    2018-02-01

    With technological developments in robotics and their increasing deployment, human-robot teams are set to be a mainstay in the future. To develop robots that possess teaming capabilities, such as being able to communicate implicitly, the present study implemented a closed-loop system. This system enabled the robot to provide adaptive aid without the need for explicit commands from the human teammate, through the use of multiple physiological workload measures. Such measures of workload vary in sensitivity, and there is large inter-individual variability in physiological responses to imposed taskload. Workload models enacted via a closed-loop system should accommodate such individual variability. The present research investigated the effects of adaptive robot aid vs. imposed aid on performance and workload. Results showed that adaptive robot aid driven by an individualized workload model of physiological response resulted in greater improvements in performance compared to aid that was simply imposed by the system. Copyright © 2017 Elsevier Ltd. All rights reserved.
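    A minimal sketch of the closed-loop idea described above: each operator's own baseline turns raw physiological measures into z-scores, and sustained elevation triggers robot aid without an explicit command. The signal dimensionality, window length and threshold are assumptions.

```python
# Individualized workload model driving an adaptive-aid trigger; parameters assumed.
import numpy as np

class WorkloadModel:
    def __init__(self, baseline_samples):
        """baseline_samples: (N, M) array of M physiological measures recorded at rest."""
        self.mu = baseline_samples.mean(axis=0)
        self.sigma = baseline_samples.std(axis=0) + 1e-9

    def workload(self, sample):
        """Average z-score across measures, relative to this operator's own baseline."""
        return float(np.mean((sample - self.mu) / self.sigma))

def should_aid(model, recent_samples, threshold=1.5):
    """Offer adaptive aid only if workload stays elevated over the recent window."""
    return np.mean([model.workload(s) for s in recent_samples]) > threshold

rng = np.random.default_rng(1)
model = WorkloadModel(rng.normal(0.0, 1.0, size=(120, 3)))   # calibration phase
window = rng.normal(2.0, 1.0, size=(10, 3))                  # simulated high-load window
print("offer aid:", should_aid(model, window))
```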

  8. Acquisition of Human Operation Characteristics for Kite-based Tethered Flying Robot using Human Operation Data

    OpenAIRE

    Todoroki, Chiaki; Takahashi, Yasutake; Nakamura, Takayuki

    2015-01-01

    This paper presents human skill acquisition systems for controlling the kite-based tethered flying robot. The kite-based tethered flying robot has been proposed as a flying observation system with long-term activity capability[1]. It is a relatively new system, aimed at complementing other information gathering systems that use a balloon or an air vehicle. This paper shows some approaches to acquiring human operation characteristics based on a fuzzy learning controller, the k-nearest neighbor algorithm, and ...

  9. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine, improvised explosive devices (IEDs), and chemical, biological, radiological, nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high technology computer-supported applications, current robotic design practices rely heavily upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  10. Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    Science.gov (United States)

    Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin

    2011-01-01

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and

  11. A journey from robot to digital human mathematical principles and applications with MATLAB programming

    CERN Document Server

    Gu, Edward Y L

    2013-01-01

    This book provides readers with a solid set of diversified and essential tools for the theoretical modeling and control of complex robotic systems, as well as for digital human modeling and realistic motion generation. Following a comprehensive introduction to the fundamentals of robotic kinematics, dynamics and control systems design, the author extends robotic modeling procedures and motion algorithms to a much higher-dimensional, larger scale and more sophisticated research area, namely digital human modeling. Most of the methods are illustrated by MATLAB™ codes and sample graphical visualizations, offering a unique closed loop between conceptual understanding and visualization. Readers are guided through practicing and creating 3D graphics for robot arms as well as digital human models in MATLAB™, and through driving them for real-time animation. This work is intended to serve as a robotics textbook with an extension to digital human modeling for senior undergraduate and graduate engineering students....

  12. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand.

    Science.gov (United States)

    Kent, Benjamin A; Engeberg, Erik D

    2014-11-07

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques.

  13. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand

    International Nuclear Information System (INIS)

    Kent, Benjamin A; Engeberg, Erik D

    2014-01-01

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques. (paper)
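
    As a rough illustration of the feedback-synergy idea described above - each finger's extension command draws on position, force, and abduction feedback from the whole hand - a hedged one-joint sketch follows. The gains and structure are assumptions for illustration, not the published HFS control law.

    # Sketch of the feedback-synergy idea: a finger's extension command combines its own
    # position error with force and abduction feedback from the hand. Hypothetical gains.
    def extension_command(theta, theta_ref, contact_force, abduction,
                          kp=2.0, kf=0.5, ka=0.8):
        position_term = kp * (theta_ref - theta)   # track a natural extension posture
        force_term = kf * contact_force            # extend more as contact force grows
        abduction_term = ka * abduction            # couple abduction into extension
        return position_term + force_term + abduction_term

    # Finger pressed against a surface: larger contact force drives extra extension.
    print(extension_command(theta=0.3, theta_ref=0.5, contact_force=2.0, abduction=0.1))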

  14. Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

    NARCIS (Netherlands)

    Frank, L.; Nyholm, S.

    2017-01-01

    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper

  15. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    Science.gov (United States)

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  16. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    Directory of Open Access Journals (Sweden)

    Fahed Awad

    2018-01-01

    Full Text Available Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
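
    A minimal sketch of the kind of estimate such samples enable: given RSS readings at known robot positions, fit the access point location under a log-distance path-loss model by grid search. The model constants, grid, and data below are assumptions; the paper's actual estimator and sampling strategy may differ.

    # Sketch: estimate an access point's (x, y) from RSS samples collected at known
    # robot positions, using a log-distance path-loss model and a grid search.
    import math

    def rss_model(d, p0=-40.0, n=2.5):
        # Expected RSS (dBm) at distance d metres: P0 at 1 m, path-loss exponent n.
        return p0 - 10 * n * math.log10(max(d, 0.1))

    def estimate_ap(samples, area=20.0, step=0.5):
        # samples: list of ((x, y), rss_dbm) pairs gathered by the robots.
        best, best_err = None, float("inf")
        y = 0.0
        while y <= area:
            x = 0.0
            while x <= area:
                err = sum((rss - rss_model(math.hypot(x - sx, y - sy))) ** 2
                          for (sx, sy), rss in samples)
                if err < best_err:
                    best, best_err = (x, y), err
                x += step
            y += step
        return best

    true_ap = (5.0, 7.0)                      # used only to synthesize noiseless test data
    positions = [(0, 0), (10, 0), (0, 10), (10, 10)]
    samples = [((x, y), rss_model(math.hypot(x - true_ap[0], y - true_ap[1])))
               for x, y in positions]
    print(estimate_ap(samples))               # close to (5.0, 7.0)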

  17. Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults.

    Science.gov (United States)

    Louie, Wing-Yue Geoffrey; McColl, Derek; Nejat, Goldie

    2014-01-01

    Recent studies have shown that cognitive and social interventions are crucial to the overall health of older adults, including their psychological, cognitive, and physical well-being. However, due to the rapidly growing elderly population of the world, the resources and people to provide these interventions are lacking. Our work focuses on the use of social robotic technologies to provide person-centered cognitive interventions. In this article, we investigate the acceptance and attitudes of older adults toward the human-like expressive socially assistive robot Brian 2.1 in order to determine if the robot's human-like assistive and social characteristics would promote the use of the robot as a cognitive and social interaction tool to aid with activities of daily living. The results of a robot acceptance questionnaire administered during a robot demonstration session with a group of 46 elderly adults showed that the majority of the individuals had positive attitudes toward the socially assistive robot and its intended applications.

  18. Depth camera driven mobile robot for human localization and following

    DEFF Research Database (Denmark)

    Skordilis, Nikolaos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2014-01-01

    In this paper the design and the development of a mobile robot able to locate and then follow a human target is described. Both the integration of the required mechatronics components and the development of appropriate software are covered. The main sensor of the developed mobile robot is an RGB-...

  19. Self-Organization and Human Robots

    Directory of Open Access Journals (Sweden)

    Chris Lucas

    2008-11-01

    Full Text Available Humans are rather funny things; we often tend to imagine that we are so 'special', so divorced by our supposed 'intelligence' from the influences of the 'natural world' and so unique in our 'abstracting' abilities. We have this persistent delusion, evident since ancient Greek times, that we are 'rational', that we can behave as 'disinterested observers' of our world, which manifests in AI thought today in a belief that, in a like manner, we can 'design', God-like, from afar, our replacements, those 'super-robots' that will do everything that we can imagine doing, but in much 'better' ways than we can achieve, and yet can avoid doing anything 'nasty', i.e. can overcome our many human failings - obeying, I suppose, in the process, Asimov's three 'laws of robotics'. Such human naiveté proves, in fact, to be quite amusing, at least to those of us 'schooled' in AI history. When we look at the aspirations and the expectations of our early 'pioneers', and compare them to the actual reality of today, then we must, it seems, re-discover the meaning of the word 'humility'. Enthusiasm, good as it may be, needs to be moderated with a touch of 'common sense', and if our current ways of doing things in our AI world don't really work as we had hoped, then perhaps it is time to try something different (Lucas, C., 1999).

  20. Human-friendly robotic manipulators: safety and performance issues in controller design

    NARCIS (Netherlands)

    Tadele, T.S.

    2014-01-01

    Recent advances in robotics have spurred its adoption into new application areas such as medical, rescue, transportation, logistics, personal care and entertainment. In the personal care domain, robots are expected to operate in human-present environments and provide non-critical assistance.

  1. Moving NASA Beyond Low Earth Orbit: Future Human-Automation-Robotic Integration Challenges

    Science.gov (United States)

    Marquez, Jessica

    2016-01-01

    This presentation will provide an overview of current human spaceflight operations. It will also describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. Additionally, there are many implications regarding advanced automation and robotics, and this presentation will outline future human-automation-robotic integration challenges.

  2. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), the hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  3. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

    Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), the hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
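
    The hierarchy idea can be illustrated, very loosely, without hMLDA: cluster positions at a fine and a coarse level and count which location names co-occur with each cluster. The sketch below is a simplified stand-in with made-up observations, not the proposed Bayesian model.

    # Sketch: two-level spatial "concepts" via k-means plus name co-occurrence counts
    # (a greatly simplified stand-in for the paper's hMLDA-based method).
    import random
    from collections import Counter

    def kmeans(points, k, iters=20):
        random.seed(0)
        centroids = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
                groups[i].append(p)
            centroids = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g)) if g else centroids[j]
                         for j, g in enumerate(groups)]
        return centroids

    def assign(p, centroids):
        return min(range(len(centroids)),
                   key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)

    # (position, spoken location name) observations, e.g. from MCL poses and speech.
    obs = [((0.5, 0.5), "kitchen"), ((0.7, 0.4), "kitchen"), ((0.6, 0.6), "home"),
           ((5.2, 5.1), "bedroom"), ((5.0, 5.3), "home"), ((5.3, 5.0), "bedroom")]
    positions = [p for p, _ in obs]
    fine_level = kmeans(positions, 2)          # fine place categories (e.g. kitchen, bedroom)
    coarse_level = kmeans(fine_level, 1)       # coarse category grouping the fine ones (the whole home)
    names = {i: Counter() for i in range(len(fine_level))}
    for p, w in obs:
        names[assign(p, fine_level)][w] += 1

    query = (5.1, 5.2)
    print(names[assign(query, fine_level)].most_common(1))   # most likely name near the query
    print([[round(c, 2) for c in cen] for cen in coarse_level])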

  4. Turn-taking cue delays in human-robot communication

    NARCIS (Netherlands)

    Cuijpers, R. H.; Van Den Goor, V. J.P.

    2017-01-01

    Fluent communication between a human and a robot relies on the use of effective turn-taking cues. In human speech staying silent after a sequence of utterances is usually accompanied by an explicit turnyielding cue to signal the end of a turn. Here we study the effect of the timing of four

  5. Fiscal 2000 report on result of R and D on robot system cooperating and coexisting with human beings. R and D on robot system cooperating and coexisting with human beings; 2000 nendo ningen kyocho kyozongata robot system kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    A highly safe and reliable robot is being developed, capable of cooperating with human beings and executing complicated operations in a human working/living space. This paper describes the fiscal 2000 results. Development of a robot motion library, which controls the motions of the humanoid robot, was continued for the extended task of providing services to people in care houses for the aged. A basic design of a personal service system using the humanoid robot was carried out, aimed at nursing assistance and at developing a portable terminal type tele-operation device. A public and a home cockpit were researched with the purpose of developing user interfaces for telexistence control. A dynamic simulator for humanoid robots was built, with motions of standing up and walking examined, in order to develop basic theories for dual-handed tasks aided by leg-arm cooperative motion. To develop a robot that properly and safely cooperates and coexists with human beings, it is essential to obtain a dynamically reasonable and natural control law, so basic studies were conducted in this direction. With the purpose of developing a motion capture and learning system, a virtual robot platform and an information acquiring interface were developed. Studies were also conducted on modeling techniques for achieving realistic material properties from high-precision image synthesis and actual images. (NEDO)

  6. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

    Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows tha...

  7. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    Science.gov (United States)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  8. Three-dimensional computer-aided human factors engineering analysis of a grafting robot.

    Science.gov (United States)

    Chiu, Y C; Chen, S; Wu, G J; Lin, Y H

    2012-07-01

    The objective of this research was to conduct a human factors engineering analysis of a grafting robot design using computer-aided 3D simulation technology. A prototype tubing-type grafting robot for fruits and vegetables was the subject of a series of case studies. To facilitate the incorporation of human models into the operating environment of the grafting robot, I-DEAS graphic software was applied to establish individual models of the grafting robot in line with Jack ergonomic analysis. Six human models (95th percentile, 50th percentile, and 5th percentile by height for both males and females) were employed to simulate the operating conditions and working postures in a real operating environment. The lower back and upper limb stresses of the operators were analyzed using the lower back analysis (LBA) and rapid upper limb assessment (RULA) functions in Jack. The experimental results showed that if a leg space is introduced under the robot, the operator can sit closer to the robot, which reduces the operator's level of lower back and upper limb stress. The proper environmental layout for Taiwanese operators, for minimum levels of lower back and upper limb stress, is to set the grafting operation 23.2 cm away from the operator at a height of 85 cm and with 45 cm between the rootstock and scion units.

  9. Human Robotic Systems (HRS): Robonaut 2 Technologies Element

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the Robonaut 2 (R2) Technology Project Element within Human Robotic Systems (HRS) is to developed advanced technologies for infusion into the Robonaut 2...

  10. Integration of a Skill-based Collaborative Mobile Robot in a Smart Cyber-Physical Environment

    DEFF Research Database (Denmark)

    Andersen, Rasmus Eckholdt; Hansen, Emil Blixt; Cerny, David

    2017-01-01

    The goal of this paper is to investigate the benefits of integrating collaborative robotic manipulators with autonomous mobile platforms for flexible part feeding processes in an Industry 4.0 production facility. The paper presents Little Helper 6 (LH6), consisting of a MiR100, UR5, a Robotiq 3......-Finger Gripper and a task level software framework, called Skill Based System (SBS). The preliminary experiments performed with LH6, demonstrate that the capabilities of skill-based programming, 3D QR based calibration, part feeding, mapping and dynamic collision avoidance are successfully executed...

  11. A Dual Launch Robotic and Human Lunar Mission Architecture

    Science.gov (United States)

    Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David

    2010-01-01

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. This

  12. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Directory of Open Access Journals (Sweden)

    Mohammed Moshiul Hoque

    2014-01-01

    Full Text Available Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to set up an eye contact phenomenon in all cases. Therefore, the robot should perform some stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.
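
    A toy sketch of the two-phase model - capture attention with an action whose strength depends on the viewing situation, then verify the capture before meeting the gaze - is given below. The action names and the mapping from situation to action are hypothetical illustrations, not the authors' implementation.

    # Sketch: two-phase eye-contact behavior (capture attention, then ensure capture).
    ACTIONS = {
        "central_fov": "turn_head",
        "near_peripheral_fov": "turn_head",
        "far_peripheral_fov": "wave_hand",        # stronger cue when barely visible
        "out_of_fov": "speak_and_wave",           # strongest cue when not visible at all
    }

    def try_eye_contact(situation, person_looked_back):
        action = ACTIONS[situation]
        print("capture attention:", action)
        if person_looked_back:                     # phase 2: ensuring the attention capture
            print("attention captured -> meet gaze and start conversation")
        else:
            print("no response -> escalate to a stronger action or retry")

    try_eye_contact("far_peripheral_fov", person_looked_back=True)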

  13. Preparing for Humans at Mars, MPPG Updates to Strategic Knowledge Gaps and Collaboration with Science Missions

    Science.gov (United States)

    Baker, John; Wargo, Michael J.; Beaty, David

    2013-01-01

    The Mars Program Planning Group (MPPG) was an agency wide effort, chartered in March 2012 by the NASA Associate Administrator for Science, in collaboration with NASA's Associate Administrator for Human Exploration and Operations, the Chief Scientist, and the Chief Technologist. NASA tasked the MPPG to develop foundations for a program-level architecture for robotic exploration of Mars that is consistent with the President's challenge of sending humans to the Mars system in the decade of the 2030s and responsive to the primary scientific goals of the 2011 NRC Decadal Survey for Planetary Science. The Mars Exploration Program Analysis Group (MEPAG) also sponsored a Precursor measurement Strategy Analysis Group (P-SAG) to revisit prior assessments of required precursor measurements for the human exploration of Mars. This paper will discuss the key results of the MPPG and P-SAG efforts to update and refine our understanding of the Strategic Knowledge Gaps (SKGs) required to successfully conduct human Mars missions.

  14. Exploiting Child-Robot Aesthetic Interaction for a Social Robot

    OpenAIRE

    Lee, Jae-Joon; Kim, Dae-Won; Kang, Bo-Yeong

    2012-01-01

    A social robot interacts and communicates with humans by using the embodied knowledge gained from interactions with its social environment. In recent years, emotion has emerged as a popular concept for designing social robots. Several studies on social robots reported an increase in robot sociability through emotional imitative interactions between the robot and humans. In this paper conventional emotional interactions are extended by exploiting the aesthetic theories that the sociability of ...

  15. Fuzzy variable impedance control based on stiffness identification for human-robot cooperation

    Science.gov (United States)

    Mao, Dachao; Yang, Wenlong; Du, Zhijiang

    2017-06-01

    This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the intention of human for co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming at regulating the output fuzzy universe based on the human arm’s stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate that the proposed algorithm is superior.
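
    The variable-impedance idea can be sketched for one degree of freedom: an admittance law M*dv/dt + D*v = F_ext whose damping D is lowered as the estimated human arm stiffness rises, so a firmer grip yields a more compliant robot. The mapping and constants below are assumptions standing in for the paper's fuzzy inference system.

    # Sketch: 1-DOF admittance control with stiffness-dependent damping (illustrative only).
    def damping_from_stiffness(k_human, d_min=5.0, d_max=40.0, k_ref=500.0):
        # Stiffer human arm -> firmer intent -> lower robot damping (more compliant robot).
        ratio = min(k_human / k_ref, 1.0)
        return d_max - (d_max - d_min) * ratio

    def admittance_step(v, f_ext, k_human, m=2.0, dt=0.01):
        # One Euler step of M*dv/dt + D*v = F_ext, returning the commanded velocity.
        d = damping_from_stiffness(k_human)
        return v + ((f_ext - d * v) / m) * dt

    v = 0.0
    for _ in range(100):                      # 1 s of a constant 10 N drag by the human
        v = admittance_step(v, f_ext=10.0, k_human=800.0)
    print(round(v, 3))                        # approaches F/D = 10/5 = 2.0 m/s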

  16. Tactical mobile robots for urban search and rescue

    Science.gov (United States)

    Blitch, John; Sidki, Nahid; Durkin, Tim

    2000-07-01

    Few disasters can inspire more compassion for victims and families than those involving structural collapse. Video clips of children's bodies pulled from earthquake-stricken cities and bombing sites tend to invoke tremendous grief and sorrow because of the totally unpredictable nature of the crisis and the lack of even the slightest degree of negligence (such as with those who choose to ignore storm warnings). Heartbreaking stories of people buried alive for days provide a visceral and horrific perspective on some of the greatest fears ever to be imagined by human beings. Current trends toward urban sprawl and increasing human discord dictate that structural collapse disasters will continue to present themselves at an alarming rate. The proliferation of domestic terrorism, HAZMAT and biological contaminants complicates the matter further and presents a daunting problem set for Urban Search and Rescue (USAR) organizations around the world. This paper amplifies the case for robot-assisted search and rescue that was first presented during the KNOBSAR project initiated at the Colorado School of Mines in 1995. It anticipates increasing technical development in mobile robot technologies and promotes their use for a wide variety of humanitarian assistance missions. Focus is placed on the development of advanced robotic systems that are employed in a complementary, tool-like fashion as opposed to traditional robotic approaches that portend to replace humans in hazardous tasks. Operational challenges for USAR are presented first, followed by a brief history of mobile robot development. The paper then presents conformal robotics as a new design paradigm with emphasis on variable geometry and volumes. A section on robot perception follows, with an initial attempt to characterize sensing in a volumetric manner. Collaborative rescue is then briefly discussed with an emphasis on marsupial operations and linked mobility. The paper concludes with an emphasis on Human Robot Interface

  17. Bio-mechanical Analysis of Human Joints and Extension of the Study to Robot

    OpenAIRE

    S. Parasuraman; Ler Shiaw Pei

    2008-01-01

    In this paper, the bio-mechanical analysis of human joints is carried out and the study is extended to the robot manipulator. This study first focuses on the kinematics of the human arm, which includes the movement of each joint in the shoulder, wrist, elbow and finger complexes. Those analyses are then extended to the design of a human robot manipulator. A simulator is built for Direct Kinematics and Inverse Kinematics of the human arm. In the simulation of Direct Kinematics, the human joint angles can...

  18. Cognitive Human-Machine Interface Applied in Remote Support for Industrial Robot Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Kosicki

    2013-10-01

    Full Text Available An attempt is currently being made to introduce industrial robots widely to Small-Medium Enterprises (SMEs). Since these enterprises usually employ too small a number of robot units to afford specialized departments for robot maintenance, they must be provided with inexpensive and immediate support remotely. This paper evaluates whether the support can be provided by means of Cognitive Info-communication - communication in which human cognitive capabilities are extended irrespective of geographical distances. The evaluations are given with the aid of an experimental system that consists of local and remote rooms, which are physically separated - a six-degree-of-freedom NACHI SH133-03 industrial robot is situated in the local room, while the operator, who supervises the robot by means of an audio-visual Cognitive Human-Machine Interface, is situated in the remote room. The results of simple experiments show that Cognitive Info-communication is not only an efficient means to provide the support remotely, but is probably also a powerful tool to enhance interaction with any data-rich environment that requires good conceptual understanding of the system's state and careful attention management. Furthermore, the paper discusses data presentation and reduction methods for data-rich environments, as well as introduces the concepts of Naturally Acquired Data and Cognitive Human-Machine Interfaces.

  19. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    Science.gov (United States)

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents' tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
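
    The phase-variable idea lends itself to a compact sketch: treat each partner's repetitive movement as a phase oscillator and add a coupling term that pulls the robot's phase toward the human's. This Kuramoto-style toy omits the event-based anchoring the paper adds, and uses assumed frequencies and gains.

    # Sketch: phase synchronization of two repetitive movements as coupled phase oscillators.
    import math

    def step(phi_robot, phi_human, w_robot, w_human, k=2.0, dt=0.01):
        # Robot phase rate = natural frequency + coupling toward the human's current phase.
        phi_robot += (w_robot + k * math.sin(phi_human - phi_robot)) * dt
        phi_human += w_human * dt
        return phi_robot % (2 * math.pi), phi_human % (2 * math.pi)

    phi_r, phi_h = 0.0, math.pi               # start half a cycle apart
    for _ in range(3000):                     # 30 s of simulated joint action
        phi_r, phi_h = step(phi_r, phi_h, w_robot=2.0, w_human=2.2)
    print(round(math.sin(phi_h - phi_r), 3))  # about 0.1: a small constant lag, i.e. phase-locked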

  20. In our own image? Emotional and neural processing differences when observing human-human vs human-robot interactions.

    Science.gov (United States)

    Wang, Yin; Quadflieg, Susanne

    2015-11-01

    Notwithstanding the significant role that human-robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human-human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal-parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. © The Author (2015). Published by Oxford University Press.

  1. An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant

    Science.gov (United States)

    Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter

    2017-09-01

    One key for successful and fluent human-robot-collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. While the disassembly process first depends on the product structure, we inform the agent using a generic approach through product models. The product model is then transformed to a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate “the problem of loosening a connection and the distribution of the work” as a search problem. The created detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs for the fulfillment of the assistance. The aim of this research is to equip robot systems with knowledge and skills to allow them to be autonomous in the performance of their assistance to finally improve the ergonomics of disassembly workstations.
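
    The step from product model to coarse disassembly plan can be illustrated with a directed graph of precedence constraints and a topological sort. The part names and constraints below are hypothetical; the agent described above additionally refines the plan through search and distribution of the work.

    # Sketch: derive a coarse disassembly sequence from precedence constraints
    # ("remove A before B"), using the standard library's topological sorter (Python 3.9+).
    from graphlib import TopologicalSorter

    precedence = {
        "cover": [],                      # nothing blocks the cover
        "screws": ["cover"],              # cover must come off before the screws are reachable
        "battery": ["screws"],
        "board": ["screws", "battery"],
    }
    # TopologicalSorter expects {node: predecessors}; a valid order removes the
    # cover first, then screws, then battery, then board.
    print(list(TopologicalSorter(precedence).static_order()))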

  2. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, as well as group intelligence when an ad hoc network is formed. Each robot is modeled using an object with a simple set of attributes and methods that define its internal states and possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator simulates a group of robots as an unsupervised learning unit and tests the learning results under scenarios with different complexities. The simulation results show that a group of robots could demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing the individual and group capabilities of robots before the design process and manufacturing of robots. Therefore, results of the project have the potential to reduce the cost and improve the efficiency of robot design and building.
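
    A minimal Kohonen (self-organizing map) update, of the kind such a simulator could use as its unsupervised learning unit, is sketched below. The map size, inputs, and parameters are assumptions for illustration only.

    # Sketch: a tiny one-dimensional Kohonen self-organizing map.
    import random

    random.seed(1)
    weights = [[random.random(), random.random()] for _ in range(5)]   # 5 map nodes, 2-D inputs

    def train(samples, epochs=50, lr=0.3, radius=1):
        for _ in range(epochs):
            for x in samples:
                # Best-matching unit = node whose weight vector is closest to the input.
                bmu = min(range(len(weights)),
                          key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
                for i in range(len(weights)):
                    if abs(i - bmu) <= radius:               # neighbourhood on the 1-D map
                        weights[i] = [w + lr * (v - w) for w, v in zip(weights[i], x)]

    # Two clusters of observations (e.g., sensed terrain features at two sites).
    train([[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.85, 0.9]])
    print([[round(w, 2) for w in node] for node in weights])  # nodes settle around the two clusters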

  3. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    Science.gov (United States)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

    This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them, or more likely to be intended for some other hearer. We will describe data collected during a series of experiments involving teams of multiple human and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator and links to a remote science team. The two subjects are performing a task (geological exploration) which requires them to speak with each other while also speaking with their assistants. The technique used here is to use a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
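
    The robot-directed vs. human-directed decision can be caricatured with a unigram language model scored against prior robot-directed utterances; the actual system uses a probabilistic context-free grammar language model inside the recognizer. All data and the threshold below are made up.

    # Sketch: classify an utterance as robot- or human-directed with a smoothed unigram model
    # trained on prior robot-directed speech (toy stand-in for the recognizer's PCFG model).
    import math
    from collections import Counter

    robot_directed = ["robot take a picture", "robot move forward two meters",
                      "robot follow me", "take a panorama of that outcrop"]
    counts = Counter(w for utt in robot_directed for w in utt.split())
    total, vocab = sum(counts.values()), len(counts)

    def avg_logprob(utterance):
        # Add-one smoothed unigram log-probability per word.
        words = utterance.split()
        return sum(math.log((counts[w] + 1) / (total + vocab)) for w in words) / len(words)

    threshold = -3.0   # would be tuned on held-out data in a real system
    for utt in ["robot take a picture of this rock", "what do you think about that ridge"]:
        print(utt, "->", "robot-directed" if avg_logprob(utt) > threshold else "human-directed")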

  4. How do walkers behave when crossing the way of a mobile robot that replicates human interaction rules?

    Science.gov (United States)

    Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien

    2018-02-01

    Previous studies showed the existence of implicit interaction rules shared by human walkers when crossing each other. In particular, each walker contributes to the collision avoidance task, and the crossing order, as set at the beginning, is preserved throughout the interaction. This order determines the adaptation strategy: the first to arrive increases his/her advance by slightly accelerating and changing his/her heading, whereas the second one slows down and moves in the opposite direction. In this study, we analyzed the behavior of human walkers crossing the trajectory of a mobile robot that was programmed to reproduce this human avoidance strategy. In contrast with a previous study, which showed that humans mostly prefer to give way to a non-reactive robot, we observed similar behaviors between human-human avoidance and human-robot avoidance when the robot replicates the human interaction rules. We discuss this result in relation to the importance of controlling robots in a human-like way in order to ease their cohabitation with humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Human Factors Consideration for the Design of Collaborative Machine Assistants

    Science.gov (United States)

    Park, Sung; Fisk, Arthur D.; Rogers, Wendy A.

    Recent improvements in technology have facilitated the use of robots and virtual humans not only in entertainment and engineering but also in the military (Hill et al., 2003), healthcare (Pollack et al., 2002), and education domains (Johnson, Rickel, & Lester, 2000). As active partners of humans, such machine assistants can take the form of a robot or a graphical representation and serve the role of a financial assistant, a health manager, or even a social partner. As a result, interactive technologies are becoming an integral component of people's everyday lives.

  6. Intelligent robotics can boost America's economic growth

    Science.gov (United States)

    Erickson, Jon D.

    1994-01-01

    A case is made for strategic investment in intelligent robotics as a part of the solution to the problem of improved global competitiveness for U.S. manufacturing, a critical industrial sector. Similar cases are made for strategic investments in intelligent robotics for field applications, construction, and service industries such as health care. The scope of the country's problems and needs is beyond the capability of the private sector alone, government alone, or academia alone to solve independently of the others. National cooperative programs in intelligent robotics are needed with the private sector supplying leadership direction and aerospace and non-aerospace industries conducting the development. Some necessary elements of such programs are outlined. The National Aeronautics and Space Administration (NASA) and the Lyndon B. Johnson Space Center (JSC) can be key players in such national cooperative programs in intelligent robotics for several reasons: (1) human space exploration missions require supervised intelligent robotics as enabling tools and, hence must develop supervised intelligent robotic systems; (2) intelligent robotic technology is being developed for space applications at JSC (but has a strong crosscutting or generic flavor) that is advancing the state of the art and is producing both skilled personnel and adaptable developmental infrastructure such as integrated testbeds; and (3) a NASA JSC Technology Investment Program in Robotics has been proposed based on commercial partnerships and collaborations for precompetitive, dual-use developments.

  7. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Directory of Open Access Journals (Sweden)

    Thierry Chaminade

    2010-07-01

    Full Text Available The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  8. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures.

    Science.gov (United States)

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-07-21

    The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

  9. FY 1998 Report on research and development project. Research and development of human-cooperative/coexisting robot systems; 1998 nendo ningen kyocho kyozongata robot system kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This R and D project is aimed at development of the human-cooperative/coexisting robot systems with high safety and reliability, capable of performing complicated works cooperatively and in a coexisting manner with humans in human working and living spaces, in order to help improve safety and efficiency in various industrial areas, improve services and convenience in manufacturing and service areas, and create new industries. The trend surveys cover humanoid robot systems, remote control systems and simulators, and the application surveys cover services for humans, basic humanoids and entertainment communication. The 1998 R and D efforts include research and development, fabrication and surveys for the following themes; (1) fabrication of robot platforms for supporting manual works, (2) development of surrounded visual display systems, (3) development of robot arm manipulation and force displaying systems, (4) development of a dynamic simulator, (5) development of a distributed software platform, (6) researches and development of computation algorithm for kinematic chain dynamics, (7) development of motion teaching system for multi-functional robots, (8) investigation of trends in robotics technology, and (9) researches and surveys of robot application. (NEDO)

  10. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for: 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033 with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  11. Human-Agent Teaming for Multi-Robot Control: A Literature Review

    Science.gov (United States)

    2013-02-01

    advent of the Google driverless car, autonomous farm equipment, and unmanned commercial aircraft (Mosher, 2012). The inexorable trend towards...because a robot cannot be automated to navigate in difficult terrain. However, this high ratio will not be sustainable if large numbers of autonomous ...Parasuraman et al., 2007). 3.5 RoboLeader Past research indicates that autonomous cooperation between robots can improve the performance of the human

  12. Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)

    Science.gov (United States)

    Baroff, Lynn; Dischinger, Charlie; Fitts, David

    2009-01-01

    Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.

  13. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    Science.gov (United States)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanovic, Selma; Francisco, Matthew

    2016-01-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the…

  14. Quantifying and Maximizing Performance of a Human-Centric Robot under Precision, Safety, and Robot Specification Constraints

    Data.gov (United States)

    National Aeronautics and Space Administration — The research project is an effort towards achieving 99.99% safety of mobile robots working alongside humans while matching the precision performance of industrial...

  15. A Meta-Analysis of Factors Influencing the Development of Human-Robot Trust

    Science.gov (United States)

    2011-12-01

    culture accounts for significant differences in trust ratings for robots; some collectivist cultures have higher trust ratings than individualistic ...HRI Occurs Other potential factors impacting trust in HRI are directly related to the environment in which HRI occurs. For example, the cultural ... cultures (Li et al., 2010). Our SMEs also indicated that team collaboration issues (e.g., communication, shared mental models) and tasking

  16. Designing Emotionally Expressive Robots

    DEFF Research Database (Denmark)

    Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna

    2017-01-01

    Socially assistive agents, be it virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid...... robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper -a highly human-like robot and Hobbit – a robot...... for robots....

  17. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    OpenAIRE

    Hooman Samani; Elham Saadatian; Natalie Pang; Doros Polydorou; Owen Noel Newton Fernando; Ryohei Nakatsu; Jeffrey Tzu Kwan Valino Koh

    2013-01-01

    In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. Ac...

  18. Gestalt Processing in Human-Robot Interaction: A Novel Account for Autism Research

    Directory of Open Access Journals (Sweden)

    Maya Dimitrova

    2015-12-01

    Full Text Available The paper presents a novel analysis focused on showing that education is possible through robotic enhancement of the Gestalt processing in children with autism, which is not comparable to alternative educational methods such as demonstration and instruction provided solely by human tutors. The paper underlines the conceptualization of cognitive processing of holistic representations traditionally named in psychology as Gestalt structures, emerging in the process of human-robot interaction in educational settings. Two cognitive processes are proposed in the present study - bounding and unfolding - and their role in Gestalt emergence is outlined. The proposed theoretical approach explains novel findings of autistic perception and gives guidelines for design of robot-assistants to the rehabilitation process.

  19. Open core control software for surgical robots.

    Science.gov (United States)

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Today, patients and doctors in the operating room are surrounded by many medical devices resulting from recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between such devices, for example between navigation systems and medical imaging devices, is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). At the same time, several surgical robots have been commercialized and are becoming common, but these surgical robots are not yet open for collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. On the other hand, most academic software developments for surgical robots are "home-made" in their research institutions and not open to the public. Therefore, open source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies based on actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation within the asynchronous data transactions through the network. In the Open Core Control software, several
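
    The layered design sketched in this record (hardware-dependent code hidden behind an abstract layer, with a separate interface class handling network messages to external devices) might be pictured roughly as follows. The class and method names below are invented for illustration and are not taken from the Open Core Control software; the network class is only a stand-in for an OpenIGTLink-style link.

```python
from abc import ABC, abstractmethod


class HardwareLayer(ABC):
    """Abstracts actuator/sensor access so the core control loop
    can be reused on different surgical robots."""

    @abstractmethod
    def read_joint_positions(self) -> list[float]: ...

    @abstractmethod
    def write_joint_torques(self, torques: list[float]) -> None: ...


class SimulatedArm(HardwareLayer):
    """Stand-in for a robot-specific driver (one such class per manipulator)."""

    def __init__(self, dof: int = 3):
        self.q = [0.0] * dof

    def read_joint_positions(self):
        return list(self.q)

    def write_joint_torques(self, torques):
        # Trivial integrator in place of real actuator dynamics.
        self.q = [qi + 0.001 * t for qi, t in zip(self.q, torques)]


class ExternalDeviceInterface:
    """Placeholder for the network interface class (OpenIGTLink-style
    messages to a navigation system); here it just buffers messages."""

    def __init__(self):
        self.outbox = []

    def publish_positions(self, q):
        self.outbox.append(("POSITION", tuple(q)))


def control_cycle(hw: HardwareLayer, link: ExternalDeviceInterface, q_ref):
    """Hardware-independent core loop: simple proportional joint control."""
    q = hw.read_joint_positions()
    tau = [5.0 * (r - qi) for r, qi in zip(q_ref, q)]
    hw.write_joint_torques(tau)
    link.publish_positions(q)


if __name__ == "__main__":
    arm, link = SimulatedArm(), ExternalDeviceInterface()
    for _ in range(100):
        control_cycle(arm, link, q_ref=[0.5, -0.2, 0.1])
    print(arm.read_joint_positions())
```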

  20. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  1. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2011-06-01

    Full Text Available This paper presents the development of a new Humanoid Robot Control Architecture (HRCA) platform based on the Common Object Request Broker Architecture (CORBA) in a developmental biped humanoid robot for real-time teleoperation tasks. The objective is to make the control platform open for collaborative teleoperation research in humanoid robotics via the internet. To generate optimal trajectories for the bipedal walk, we proposed real-time generation of an optimal gait using Genetic Algorithms (GA) to minimize the energy of the humanoid robot gait. In addition, we proposed a simplification of the kinematical solutions to generate controlled trajectories of the humanoid robot legs in teleoperation tasks. The proposed control systems and strategies were evaluated in teleoperation experiments between Australia and Japan using the humanoid robot Bonten-Maru. Additionally, we developed a user-friendly Virtual Reality (VR) user interface composed of an ultrasonic 3D mouse system and a Head Mounted Display (HMD) for the working coexistence of human and humanoid robot in teleoperation tasks. The teleoperation experiments show good performance of the proposed system and control, and also verify the good performance for the working coexistence of human and humanoid robot.
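
    As a rough, hypothetical illustration of the gait-optimization step mentioned in this record (a genetic algorithm searching for an energy-minimal gait), the sketch below evolves a small gait-parameter vector against a stand-in energy cost. The parameter encoding, cost function and GA settings are invented and do not reproduce the Bonten-Maru gait model.

```python
import random

# Hypothetical gait encoding: [step_length, step_period, hip_sway] in arbitrary units.
BOUNDS = [(0.1, 0.5), (0.4, 1.2), (0.0, 0.2)]


def energy_cost(g):
    """Stand-in energy model: penalize long, fast steps and lateral sway,
    plus a penalty for walking slower than a desired speed."""
    step, period, sway = g
    speed = step / period
    return step**2 / period + 5.0 * sway**2 + 2.0 * max(0.0, 0.35 - speed) ** 2


def clip(g):
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(g, BOUNDS)]


def genetic_search(pop_size=30, generations=60, mut=0.05):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy_cost)           # rank by energy
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            child = [v + random.gauss(0.0, mut) for v in child]  # mutation
            children.append(clip(child))
        pop = parents + children
    return min(pop, key=energy_cost)


if __name__ == "__main__":
    best = genetic_search()
    print("best gait parameters:", [round(v, 3) for v in best])
    print("energy cost:", round(energy_cost(best), 4))
```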

  3. Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales

    Science.gov (United States)

    Peschel, J.; Young, S. N.

    2017-12-01

    The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence of this growth is a broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing the appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners in general, as well as those working in fields related to emerging technologies.

  4. Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

    OpenAIRE

    Renner, Patrick; Pfeiffer, Thies

    2014-01-01

    For robots to solve complex tasks cooperatively in close interaction with humans, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task, and thus 3D gaze directions and hand movements are of particular in...

  5. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  6. Region of eye contact of humanoid Nao robot is similar to that of a human

    NARCIS (Netherlands)

    Cuijpers, R.H.; Pol, van der D.; Herrmann, G.; Pearson, M.J.; Lenz, A.; Bremner, P.; Spiers, A.; Leonards, U.

    2013-01-01

    Eye contact is an important social cue in human-human interaction, but it is unclear how easily it carries over to humanoid robots. In this study we investigated whether the tolerance of making eye contact is similar for the Nao robot as compared to human lookers. We measured the region of eye

  7. INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2005-04-01

    This report contains a detailed description of the work conducted in the first year of the project on Integrated Robot-Human Control in Mining Operations at the University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The key feature of the project is differential control of the excavator through kinematics reconfiguration, implemented with a variety of 'software machine kinematics'. This software-reconfigured excavator is better suited to executing a given digging task. The human operator retains master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations. The operator can make adjustments to this pattern in real time, as needed, to accommodate rapidly changing environmental conditions. A Bobcat® 435 excavator was retrofitted with electro-hydraulic control valve elements. The modular electronic control was tested and the basic valve characteristics were measured for each valve at the Robotics Laboratory at UNR. Position sensors were added to the individual joint control actuators, and the sensors were calibrated. An electronic central control system consisting of a portable computer, converters and electronic driver components was interfaced to the electro-hydraulic valves and position sensors. The machine is operational with or without the computer control system, depending on whether the computer interface is on or off. In preparation for emulated mining task tests, typical repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's 'Lone Tree' mine in Nevada.
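
    The hybrid control idea in this record (the operator keeps master control of the main motion parameters while the computer replays a repetitive dig pattern that the operator can adjust in real time) can be pictured with the toy blend below; the trajectory family, scaling parameters and numbers are hypothetical and not taken from the retrofitted Bobcat controller.

```python
import math

def dig_pattern(phase):
    """Pre-defined repetitive bucket-tip pattern (one dig cycle),
    parameterized by phase in [0, 1); returns (x, z) in metres."""
    x = 1.5 + 0.8 * math.cos(2 * math.pi * phase)
    z = -0.5 + 0.4 * math.sin(2 * math.pi * phase)
    return x, z

def commanded_tip(phase, depth_scale, reach_offset):
    """Blend the stored pattern with the operator's real-time adjustments:
    the operator scales dig depth and shifts reach while the computer
    keeps generating the repetitive cycle."""
    x, z = dig_pattern(phase)
    return x + reach_offset, z * depth_scale

if __name__ == "__main__":
    # Simulate one dig cycle while the operator gradually asks for a deeper cut.
    for step in range(10):
        phase = step / 10.0
        depth_scale = 1.0 + 0.05 * step      # operator joystick input (hypothetical)
        x, z = commanded_tip(phase, depth_scale, reach_offset=0.1)
        print(f"phase={phase:.1f}  tip x={x:.2f} m  z={z:.2f} m")
```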

  8. Towards collaboration between professional caregivers and robots - A preliminary study

    OpenAIRE

    Malaisé , Adrien; Nertomb , Sophie; Charpillet , François; Ivaldi , Serena

    2016-01-01

    International audience; In this paper, we address the question of which potential uses of a robot in a health-care environment are imagined by people who are not experts in robotics, and how these people imagine teaching new movements to a robot. We report the preliminary results of our investigation, in which we conducted 40 interviews with non-experts in robotics and a focus group with professional caregivers.

  9. Common Metrics for Human-Robot Interaction

    Science.gov (United States)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  10. Semiotics and Human-Robot Interaction

    OpenAIRE

    Sequeira, Joao; Ribeiro, M.Isabel

    2007-01-01

    The social barriers that still constrain the use of robots in modern societies will tend to vanish with the sophistication increase of interaction strategies. Communication and interaction between people and robots occurring in a friendly manner and being accessible to everyone, independent of their skills in robotics issues, will certainly foster the breaking of barriers. Socializing behaviors, such as following people, are relatively easy to obtain with current state of the art robotics. Ho...

  11. Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed

    Directory of Open Access Journals (Sweden)

    Benitez Raul

    2007-03-01

    Full Text Available Abstract. Background: A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods: Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results: We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion: The assist

  12. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    Science.gov (United States)

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a
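
    The controller described in this record and the preceding one (an error-based assistance update with a forgetting factor and an error weighting that ignores small, naturally variable errors) can be sketched roughly as follows. The gains, dead-band width and the toy subject model are placeholders, not the published values.

```python
import random

def error_weight(e, deadband=0.02):
    """Ignore kinematic errors within the natural variability band."""
    return 0.0 if abs(e) < deadband else 1.0

def assist_as_needed(steps=200, forget=0.95, gain=0.8):
    """Iterative assistance update: decay old assistance (forgetting factor),
    add a correction proportional to the weighted tracking error."""
    assistance, skill = 0.0, 0.0       # robot force and simulated human adaptation
    for _ in range(steps):
        # Toy subject model: a virtual impairment of 1.0 is countered by
        # learned skill, robot assistance, and some movement variability.
        error = 1.0 - skill - assistance + random.gauss(0.0, 0.01)
        assistance = forget * assistance + gain * error_weight(error) * error
        skill += 0.02 * error          # the human slowly learns, too
    return assistance, skill

if __name__ == "__main__":
    # The forgetting factor lets the robot relax faster than the human learns,
    # so assistance fades toward zero as the subject takes over.
    a, s = assist_as_needed()
    print(f"final robot assistance: {a:.3f}  learned human contribution: {s:.3f}")
```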

  13. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

    The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have...... developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human, robot teachers mediate human...... existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose...

  14. Advanced robot locomotion.

    Energy Technology Data Exchange (ETDEWEB)

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry; Feddema, John Todd; Spletzer, Barry Louis; Rose, Scott E.; Novick, David Keith; Wilson, David Gerald; Buerger, Stephen P.

    2007-01-01

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivery of special payloads to unique locations that require human-like locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state-of-the-art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  15. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    Science.gov (United States)

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.
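
    Purely as an illustration of the kind of mapping such a system performs (static body-language features to a discrete accessibility level), the snippet below thresholds two invented openness features; it is not the classifier used in the cited experiments.

```python
def accessibility_level(arm_openness, lean_toward_robot):
    """Map two hypothetical static-pose features (both normalized to [0, 1])
    to a coarse accessibility level from 1 (closed off) to 4 (very open)."""
    score = 0.6 * arm_openness + 0.4 * lean_toward_robot
    if score > 0.75:
        return 4
    if score > 0.5:
        return 3
    if score > 0.25:
        return 2
    return 1

if __name__ == "__main__":
    # Crossed arms leaning away vs. an open posture leaning in.
    print(accessibility_level(0.1, 0.2))   # -> low accessibility
    print(accessibility_level(0.9, 0.8))   # -> high accessibility
```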

  16. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    Czech Academy of Sciences Publication Activity Database

    Alexandrov, A.V.; Lippi, V.; Mergner, T.; Frolov, A. A.; Hettich, G.; Húsek, Dušan

    2017-01-01

    Vol. 11, 25 April (2017), article no. 22. ISSN 1662-5188. Institutional support: RVO:67985807. Keywords: human sensorimotor system * neuromechanics * biorobotics * motor control * eigenmovements. Subject RIV: JD - Computer Applications, Robotics. OECD field: Robotics and automatic control. Impact factor: 1.821, year: 2016

  17. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2018-05-01

    Full Text Available Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking
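
    A minimal single-inverted-pendulum sketch of the sagittal-plane, around-the-ankle balancing these tests target is given below: a proportional-derivative ankle torque responding to a support-surface tilt step. The plant parameters and gains are generic textbook-style values, not those of any benchmarked robot.

```python
import math

# Generic single-inverted-pendulum parameters (not from a specific robot).
m, h, g, J = 60.0, 1.0, 9.81, 60.0        # mass [kg], CoM height [m], gravity, inertia [kg m^2]
Kp, Kd = 1200.0, 300.0                    # PD ankle gains [Nm/rad], [Nms/rad]
dt = 0.001

def simulate(tilt_step=math.radians(3), t_end=3.0):
    """Body lean angle in space under a support-surface tilt applied at t = 1 s."""
    theta, omega = 0.0, 0.0               # lean angle [rad] and angular velocity [rad/s]
    for k in range(int(t_end / dt)):
        t = k * dt
        tilt = tilt_step if t >= 1.0 else 0.0
        # Ankle torque acts on the ankle joint angle (body relative to the tilted
        # surface), as ankle proprioception alone would: the body tends to follow
        # the surface tilt rather than stay vertical in space.
        tau = -Kp * (theta - tilt) - Kd * omega
        domega = (m * g * h * math.sin(theta) + tau) / J
        omega += domega * dt
        theta += omega * dt
    return math.degrees(theta)

if __name__ == "__main__":
    print(f"lean angle 2 s after a 3 deg surface tilt: {simulate():.2f} deg")
```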

  18. The coming revolution in personal care robotics: what does it mean for nurses?

    Science.gov (United States)

    Sharts-Hopko, Nancy C

    2014-01-01

    The business sector provides regular reportage on the development of personal care robots to enable elders and people with disabilities to remain in their homes. Technology in this area is advancing rapidly in Asia, Europe, and North America. To date, the nursing literature has not addressed how nurses will assist these vulnerable populations in the selection and use of robotic technology or how robotics could affect nursing care and patient outcomes. This article provides an overview of development in the area of personal care robotics to address societal needs reflecting demographic trends. Selected relevant issues related to the human-robotic interface, including ethical concerns, are identified. Implications for nursing education and the delivery of nursing services are identified. Collaboration with engineers in the development of personal care robotic technology has the potential to contribute to the creation of products that optimally address the needs of elders and people with disabilities.

  19. Humanlike Robots - The Upcoming Revolution in Robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-01-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots such as the Roomba vacuum cleaner and the robotic lawnmower, which don't have human shape, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state-of-the-art of the ultimate goal of biomimetics, the development of humanlike robots, the potentials and the challenges are reviewed.

  20. Humanlike robots: the upcoming revolution in robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-08-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots such as the Roomba vacuum cleaner and the robotic lawnmower, which don't have human shape, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state-of-the-art of the ultimate goal of biomimetics, the development of humanlike robots, the potentials and the challenges are reviewed.

  1. Robot friendship: Can a robot be a friend?

    DEFF Research Database (Denmark)

    Emmeche, Claus

    2014-01-01

    Friendship is used here as a conceptual vehicle for framing questions about the distinctiveness of human cognition in relation to natural systems such as other animal species and to artificial systems such as robots. By exploring this very common form of a human interpersonal relationship......, the author indicates that even though it is difficult to say something generally true about friendship among humans, distinct forms of friendship as practiced and distinct notions of friendship have been investigated in the social and human sciences and in biology. A more general conceptualization...... of friendship as a triadic relation analogous to the sign relation is suggested. Based on this the author asks how one may conceive of robot-robot and robot-human friendships; and how an interdisciplinary perspective upon that relation can contribute to analyse levels of embodied cognition in natural...

  2. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    Full Text Available This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...
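
    A minimal sketch of the image-based feedback idea behind such a human-following controller (steering from the offset and apparent size of the detected person in a monocular image) might look like the following; the detector output, gains and camera geometry are hypothetical and not taken from the thesis.

```python
def follow_command(bbox_center_x, bbox_height, image_width=640,
                   target_height=240, k_yaw=0.004, k_speed=0.002):
    """Turn the pixel-space error of a detected person into robot commands:
    yaw rate centres the person in the image, forward speed regulates the
    apparent size of the person (a proxy for following distance)."""
    yaw_rate = -k_yaw * (bbox_center_x - image_width / 2)    # rad/s
    speed = k_speed * (target_height - bbox_height)          # m/s, slows when too close
    return max(min(speed, 0.8), 0.0), yaw_rate

if __name__ == "__main__":
    # Person detected slightly right of centre and far away (small bounding box).
    print(follow_command(bbox_center_x=400, bbox_height=150))
```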

  3. Neuro-robotics from brain machine interfaces to rehabilitation robotics

    CERN Document Server

    Artemiadis

    2014-01-01

    Neuro-robotics is one of the most multidisciplinary fields of the last decades, fusing information and knowledge from neuroscience, engineering and computer science. This book focuses on the results from the strategic alliance between Neuroscience and Robotics that help the scientific community to better understand the brain as well as design robotic devices and algorithms for interfacing humans and robots. The first part of the book introduces the idea of neuro-robotics, by presenting state-of-the-art bio-inspired devices. The second part of the book focuses on human-machine interfaces for pe

  4. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  5. Making planned paths look more human-like in humanoid robot manipulation planning

    DEFF Research Database (Denmark)

    Zacharias, F.; Schlette, C.; Schmidt, F.

    2011-01-01

    It contradicts the human's expectations when humanoid robots move awkwardly during manipulation tasks. The unnatural motion may be caused by awkward start or goal configurations or by probabilistic path planning processes that are often used. This paper shows that the choice of an arm's target...... for the robot arm....

  6. Understanding Older Adult's Perceptions of Factors that Support Trust in Human and Robot Care Providers.

    Science.gov (United States)

    Stuck, Rachel E; Rogers, Wendy A

    2017-06-01

    As the population of older adults increases, so will the need for care providers, both human and robot. Trust is a key aspect of establishing and maintaining a successful older adult-care provider relationship. However, because trust is volatile, it is essential to understand it within specific contexts. This proposed mixed-methods study will explore which dimensions of trust emerge as important within the human-human and human-robot dyads of older adults and care providers. First, this study will help identify key qualities that support trust in a care provider relationship. By understanding what older adults perceive as necessary for trusting humans and robots with various care tasks, we can begin to provide recommendations based on user expectations for design to support trust.

  7. Revolutionizing collaboration through e-work, e-business, and e-service

    CERN Document Server

    Nof, Shimon Y; Jeong, Wootae; Moghaddam, Mohsen

    2015-01-01

    Collaboration in highly distributed organizations of people, robots, and autonomous systems is and must be revolutionized by engineering augmentation. The aim is to augment humans’ abilities at work and, through this augmentation, improve organizations’ abilities to accomplish their missions. This book establishes the theoretical foundations and design principles of collaborative e-Work, e-Business and e-Service, their models and applications, design and implementation techniques. The fundamental premise is that without effective e-Work and e-Services, the potential of emerging activities, such as e-Commerce, virtual manufacturing, tele-robotic medicine, automated construction, smart energy grid, cyber-supported agriculture, and intelligent transportation cannot be fully materialized. Typically, workers and managers of such value networks are frustrated with complex information systems, originally designed and built to simplify and improve performance. Even if the human-computer interface for such systems...

  8. Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

    Science.gov (United States)

    Barakova, E. I.; De Haas, M.; Kuijpers, W.; Irigoyen, N.; Betancourt, A.

    2018-01-01

    In search of better technological solutions for education, we adapted a principle from economic game theory, namely that giving help promotes collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children when playing a checkers game, combined with a social strategy, against a robot or against a computer. We found that by combining the social and game strategies the children (average age of 8.3 years) showed more empathy and social engagement with the robot, since they did not necessarily want to win against it. This finding is promising for using social strategies to create long-term relations between robots and children and to make educational tasks more engaging. An additional outcome of the study was the significant difference in the children's perception of the difficulty of the game: the game with the robot was seen as more challenging and the robot as a smarter opponent. This finding might be due to the higher perceived or expected intelligence of the robot, or because of the higher complexity of seeing patterns in a three-dimensional world.

  9. An Experimental Study of Embodied Interaction and Human Perception of Social Presence for Interactive Robots in Public Settings

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith; Vlachos, Evgenios

    2018-01-01

    The human perception of cognitive robots as social depends on many factors, including those that do not necessarily pertain to a robot’s cognitive functioning. Experience Design offers a useful framework for evaluating when participants interact with robots as products or tools and when they regard...... them as social actors. This study describes a between-participants experiment conducted at a science museum, where visitors were invited to play a game of noughts and crosses with a Baxter robot. The goal is to foster meaningful interactions that promote engagement between the human and robot...... in a museum context. Using an Experience Design framework, we tested the robot in three different conditions to better understand which factors contribute to the perception of robots as social. The experiment also outlines best practices for conducting human-robot interaction research in museum exhibitions...

  10. Using Human Gestures and Generic Skills to Instruct a Mobile Robot Arm in a Feeder Filling Scenario

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Rath; Høilund, Carsten; Krüger, Volker

    2012-01-01

    Mobile robots that have the ability to cooperate with humans are able to provide new possibilities to manufacturing industries. In this paper, we discuss our mobile robot arm that a) can provide assistance at different locations in a factory and b) can be programmed using complex human...... actions such as pointing in "Take this object". In this paper, we discuss the use of the mobile robot for a feeding scenario where a human operator specifies the parts and the feeders through pointing gestures. The system is partially built using generic robotic skills. Through extensive experiments, we...

  11. Human-automation collaboration in manufacturing: identifying key implementation factors

    OpenAIRE

    Charalambous, George; Fletcher, Sarah; Webb, Philip

    2013-01-01

    Human-automation collaboration refers to the concept of human operators and intelligent automation working together interactively within the same workspace without conventional physical separation. This concept has commanded significant attention in manufacturing because of the potential applications, such as the installation of large sub-assemblies. However, the key human factors relevant to human-automation collaboration have not yet been fully investigated. To maximise effective implement...

  12. Distributed, Collaborative Human-Robotic Networks for Outdoor Experiments in Search, Identify and Track

    Science.gov (United States)

    2011-01-11

    and its variance $\sigma^2_{\hat{U}_i}$ are determined:

    $$\hat{U}_i = \hat{u}_i + P^{u,EN}\,(P^{EN})^{-1}\left[\begin{pmatrix} E_{jc} \\ N_{jc} \end{pmatrix} - \begin{pmatrix} \hat{e}_i \\ \hat{n}_i \end{pmatrix}\right] \qquad (15)$$

    $$\sigma^2_{\hat{U}_i} = P^{u}_i - P^{u,EN}_i\,(P^{EN}_i)^{-1}\,P^{EN,u}_i \qquad (16)$$

    where...screen; the operator can click a robot's camera view to select it as the Focus Robot. The Focus Robot's camera stream is enlarged and displayed in the

  13. Children Perseverate to a Human's Actions but Not to a Robot's Actions

    Science.gov (United States)

    Moriguchi, Yusuke; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2010-01-01

    Previous research has shown that young children commit perseverative errors from their observation of another person's actions. The present study examined how social observation would lead children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one…

  14. Affective and behavioral responses to robot-initiated social touch : Towards understanding the opportunities and limitations of physical contact in human-robot interaction

    NARCIS (Netherlands)

    Willemse, C.J.A.M.; Toet, A.; Erp, J.B.F. van

    2017-01-01

    Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human–robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for

  15. Towards an Open Software Platform for Field Robots in Precision Agriculture

    DEFF Research Database (Denmark)

    Jensen, Kjeld; Larsen, Morten; Nielsen, Søren H

    2014-01-01

    Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached...... the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing...... software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored...

  16. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  17. Robots as Confederates

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    2016-01-01

    This paper addresses the use of robots in experimental research for the study of human language, human interaction, and human nature. It is argued that robots make excellent confederates that can be completely controlled, yet which engage human participants in interactions that allow us to study...... numerous linguistic and psychological variables in isolation in an ecologically valid way. Robots thus combine the advantages of observational studies and of controlled experimentation....

  18. Mini AERCam Inspection Robot for Human Space Missions

    Science.gov (United States)

    Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.

    2004-01-01

    The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.

  19. Control of the seven-degree-of-freedom upper limb exoskeleton for an improved human-robot interface

    Science.gov (United States)

    Kim, Hyunchul; Kim, Jungsuk

    2017-04-01

    This study analyzes a practical scheme for controlling an exoskeleton robot with seven degrees of freedom (DOFs) that supports natural movements of the human arm. A redundant upper limb exoskeleton robot with seven DOFs is mechanically coupled to the human body such that it becomes a natural extension of the body. If the exoskeleton robot follows the movement of the human body synchronously, the energy exchange between the human and the robot will be reduced significantly. In order to achieve this, the redundancy of the human arm, which is represented by the swivel angle, should be resolved using appropriate constraints and applied to the robot. In a redundant 7-DOF upper limb exoskeleton, the pseudoinverse of the Jacobian with secondary objective functions is widely used to resolve the redundancy that defines the desired joint angles. A secondary objective function requires the desired joint angles for the movement of the human arm, and the angles are estimated by maximizing the projection of the longest principle axis of the manipulability ellipsoid for the human arm onto the virtual destination toward the head region. Then, they are fed into the muscle model with a relative damping to achieve more realistic robot-arm movements. Various natural arm movements are recorded using a motion capture system, and the actual swivel-angle is compared to that estimated using the proposed swivel angle estimation algorithm. The results indicate that the proposed algorithm provides a precise reference for estimating the desired joint angle with an error less than 5°.
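
    The redundancy-resolution scheme referred to in this record (a Jacobian pseudoinverse with a secondary objective, so that the posture implied by the estimated swivel angle is pursued in the nullspace of the end-effector task) is commonly written as q̇ = J⁺ẋ + (I − J⁺J)k∇H(q). The tiny numerical sketch below uses a random Jacobian and a quadratic posture objective purely for illustration; it is not the exoskeleton's actual kinematic model.

```python
import numpy as np

def redundancy_resolution(J, xdot, q, q_desired, k=1.0):
    """Joint velocities for a redundant arm: primary task via the pseudoinverse,
    secondary posture objective projected into the nullspace so it cannot
    disturb the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ xdot
    # Gradient of H(q) = -0.5 * ||q - q_desired||^2 (move toward the desired posture).
    grad_H = -(q - q_desired)
    nullspace = (np.eye(J.shape[1]) - J_pinv @ J) @ (k * grad_H)
    return primary + nullspace

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = rng.standard_normal((6, 7))          # 6D task, 7 joints (one redundant DoF)
    q = np.zeros(7)
    q_des = rng.uniform(-0.3, 0.3, size=7)   # posture implied by an estimated swivel angle
    qdot = redundancy_resolution(J, xdot=np.zeros(6), q=q, q_desired=q_des)
    print("task-space velocity produced by the nullspace term:",
          np.round(J @ qdot, 8))             # ~0: the secondary goal does not move the hand
```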

  20. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    Science.gov (United States)

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  1. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    Science.gov (United States)

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
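
    A compact reservoir sketch in the spirit of the echo state approach used here (a fixed random recurrent reservoir with a trained linear readout) follows. The toy input and target data are invented, and the readout is fit by ridge regression as a generic stand-in for the authors' training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Echo state network dimensions (toy sizes).
n_in, n_res, n_out = 5, 100, 3

# Fixed random input and reservoir weights; the spectral radius is scaled below 1
# so the reservoir has fading memory (echo state property).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs, leak=0.3):
    """Collect leaky-integrator reservoir states for a sequence of input vectors."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy "sentence": a random sequence of one-hot word codes; the target is a
# stand-in predicate-argument coding (who did what to whom) per time step.
inputs = np.eye(n_in)[rng.integers(0, n_in, size=200)]
targets = rng.standard_normal((200, n_out))

S = run_reservoir(inputs)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ targets)  # ridge readout
print("readout training error:", np.round(np.mean((S @ W_out - targets) ** 2), 4))
```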

  2. Learning compliant manipulation through kinesthetic and tactile human-robot interaction.

    Science.gov (United States)

    Kronander, Klas; Billard, Aude

    2014-01-01

    Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the need of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7 DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
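
    One common way to act on such compliance demonstrations is a Cartesian impedance controller whose stiffness schedule comes from the teaching phase; the sketch below uses an invented two-phase stiffness profile simply to show where a learned profile would plug in, and is not the controller used on the WAM or LWR arms.

```python
import numpy as np

def impedance_force(x, x_des, xdot, xdot_des, K, damping_ratio=0.7, mass=1.0):
    """Cartesian impedance law F = K(x_des - x) + D(xdot_des - xdot), with the
    damping matrix derived from the (possibly time-varying) stiffness.
    A diagonal stiffness matrix is assumed."""
    D = 2 * damping_ratio * np.sqrt(mass * K)        # element-wise square root
    return K @ (x_des - x) + D @ (xdot_des - xdot)

if __name__ == "__main__":
    # Hypothetical learned schedule: stiff while tracking in free space,
    # compliant during the contact phase indicated by the teacher.
    stiff = np.diag([800.0, 800.0, 800.0])
    compliant = np.diag([150.0, 150.0, 50.0])
    for phase, K in (("free space", stiff), ("in contact", compliant)):
        F = impedance_force(x=np.zeros(3),
                            x_des=np.array([0.01, 0.0, 0.02]),
                            xdot=np.zeros(3), xdot_des=np.zeros(3), K=K)
        print(phase, "command force [N]:", np.round(F, 2))
```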

  3. You Can Leave Your Head On: Attention Management and Turn-Taking in Multi-party Interaction with a Virtual Human/Robot Duo

    NARCIS (Netherlands)

    Linssen, Jeroen; Berkhoff, Meike; Bode, Max; Rens, Eduard; Theune, Mariet; Wiltenburg, Daan; Beskow, Jonas; Peters, Christopher; Castellano, Ginevra; O'Sullivan, Carol; Leite, Iolanda; Kopp, Stefan

    In two small studies, we investigated how a virtual human/robot duo can complement each other in joint interaction with one or more users. The robot takes care of turn management while the virtual human draws attention to the robot. Our results show that having the virtual human address the robot,

  4. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Abdulaziz Abubshait

    2017-08-01

    Full Text Available Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents: mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm, and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction, but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  5. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  6. Robotic hand project

    OpenAIRE

    Karaçizmeli, Cengiz; Çakır, Gökçe; Tükel, Dilek

    2014-01-01

    In this work, a mechatronic robotic hand is controlled by position data taken from a glove with flex sensors mounted to capture finger bending of the human hand. The angular movements of the human hand's fingers are perceived and processed by a microcontroller, and the robotic hand is controlled by actuating servo motors. During the tests carried out, it was observed that the robotic hand can simulate the movements of the human hand wearing the glove. This robotic hand can be used not only...

  7. Automation and robotics human performance

    Science.gov (United States)

    Mah, Robert W.

    1990-01-01

    The scope of this report is limited to the following: (1) assessing the feasibility of the assumptions for crew productivity during the intra-vehicular activities and extra-vehicular activities; (2) estimating the appropriate level of automation and robotics to accomplish balanced man-machine, cost-effective operations in space; (3) identifying areas where conceptually different approaches to the use of people and machines can leverage the benefits of the scenarios; and (4) recommending modifications to scenarios or developing new scenarios that will improve the expected benefits. The FY89 special assessments are grouped into the five categories shown in the report. The high level system analyses for Automation & Robotics (A&R) and Human Performance (HP) were performed under the Case Studies Technology Assessment category, whereas the detailed analyses for the critical systems and high leverage development areas were performed under the appropriate operations categories (In-Space Vehicle Operations or Planetary Surface Operations). The analysis activities planned for the Science Operations technology areas were deferred to FY90 studies. The remaining activities such as analytic tool development, graphics/video demonstrations and intelligent communicating systems software architecture were performed under the Simulation & Validations category.

  8. Designing a Social Environment for Human-Robot Cooperation.

    Science.gov (United States)

    Amram, Fred M.

    Noting that work is partly a social activity, and that workers' psychological and emotional needs influence their productivity, this paper explores avenues for improving human-robot cooperation and for enhancing worker satisfaction in the environment of flexible automation. The first section of the paper offers a brief overview of the…

  9. First Application of Robot Teaching in an Existing Industry 4.0 Environment: Does It Really Work?

    Directory of Open Access Journals (Sweden)

    Astrid Weiss

    2016-07-01

    Full Text Available This article reports three case studies on the usability and acceptance of an industrial robotic prototype in the context of human-robot cooperation. The three case studies were conducted in the framework of a two-year project named AssistMe, which aims at developing different means of interaction for programming and using collaborative robots in a user-centered manner. Together with two industrial partners and a technological partner, two different application scenarios were implemented and studied with an off-the-shelf robotic system. The operators worked with the robotic prototype in laboratory conditions (two days), in a factory context (one day) and in an automotive assembly line (three weeks). In the article, the project and procedures are described in detail, including the quantitative and qualitative methodology. Our results show that close human-robot cooperation in the industrial context needs adaptive pacing mechanisms in order to avoid a change of working routines for the operators and that an off-the-shelf robotic system is still limited in terms of usability and acceptance. The touch panel, which is needed for controlling the robot, had a negative impact on the overall user experience. It creates a further intermediate layer between the user, the robot and the work piece and potentially leads to a decrease in productivity. Finally, the fear of the worker of being replaced by an improved robotic system was regularly expressed and adds an additional anthropocentric dimension to the discussion of human-robot cooperation, smart factories and the upcoming Industry 4.0.

  10. Effect of a human-type communication robot on cognitive function in elderly women living alone.

    Science.gov (United States)

    Tanaka, Masaaki; Ishii, Akira; Yamano, Emi; Ogikubo, Hiroki; Okazaki, Masatsugu; Kamimura, Kazuro; Konishi, Yasuharu; Emoto, Shigeru; Watanabe, Yasuyoshi

    2012-09-01

    Considering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone. In this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing. The Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group. This study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.

  11. Interacting With Robots to Investigate the Bases of Social Interaction.

    Science.gov (United States)

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult-if not impossible-to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to his actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction. We discuss this proposal with examples from our

  12. The ultimatum game as measurement tool for anthropomorphism in human-robot interaction

    NARCIS (Netherlands)

    Torta, E.; Dijk, van E.T.; Ruijten, P.A.M.; Cuijpers, R.H.; Herrmann, G.; Pearson, M.J.; Lenz, A.; et al., xx

    2013-01-01

    Anthropomorphism is the tendency to attribute human characteristics to non–human entities. This paper presents exploratory work to evaluate how human responses during the ultimatum game vary according to the level of anthropomorphism of the opponent, which was either a human, a humanoid robot or a

  13. Effects of Interruptibility-Aware Robot Behavior

    OpenAIRE

    Banerjee, Siddhartha; Silva, Andrew; Feigh, Karen; Chernova, Sonia

    2018-01-01

    As robots become increasingly prevalent in human environments, there will inevitably be times when a robot needs to interrupt a human to initiate an interaction. Our work introduces the first interruptibility-aware mobile robot system, and evaluates the effects of interruptibility-awareness on human task performance, robot task performance, and on human interpretation of the robot's social aptitude. Our results show that our robot is effective at predicting interruptibility at high accuracy, ...

  14. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction

    Science.gov (United States)

    2014-07-01

    however, 27 of these articles had insufficient information to calculate effect size. Authors were contacted via email and were given 5 weeks to... Factors examined in the meta-analysis include multitasking, personality, robot personality, communication mode, states, team collaboration, fatigue, capability, in-group membership, and stress.

  15. Human-Robot Interaction: Intention Recognition and Mutual Entrainment

    Science.gov (United States)

    2012-08-18

    bility, but only the human arm is modeled, with linear, low-pass-filter-type transfer functions [16]. The coupled dynamics in pHRI has been intensively ... be located inside or on the edge of the polygon. [Fragment from the report's discussion of issues in implementing the linear inverted pendulum model (LIPM) on a mobile robot:] Instead of focusing on kinesiology

  16. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.

    Science.gov (United States)

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which

  17. Knowing me knowing you: Exploring effects of culture and context on perception of robot personality

    NARCIS (Netherlands)

    Weiss, A.; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    We carry out a set of experiments to assess collaboration between human users and robots in a cross-cultural setting. This paper describes the study design and deployment of a video-based study to investigate task-dependence and cultural-background dependence of the personality trait attribution on

  18. Raven-II: an open platform for surgical robotics research.

    Science.gov (United States)

    Hannaford, Blake; Rosen, Jacob; Friedman, Diana W; King, Hawkeye; Roan, Phillip; Cheng, Lei; Glozman, Daniel; Ma, Ji; Kosari, Sina Nia; White, Lee

    2013-04-01

    The Raven-II is a platform for collaborative research on advances in surgical robotics. Seven universities have begun research using this platform. The Raven-II system has two 3-DOF spherical positioning mechanisms capable of attaching interchangeable four DOF instruments. The Raven-II software is based on open standards such as Linux and ROS to maximally facilitate software development. The mechanism is robust enough for repeated experiments and animal surgery experiments, but is not engineered to sufficient safety standards for human use. Mechanisms in place for interaction among the user community and dissemination of results include an electronic forum, an online software SVN repository, and meetings and workshops at major robotics conferences.

  19. Humanlike Robots - Synthetically Mimicking Humans

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2012-01-01

    Nature has inspired many inventions, and the field of technology based on the mimicking of, or inspiration from, nature is widely known as Biomimetics; it is increasingly leading to many new capabilities. There are numerous examples of biomimetic successes, including the copying of fins for swimming and the inspiration drawn from insect and bird flight. More and more commercial implementations of biomimetics are appearing and behaving lifelike, and applications are emerging that are important to our daily life. Making humanlike robots is the ultimate challenge for biomimetics; for many years it was considered science fiction, but such robots are becoming an engineering reality. Advances in producing such robots are allowing them to perform impressive functions and tasks. The development of such robots involves addressing many challenges and is raising concerns related to fears about the implications of their application and potential ethical issues. In this paper, the state of the art of humanlike robots, potential applications and challenges will be reviewed.

  20. Tele-operated search robot for human detection using histogram of oriented objects

    Science.gov (United States)

    Cruz, Febus Reidj G.; Avendaño, Glenn O.; Manlises, Cyrel O.; Avellanosa, James Jason G.; Abina, Jyacinth Camille F.; Masaquel, Albert M.; Siapno, Michael Lance O.; Chung, Wen-Yaw

    2017-02-01

    Disasters such as typhoons, tornadoes, and earthquakes are inevitable, and their aftermaths often include missing people. Using robots with human detection capabilities to locate the missing can dramatically reduce the harm and risk to those who work in such circumstances. This study aims to: design and build a tele-operated robot; implement in MATLAB an algorithm for the detection of humans; and create a database for human identification based on various positions, angles, light intensities, and distances from which humans will be identified. Different light intensities were created with Photoshop to simulate smoke, dust, and water-droplet conditions. After processing an image, the system indicates whether a human is detected or not. Testing with covered bodies was also conducted to assess the algorithm's robustness. Based on the results, the algorithm can detect humans when the full body is shown. For upright and lying positions, detection works from 8 feet to 20 feet. For the sitting position, detection works from 2 feet to 20 feet, with slight variances in results because of different lighting conditions. At distances greater than 20 feet, no humans can be detected or false negatives occur. For covered bodies, the algorithm can detect humans under the given circumstances. In all three positions, humans can be detected from 0 degrees to 180 degrees under normal, smoke, dust, and water-droplet conditions. This study designed and built a tele-operated robot with a MATLAB algorithm that detects humans with an overall precision of 88.30%, and a database was created for human identification under various conditions.
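
    The study implements its human detector in MATLAB; as a rough illustration of the same kind of pipeline (not the authors' code), the sketch below runs OpenCV's stock HOG-plus-linear-SVM people detector on a single camera frame. The file name, down-scaling and detector parameters are assumptions.

```python
# Illustrative sketch only: OpenCV's built-in HOG people detector standing in for
# the MATLAB detection algorithm described in the record.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_humans(frame):
    """Return bounding boxes (x, y, w, h) of detected people in a BGR frame."""
    scale = 640.0 / frame.shape[1]                  # down-scale for speed on embedded hardware
    small = cv2.resize(frame, None, fx=scale, fy=scale)
    boxes, _weights = hog.detectMultiScale(small, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    # Map detections back to original image coordinates.
    return [tuple(int(v / scale) for v in box) for box in boxes]

if __name__ == "__main__":
    img = cv2.imread("camera_frame.jpg")            # hypothetical tele-operated robot frame
    assert img is not None, "replace with a real frame path"
    print("humans detected:", detect_humans(img))
```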

  1. Human-robot interaction tests on a novel robot for gait assistance.

    Science.gov (United States)

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot assisting hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests on a young healthy subject were conducted. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, thanks to the robot's design features (reduced swinging masses, low intrinsic mechanical impedance, and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic state.

  2. Durable Tactile Glove for Human or Robot Hand

    Science.gov (United States)

    Butzer, Melissa; Diftler, Myron A.; Huber, Eric

    2010-01-01

    A glove containing force sensors has been built as a prototype of tactile sensor arrays to be worn on human hands and anthropomorphic robot hands. The force sensors of this glove are mounted inside, in protective pockets; as a result of this and other design features, the present glove is more durable than earlier models.

  3. Reflex control of robotic gait using human walking data.

    Directory of Open Access Journals (Sweden)

    Catherine A Macleod

    Full Text Available Control of human walking is not thoroughly understood, which has implications in developing suitable strategies for the retraining of a functional gait following neurological injuries such as spinal cord injury (SCI). Bipedal robots allow us to investigate simple elements of the complex nervous system to quantify their contribution to motor control. RunBot is a bipedal robot which operates through reflexes without using central pattern generators or trajectory planning algorithms. Ground contact information from the feet is used to activate motors in the legs, generating a gait cycle visually similar to that of humans. Rather than developing a more complicated biologically realistic neural system to control the robot's stepping, we have instead further simplified our model by measuring the correlation between heel contact and leg muscle activity (EMG) in human subjects during walking and from this data created filter functions transferring the sensory data into motor actions. Adaptive filtering was used to identify the unknown transfer functions which translate the contact information into muscle activation signals. Our results show a causal relationship between ground contact information from the heel and EMG, which allows us to create a minimal, linear, analogue control system for controlling walking. The derived transfer functions were applied to RunBot II as a proof of concept. The gait cycle produced was stable and controlled, which is a positive indication that the transfer functions have potential for use in the control of assistive devices for the retraining of an efficient and effective gait with potential applications in SCI rehabilitation.
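
    The key technical step in the abstract is identifying linear transfer functions from heel-contact signals to muscle activation by adaptive filtering. A minimal sketch of that idea, assuming a least-mean-squares (LMS) FIR identification and synthetic stand-in signals rather than the authors' human walking data:

```python
# Hedged sketch: LMS identification of a FIR "filter function" from heel contact to
# an EMG envelope; the signals below are synthetic placeholders, not gait recordings.
import numpy as np

def lms_identify(x, d, n_taps=50, mu=0.01):
    """Estimate FIR weights w such that (w * x) approximates the desired signal d."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]     # most recent input samples first
        y = w @ x_vec                     # current filter output
        e = d[n] - y                      # estimation error
        w += 2.0 * mu * e * x_vec         # LMS weight update
    return w

heel = np.random.randint(0, 2, 2000).astype(float)                   # toy heel-contact trace
emg = np.convolve(heel, np.hanning(30), mode="same") + 0.05 * np.random.randn(2000)
weights = lms_identify(heel, emg)                                     # identified transfer function
```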

  4. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface for remotely controlling domestic or industrial devices. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  5. Adaptive training algorithm for robot-assisted upper-arm rehabilitation, applicable to individualised and therapeutic human-robot interaction.

    Science.gov (United States)

    Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin

    2013-09-28

    Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable of not only offering more frequent and more accessible therapies but also providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for having more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment comprising a training phase followed by the actual-performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with the help from a real object when compared to just a virtual object. The 'reaching away' movements took longer to execute when compared to the 'returning towards' movements irrespective of the
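
    As a purely speculative sketch of the lead-lag adaptation described above (the GENTLE/A performance model, thresholds and gains are not given in this abstract), the execution duration of a point-to-point movement could be scaled according to whether the participant leads or lags the robot's reference motion:

```python
# Hypothetical illustration of a lead-lag adaptation rule; all numbers are invented.
def adapt_duration(duration, lead_lag, gain=0.1, d_min=1.0, d_max=10.0):
    """lead_lag > 0: user leads the reference (shorten); lead_lag < 0: user lags (lengthen)."""
    new_duration = duration * (1.0 - gain * lead_lag)
    return max(d_min, min(d_max, new_duration))

# Example: a leading user (lead_lag = +0.5) gets a faster next movement.
print(adapt_duration(6.0, 0.5))   # 5.7 s
```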

  6. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    future. A real version of Ava would not last long in a human world because she is basically a solipsist, who does not really care about humans. She cannot co-create the line humans walk along. The robots created as ‘perfect women’ (sex robots) today are very far from the ideal image of Ava...

  7. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    Science.gov (United States)

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents produce actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than both mechanical agents. There was also a relationship between the dynamics of the muscle activity and motion dynamics in stimuli. Overall our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action

  8. Movement Performance of Human-Robot Cooperation Control Based on EMG-Driven Hill-Type and Proportional Models for an Ankle Power-Assist Exoskeleton Robot.

    Science.gov (United States)

    Ao, Di; Song, Rong; Gao, JinWu

    2017-08-01

    Although the merits of electromyography (EMG)-based control of powered assistive systems have been certified, the factors that affect the performance of EMG-based human-robot cooperation, which are very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve the goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). The two control models could estimate the real-time ankle joint torque, and HNM is more accurate and can account for the change of the joint angle and muscle dynamics. Then, eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With the various levels of assist based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistant torque and kinematic signals to characterize the movement performances, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect the muscular efforts. The results demonstrated that the muscular effort and smoothness of tracking movements decreased with an increase in the assistant ratio. Compared with LPM, subjects made lower physical efforts and generated smoother movements when using HNM, which implied that a more physiologically appropriate model could enable more natural and human-like human-robot cooperation and has potential value for improvement of human-exoskeleton interaction in future applications.
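
    The linear proportional model (LPM) mentioned above can be pictured as an assistive torque proportional to the EMG linear envelopes of the dorsiflexor and plantarflexor muscles. The sketch below is illustrative only; the filter settings and gains are placeholders, not the values calibrated from maximum isometric voluntary dorsiflexion in the study.

```python
# Hedged sketch of an EMG-proportional ankle torque estimate (LPM-style); parameters invented.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg_raw, fs=1000.0, fc=6.0):
    """Rectify and low-pass filter raw EMG to obtain a linear envelope."""
    b, a = butter(2, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(emg_raw))

def lpm_torque(env_ta, env_gas, k_ta=30.0, k_gas=25.0):
    """Net ankle torque: dorsiflexor (TA) contribution minus plantarflexor (GAS) contribution."""
    return k_ta * env_ta - k_gas * env_gas

if __name__ == "__main__":
    t = np.arange(0.0, 5.0, 0.001)
    raw_ta = np.sin(2 * np.pi * 80 * t) * (1.0 + np.sin(2 * np.pi * 0.5 * t))   # toy TA EMG
    raw_gas = 0.5 * np.sin(2 * np.pi * 90 * t)                                   # toy GAS EMG
    tau = lpm_torque(emg_envelope(raw_ta), emg_envelope(raw_gas))
    print(tau[:5])
```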

  9. Mars - The relationship of robotic and human elements in the IAA International Exploration of Mars study

    Science.gov (United States)

    Marov, Mikhail YA.; Duke, Michael B.

    1993-01-01

    The roles of human and robotic missions in Mars exploration are defined in the context of the short- and long-term Mars programs. In particular, it is noted that the currently implemented and planned missions to Mars can be regarded as robotic precursor missions to human exploration. Attention is given to factors that must be considered in formulating the rationale for human flights to Mars and future human Mars settlements and justifying costly projects.

  10. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    Science.gov (United States)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  11. The Virtual Robotics Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Kress, R.L.; Love, L.J.

    1999-09-01

    The growth of the Internet has provided a unique opportunity to expand research collaborations between industry, universities, and the national laboratories. The Virtual Robotics Laboratory (VRL) is an innovative program at Oak Ridge National Laboratory (ORNL) that is focusing on the issues related to collaborative research through controlled access of laboratory equipment using the World Wide Web. The VRL will provide different levels of access to selected ORNL laboratory equipment to outside universities, industrial researchers, and elementary and secondary education programs. In the past, the ORNL Robotics and Process Systems Division has developed state-of-the-art robotic systems for the Army, NASA, Department of Energy, Department of Defense, as well as many other clients. After proof of concept, many of these systems sit dormant in the laboratories. This is not out of completion of all possible research topics, but from completion of contracts and generation of new programs. In the past, a number of visiting professors have used this equipment for their own research. However, this requires that the professor, and possibly his/her students, spend extended periods at the laboratory facility. In addition, only a very exclusive group of faculty can gain access to the laboratory and hardware. The VRL is a tool that enables extended collaborative efforts without regard to geographic limitations.

  12. The Virtual Robotics Laboratory

    International Nuclear Information System (INIS)

    Kress, R.L.; Love, L.J.

    1997-01-01

    The growth of the Internet has provided a unique opportunity to expand research collaborations between industry, universities, and the national laboratories. The Virtual Robotics Laboratory (VRL) is an innovative program at Oak Ridge National Laboratory (ORNL) that is focusing on the issues related to collaborative research through controlled access of laboratory equipment using the World Wide Web. The VRL will provide different levels of access to selected ORNL laboratory equipment to outside universities, industrial researchers, and elementary and secondary education programs. In the past, the ORNL Robotics and Process Systems Division (RPSD) has developed state-of-the-art robotic systems for the Army, NASA, Department of Energy, Department of Defense, as well as many other clients. After proof of concept, many of these systems sit dormant in the laboratories. This is not out of completion of all possible research topics, but from completion of contracts and generation of new programs. In the past, a number of visiting professors have used this equipment for their own research. However, this requires that the professor, and possibly his students, spend extended periods at the laboratory facility. In addition, only a very exclusive group of faculty can gain access to the laboratory and hardware. The VRL is a tool that enables extended collaborative efforts without regard to geographic limitations

  13. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  14. Making Humanoid Robots More Acceptable Based on the Study of Robot Characters in Animation

    Directory of Open Access Journals (Sweden)

    Fatemeh Maleki

    2015-03-01

    Full Text Available In this paper we take an approach in which humanoid robots are not considered as robots that resemble human beings realistically in appearance and action, but as robots that act and react like humans, which makes them more believable to people. Following this approach, we study robot characters in animation movies and discuss what makes some of them accepted merely as a moving body and what makes other robot characters believable as a living human. The goal of this paper is to create a rule set that describes friendly, socially acceptable, kind, cute... robots, and in this study we review example robots in popular animated movies. The extracted rules and features can be used to make real robots more acceptable.

  15. Towards an Open Software Platform for Field Robots in Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Kjeld Jensen

    2014-06-01

    Full Text Available Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored to field robots in precision agriculture will significantly decrease development time and resources required to perform experiments due to efficient reuse of existing work across projects and robot platforms. In this work we present the FroboMind software platform and evaluate the performance when applied to precision agriculture tasks.

  16. INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    George Danko

    2006-04-01

    This report describes the results of the 2nd year of a research project on the implementation of a novel human-robot control system for hydraulic machinery. Sensor and valve re-calibration experiments were conducted to improve open loop machine control. A Cartesian control example was tested both in simulation and on the machine; the results are discussed in detail. The machine tests included open-loop as well as closed-loop motion control. Both methods worked reasonably well, due to the high-quality electro-hydraulic valves used on the experimental machine. Experiments on 3-D analysis of the bucket trajectory using marker tracking software are also presented with the results obtained. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the machine. A novel, closed-loop control adjustment provides a remedy, while retaining much of the advantages of the open-loop control based on kinematics transformation. Additional analysis of previously recorded, three-dimensional working trajectories of the bucket of large mine shovels was completed. The motion patterns, when transformed into a family of curves, serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system.

  17. 5th International Conference on Automation, Robotics and Applications (ICARA 2011)

    CERN Document Server

    Bailey, Donald; Demidenko, Serge; Carnegie, Dale; Recent Advances in Robotics and Automation

    2013-01-01

    There isn’t a facet of human life that has not been touched and influenced by robots and automation. What makes robots and machines versatile is their computational intelligence. While modern intelligent sensors and powerful hardware capabilities have given a huge fillip to the growth of intelligent machines, the progress in the development of algorithms for smart interaction, collaboration and pro-activeness will result in the next quantum jump. This book deals with the recent advancements in design methodologies, algorithms and implementation techniques to incorporate intelligence in robots and automation systems. Several articles deal with navigation, localization and mapping of mobile robots, a problem that engineers and researchers are grappling with all the time. Fuzzy logic, neural networks and neuro-fuzzy based techniques for real world applications have been detailed in a few articles. This edited volume is targeted to present the latest state-of-the-art computational intelligence techniques in Rob...

  18. Human capital gains associated with robotic assisted laparoscopic pyeloplasty in children compared to open pyeloplasty.

    Science.gov (United States)

    Behan, James W; Kim, Steve S; Dorey, Frederick; De Filippo, Roger E; Chang, Andy Y; Hardy, Brian E; Koh, Chester J

    2011-10-01

    Robotic assisted laparoscopic pyeloplasty is an emerging, minimally invasive alternative to open pyeloplasty in children for ureteropelvic junction obstruction. The procedure is associated with smaller incisions and shorter hospital stays. To our knowledge previous outcome analyses have not included human capital calculations, especially regarding loss of parental workdays. We compared perioperative factors in patients who underwent robotic assisted laparoscopic and open pyeloplasty at a single institution, especially in regard to human capital changes, in an institutional cost analysis. A total of 44 patients 2 years old or older from a single institution underwent robotic assisted (37) or open (7) pyeloplasty from 2008 to 2010. We retrospectively reviewed the charts to collect demographic and perioperative data. The human capital approach was used to calculate parental productivity losses. Patients who underwent robotic assisted laparoscopic pyeloplasty had a significantly shorter average hospital length of stay (1.6 vs 2.8 days, p human capital gains, eg decreased lost parental wages, and lower hospitalization expenses. Future comparative outcome analyses in children should include financial factors such as human capital loss, which can be especially important for families with young children. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  19. Springer handbook of robotics

    CERN Document Server

    Khatib, Oussama

    2016-01-01

    The second edition of this handbook provides a state-of-the-art overview of the various aspects of the rapidly developing field of robotics. Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The credible prospect of practical robots among humans is the result of the scientific endeavour of half a century of robotic developments that established robotics as a modern scientific discipline. The ongoing vibrant expansion and strong growth of the field during the last decade has fueled this second edition of the Springer Handbook of Robotics. The first edition of the handbook soon became a landmark in robotics publishing and won the American Association of Publishers PROSE Award for Excellence in Physical Sciences & Mathematics as well as the organization’s Award for Engineering & Technology. The second edition o...

  20. Validation of a robotic balance system for investigations in the control of human standing balance.

    Science.gov (United States)

    Luu, Billy L; Huryn, Thomas P; Van der Loos, H F Machiel; Croft, Elizabeth A; Blouin, Jean-Sébastien

    2011-08-01

    Previous studies have shown that human body sway during standing approximates the mechanics of an inverted pendulum pivoted at the ankle joints. In this study, a robotic balance system incorporating a Stewart platform base was developed to provide a new technique to investigate the neural mechanisms involved in standing balance. The robotic system, programmed with the mechanics of an inverted pendulum, controlled the motion of the body in response to a change in applied ankle torque. The ability of the robotic system to replicate the load properties of standing was validated by comparing the load stiffness generated when subjects balanced their own body to the robot's mechanical load programmed with a low (concentrated-mass model) or high (distributed-mass model) inertia. The results show that static load stiffness was not significantly (p > 0.05) different for standing and the robotic system. Dynamic load stiffness for the robotic system increased with the frequency of sway, as predicted by the mechanics of an inverted pendulum, with the higher inertia being accurately matched to the load properties of the human body. This robotic balance system accurately replicated the physical model of standing and represents a useful tool to simulate the dynamics of a standing person. © 2011 IEEE
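
    A minimal sketch, assuming the single-link inverted pendulum pivoted at the ankles that the abstract says the robot is programmed with (here with a concentrated mass); the parameter values, integration scheme and the stand-in balance controller are illustrative, not those of the robotic balance system.

```python
# Illustrative inverted-pendulum load model: body sway driven by gravity and an
# applied ankle torque, integrated with simple forward Euler steps.
import numpy as np

m, h, g = 70.0, 1.0, 9.81          # body mass (kg), CoM height (m), gravity (m/s^2)
I = m * h ** 2                     # concentrated-mass inertia about the ankle joint

def step(theta, omega, ankle_torque, dt=0.001):
    """Advance the pendulum one time step given the applied plantarflexor torque."""
    alpha = (m * g * h * np.sin(theta) - ankle_torque) / I   # gravity topples, torque resists
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

theta, omega = 0.02, 0.0                                     # small initial forward lean (rad)
for _ in range(1000):
    torque = 800.0 * theta + 200.0 * omega                   # stand-in PD "ankle strategy"
    theta, omega = step(theta, omega, torque)
print(theta)
```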

  1. Estimation of Physical Human-Robot Interaction Using Cost-Effective Pneumatic Padding

    Directory of Open Access Journals (Sweden)

    André Wilkening

    2016-08-01

    Full Text Available The idea of using cost-effective pneumatic padding for sensing physical interaction between a user and wearable rehabilitation robots is not new, but until now there has not been any practically relevant realization. In this paper, we present a novel method to estimate physical human-robot interaction using a pneumatic padding based on artificial neural networks (ANNs). This estimation can serve as a rough indicator of the forces/torques applied by the user and can be used for visual feedback about the user's participation or as additional information for interaction controllers. Unlike common, mostly very expensive 6-axis force/torque sensors (FTS), the proposed sensor system can be easily integrated in the design of physical human-robot interfaces of rehabilitation robots and adapts itself to the shape of the individual patient's extremity through pressure changes in pneumatic chambers, in order to provide safe physical interaction with high user comfort. This paper describes a concept of using ANNs to estimate interaction forces/torques based on pressure variations of eight customized air-pad chambers. The ANNs were trained offline, once, using signals from a high-precision FTS, which is also used as the reference sensor for experimental validation. Experiments with three different subjects confirm the functionality of the concept and the estimation algorithm.
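
    As an illustrative sketch of the estimation idea (not the authors' network architecture or training data), a small feed-forward regressor can map the eight chamber pressures to a six-component force/torque estimate after one-time offline training against a force/torque sensor; synthetic data stands in for the recorded signals below.

```python
# Hedged sketch: pressure-to-wrench regression with a small MLP; data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pressures = rng.uniform(0.0, 1.0, size=(5000, 8))         # 8 air-pad chamber pressures (normalised)
true_map = rng.normal(size=(8, 6))                         # unknown padding-to-wrench relation
wrench = pressures @ true_map + 0.01 * rng.normal(size=(5000, 6))   # stand-in FTS readings

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(pressures, wrench)                               # one-time offline training
print(model.predict(pressures[:1]))                        # online wrench estimate from pressures
```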

  2. Simulation tools for robotics research and assessment

    Science.gov (United States)

    Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.

    2016-05-01

    The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component

  3. Interaction with Soft Robotic Tentacles

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2018-01-01

    Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. In this tabletop demonstration it is possible to interact with two soft robotic platforms that have been used in human-robot interaction experiments (also accepted to HRI'18 as a Late...

  4. The Power of Educational Robotics

    Science.gov (United States)

    Cummings, Timothy

    The purpose of this action research project was to investigate the impact a student's participation in educational robotics has on his or her performance in STEM subjects. This study attempted to utilize educational robotics as a method for increasing student achievement and engagement in STEM subjects. Over the course of 12 weeks, an after-school robotics program was offered to students. Guided by the standards and principles of VEX IQ, a leading resource in educational robotics, students worked in collaboration on creating a design for their robot, building and testing their robot, and competing in the VEX IQ Crossover Challenge. Student data was gathered through a pre-participation survey, observations from the work they performed in robotics club, their performance in STEM subject classes, and the analysis of their end-of-the-year report card. Results suggest that the students who participated in robotics club experienced a positive impact on their performance in STEM subject classes.

  5. Preliminary Framework for Human-Automation Collaboration

    International Nuclear Information System (INIS)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Spielman, Zachary Alexander

    2015-01-01

    The Department of Energy's Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet as well as utilizing automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as the human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator's use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two phased development of a preliminary HAC framework. The framework developed in the first phase was used as

  6. Preliminary Framework for Human-Automation Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The Department of Energy’s Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet as well as utilizing automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as the human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator’s use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two phased development of a preliminary HAC framework. The framework developed in the first phase was used as the

  7. Tele-operated service robots : ROSE

    NARCIS (Netherlands)

    Osch, van M.P.W.J.; Bera, D.; Hee, van K.M.; Koks, Y.; Zeegers, H.

    2014-01-01

    Service robots are robots that are intended to perform tasks normally done by humans in an environment in which humans work as well. However, they are neither required to accomplish these tasks in the same way as humans nor need to look like a human being. A tele-operated robot is controlled from a

  8. I Show You How I Like You: Emotional Human-Robot Interaction through Facial Expression and Tactile Stimulation

    DEFF Research Database (Denmark)

    Canamero, Dolores; Fredslund, Jacob

    2001-01-01

    We report work on a LEGO robot that displays different emotional expressions in response to physical stimulation, for the purpose of social interaction with humans. This is a first step toward our longer-term goal of exploring believable emotional exchanges to achieve plausible interaction ... with a simple robot. Drawing inspiration from theories of human basic emotions, we implemented several prototypical expressions in the robot's caricatured face and conducted experiments to assess the recognizability of these expressions...

  9. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  10. Impressions of Humanness for Android Robot May Represent an Endophenotype for Autism Spectrum Disorders

    Science.gov (United States)

    Kumazaki, Hirokazu; Warren, Zachary; Swanson, Amy; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Ishiguro, Hiroshi; Sarkar, Nilanjan; Minabe, Yoshio; Kikuchi, Mitsuru

    2018-01-01

    Identification of meaningful endophenotypes may be critical to unraveling the etiology and pathophysiology of autism spectrum disorders (ASD). We investigated whether impressions of "humanness" for android robot might represent a candidate characteristic of an ASD endophenotype. We used a female type of android robot with an appearance…

  11. Observation and Imitation of Actions Performed by Humans, Androids and Robots: An EMG study

    Directory of Open Access Journals (Sweden)

    Galit eHofree

    2015-06-01

    Full Text Available Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One key question this approach enables is what aspects of similarity between the observer and the observed agent facilitate motor simulation? Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are tuned to process other biological entities. In this study, we used humanoid robots with different degrees of humanlikeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents produce actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion) and an Android (biological appearance, mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than both mechanical agents. There was also a relationship between the dynamics of the muscle activity and motion dynamics in stimuli. Overall our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying

  12. Improving collaborative play between children with autism spectrum disorders and their siblings : the effectiveness of a robot-mediated intervention based on lego (R) therapy

    NARCIS (Netherlands)

    Huskens, Bibi; Palmen, Annemiek; Van der Werff, Marije; Lourens, Tino; Barakova, Emilia

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego(A (R)) therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a

  13. Fundamentals of soft robot locomotion.

    Science.gov (United States)

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somehow conflictual goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  14. The relation between people's attitudes and anxiety towards robots in human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2013-01-01

    This paper examines the relation between an interaction with a robot and people’s attitudes and emotions towards robots. In our study, participants had an acquaintance talk with a social robot, and both their general attitude and anxiety towards social robots were measured before and after the

  15. Turn-Taking Based on Information Flow for Fluent Human-Robot Interaction

    OpenAIRE

    Thomaz, Andrea L.; Chao, Crystal

    2011-01-01

    Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction, generic to context or domain. We propose a model of turn-taking, and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous ...

  16. Using Language Games as a Way to Investigate Interactional Engagement in Human-Robot Interaction

    DEFF Research Database (Denmark)

    Jensen, L. C.

    2016-01-01

    how students' engagement with a social robot can be systematically investigated and evaluated. For this purpose, I present a small user study in which a robot plays a word formation game with a human, in which engagement is determined by means of an analysis of the 'language games' played...

  17. Collaborative Assembly Operation between Two Modular Robots Based on the Optical Position Feedback

    Directory of Open Access Journals (Sweden)

    Liying Su

    2009-01-01

    Full Text Available This paper studies the cooperation between two master-slave modular robots. A cooperative robot system is set up with two modular robots and a dynamic optical measurement system, Optotrak. With Optotrak, the positions of the end effectors are measured as the optical position feedback, which is used to adjust the robots' end positions. A tri-layered motion controller is designed for the two cooperative robots. The resolved motion rate control (RMRC) method is adopted to adjust the master robot to the desired position. With the kinematic constraints of the two robots, including position and pose, joint velocity, and acceleration constraints, the two robots can cooperate well. A bolt and nut assembly experiment is executed to verify the methods.
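
    The resolved motion rate control step mentioned in this abstract maps a Cartesian position error, measured here by the optical feedback, into joint velocities via the pseudoinverse of the manipulator Jacobian. The sketch below is a minimal illustration of that loop, assuming a hypothetical two-link planar arm in place of the modular robots; it is not the paper's controller.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths of a stand-in two-link planar arm

def forward_kin(q):
    """End-effector position of the two-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Geometric Jacobian of the two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def rmrc_step(q, x_desired, x_measured, k_p=2.0):
    """One RMRC step: Cartesian error -> joint velocities via the Jacobian pseudoinverse."""
    x_dot = k_p * (x_desired - x_measured)      # proportional correction from position feedback
    return np.linalg.pinv(jacobian(q)) @ x_dot

# Toy closed loop: the "measurement" is simulated; on the real system it would come from Optotrak.
q, target, dt = np.array([0.3, 0.5]), np.array([0.5, 0.2]), 0.01
for _ in range(500):
    q = q + rmrc_step(q, target, forward_kin(q)) * dt
print("final end-effector position:", forward_kin(q))
```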

  18. Learning Human Aspects of Collaborative Software Development

    Science.gov (United States)

    Hadar, Irit; Sherman, Sofia; Hazzan, Orit

    2008-01-01

    Collaboration has become increasingly widespread in the software industry as systems have become larger and more complex, adding human complexity to the technological complexity already involved in developing software systems. To deal with this complexity, human-centric software development methods, such as Extreme Programming and other agile…

  19. Multi-Locomotion Robotic Systems New Concepts of Bio-inspired Robotics

    CERN Document Server

    Fukuda, Toshio; Sekiyama, Kosuke; Aoyama, Tadayoshi

    2012-01-01

    Nowadays, much attention is being paid to robots working in the human living environment, for example in the fields of medicine, welfare, entertainment and so on. Research is being conducted actively in a variety of fields such as artificial intelligence, cognitive engineering, sensor technology, interfaces and motion control. In the future, highly functional human-like robots are expected to be realized by integrating technologies from these various fields. The book represents new developments and advances in the field of bio-inspired robotics research, introducing the state of the art and the idea of a multi-locomotion robotic system to implement the diversity of animal motion. It covers theoretical and computational aspects of Passive Dynamic Autonomous Control (PDAC), robot motion control, multi-legged walking and climbing as well as brachiation, focusing on concrete robot systems, components and applications. In addition, gorilla-type robot systems are described as...

  20. A neural network-based exploratory learning and motor planning system for co-robots

    Directory of Open Access Journals (Sweden)

    Byron V Galbraith

    2015-07-01

    Full Text Available Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or learning by doing, an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.

  1. A neural network-based exploratory learning and motor planning system for co-robots.

    Science.gov (United States)

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
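
    As a rough illustration of the "learning by doing" idea in this abstract, the sketch below babbles random motor commands against a toy plant, records the resulting sensory changes, and fits a linear inverse model that maps a desired sensory change back to a motor command. The two-dimensional plant and all constants are invented for illustration and are not taken from the Calliope system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown plant: sensory change is a fixed linear function of the motor command.
TRUE_MIX = np.array([[0.8, 0.1],
                     [-0.2, 1.1]])

def plant(command):
    return TRUE_MIX @ command + rng.normal(scale=0.01, size=2)

# 1) Motor babbling: issue random commands and record (command, sensory change) pairs.
commands = rng.uniform(-1.0, 1.0, size=(200, 2))
outcomes = np.array([plant(c) for c in commands])

# 2) Fit an inverse model by least squares: command ~ outcome @ W.
W, *_ = np.linalg.lstsq(outcomes, commands, rcond=None)

# 3) Use the inverse model to pick the command that should produce a goal sensory change.
goal = np.array([0.4, -0.3])
command = goal @ W
print("goal:", goal, "achieved:", plant(command))
```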

  2. Hierarchical HMM based learning of navigation primitives for cooperative robotic endovascular catheterization.

    Science.gov (United States)

    Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong

    2014-01-01

    Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.
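
    As a hedged sketch of the lower layer of such a model, the code below defines a handful of discrete catheter-motion primitives with a transition matrix and per-primitive observation probabilities, then uses Viterbi decoding to recognise the primitive sequence behind a stream of observed motion symbols. The primitive names and all probabilities are invented for illustration; they are not the parameters learned in the paper.

```python
import numpy as np

primitives = ["advance", "retract", "rotate", "hold"]        # hypothetical primitives
A = np.array([[0.60, 0.10, 0.20, 0.10],                      # P(next primitive | current)
              [0.20, 0.50, 0.20, 0.10],
              [0.30, 0.10, 0.50, 0.10],
              [0.25, 0.25, 0.25, 0.25]])
B = np.array([[0.70, 0.10, 0.10, 0.10],                      # P(observed symbol | primitive)
              [0.10, 0.70, 0.10, 0.10],
              [0.10, 0.10, 0.70, 0.10],
              [0.10, 0.10, 0.10, 0.70]])
pi = np.full(4, 0.25)                                         # uniform initial distribution

def viterbi(obs):
    """Most likely primitive sequence for a list of observed motion symbols (0..3)."""
    T, N = len(obs), len(primitives)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)            # scores[i, j]: primitive i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [primitives[i] for i in reversed(path)]

print(viterbi([0, 0, 2, 2, 1, 3]))   # e.g. advance, advance, rotate, rotate, retract, hold
```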

  3. Presentation robot Advee

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, Stanislav; Hrbáček, J.; Ripel, T.; Ondroušek, V.; Hrbáček, R.; Schreiber, P.

    2012-01-01

    Roč. 18, 5/6 (2012), s. 307-322 ISSN 1802-1484 Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot * human-robot interface * localization Subject RIV: JD - Computer Applications, Robotics

  4. Detecting Biological Motion for Human–Robot Interaction: A Link between Perception and Action

    Directory of Open Access Journals (Sweden)

    Alessia Vignolo

    2017-06-01

    Full Text Available One of the fundamental skills supporting safe and comfortable interaction between humans is their capability to understand intuitively each other’s actions and intentions. At the basis of this ability is a special-purpose visual processing that human brain has developed to comprehend human motion. Among the first “building blocks” enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill mastered by human babies in the first days of their life. In this paper, we present a computational model based on the assumption that such visual ability must be based on local low-level visual motion features, which are independent of shape, such as the configuration of the body and perspective. Moreover, we implement it on the humanoid robot iCub, embedding it into a software architecture that leverages the regularities of biological motion also to control robot attention and oculomotor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.

  5. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  6. Introduction of symbiotic human-robot-cooperation in the steel sector: an example of social innovation

    Science.gov (United States)

    Colla, Valentina; Schroeder, Antonius; Buzzelli, Andrea; Abbà, Dario; Faes, Andrea; Romaniello, Lea

    2018-05-01

    The introduction of new technologies, which can support and empower human capabilities in a number of professional tasks while possibly reducing the need for cumbersome operations and the exposure to risk and professional diseases, is nowadays perceived as a must in any industrial field, process industry included. However, despite their relevant potential, new technologies are not always easy to introduce into the professional environment. A design procedure which takes into account the workers' acceptance, needs and capabilities, as well as a continuing education and training process for the personnel who must exploit the innovation, is as fundamental as technical reliability for the successful introduction of any new technology in a professional environment. An exemplary case is provided by symbiotic human-robot-cooperation. In the steel sector, the difficulties in implementing symbiotic human-robot-cooperation are greater than in the manufacturing sector, due to environmental conditions which in some cases are not favorable to robots. On the other hand, the opportunities and potential advantages are also greater, as robots could replace human operators in repetitive, heavy tasks, thereby improving workers' health and safety. The present paper provides an example of the potential and opportunities of human-robot interaction and discusses how this approach can be included in a social innovation paradigm. Moreover, an example is provided of an ongoing project funded by the Research Fund for Coal and Steel, "ROBOHARSH", which aims at implementing such an approach in the steel industry in order to address a very sensitive task, i.e. the replacement of the refractory components of the ladle sliding gate.

  7. Action and language integration: from humans to cognitive robots.

    Science.gov (United States)

    Borghi, Anna M; Cangelosi, Angelo

    2014-07-01

    The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also of natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.

  8. Webots™: Professional Mobile Robot Simulation

    Directory of Open Access Journals (Sweden)

    Olivier Michel

    2008-11-01

    Full Text Available Cyberbotics Ltd. develops Webots™, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots™ lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots™ has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.
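
    For readers unfamiliar with Webots, a controller is an ordinary program linked against the simulator's API. The fragment below is a minimal differential-drive controller sketch using the Python bindings of recent Webots releases; the device names ("left wheel motor", "right wheel motor", "ds") are hypothetical and depend entirely on the robot model defined in the world file.

```python
from controller import Robot  # Webots Python controller API

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Device names are placeholders; they must match the robot definition in the .wbt world.
left = robot.getDevice("left wheel motor")
right = robot.getDevice("right wheel motor")
distance = robot.getDevice("ds")
distance.enable(timestep)

# Switch the motors to velocity control by requesting an infinite target position.
for motor in (left, right):
    motor.setPosition(float("inf"))
    motor.setVelocity(0.0)

while robot.step(timestep) != -1:
    # Simple reactive behaviour: spin in place while an obstacle is close, otherwise drive straight.
    obstacle_close = distance.getValue() > 80.0   # threshold depends on the sensor's lookup table
    left.setVelocity(2.0)
    right.setVelocity(-2.0 if obstacle_close else 2.0)
```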

  9. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Science.gov (United States)

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under

  10. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Directory of Open Access Journals (Sweden)

    Eva Wiese

    2017-10-01

    Full Text Available Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe

  11. The Virtual Robotics Laboratory

    International Nuclear Information System (INIS)

    Kress, R.L.; Love, L.J.

    1999-01-01

    The growth of the Internet has provided a unique opportunity to expand research collaborations between industry, universities, and the national laboratories. The Virtual Robotics Laboratory (VRL) is an innovative program at Oak Ridge National Laboratory (ORNL) that is focusing on the issues related to collaborative research through controlled access to laboratory equipment using the World Wide Web. The VRL will provide different levels of access to selected ORNL laboratory secondary education programs. In the past, the ORNL Robotics and Process Systems Division has developed state-of-the-art robotic systems for the Army, NASA, Department of Energy, Department of Defense, as well as many other clients. After proof of concept, many of these systems sit dormant in the laboratories. This is not due to the completion of all possible research topics, but to the completion of contracts and the generation of new programs. In the past, a number of visiting professors have used this equipment for their own research. However, this requires that the professor, and possibly his/her students, spend extended periods at the laboratory facility. In addition, only a very exclusive group of faculty can gain access to the laboratory and hardware. The VRL is a tool that enables extended collaborative efforts without regard to geographic limitations

  12. Modelling Engagement in Multi-Party Conversations : Data-Driven Approaches to Understanding Human-Human Communication Patterns for Use in Human-Robot Interactions

    OpenAIRE

    Oertel, Catharine

    2016-01-01

    The aim of this thesis is to study human-human interaction in order to provide virtual agents and robots with the capability to engage into multi-party-conversations in a human-like-manner. The focus lies with the modelling of conversational dynamics and the appropriate realization of multi-modal feedback behaviour. For such an undertaking, it is important to understand how human-human communication unfolds in varying contexts and constellations over time. To this end, multi-modal human-human...

  13. Ghost-in-the-Machine reveals human social signals for human–robot interaction

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P.

    2015-01-01

    We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience. PMID:26582998

  14. Pragmatic Frames for Teaching and Learning in Human-Robot Interaction: Review and Challenges.

    Science.gov (United States)

    Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J; Oudeyer, Pierre-Yves

    2016-01-01

    One of the big challenges in robotics today is to learn from human users that are inexperienced in interacting with robots but yet are often used to teach skills flexibly to other humans and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used in practice, but not always explicitly, basic elements of the pragmatic frames machinery. However, we also show that pragmatic frames have so far been used in a very restricted way as compared to how they are used in human-human interaction and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent of existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on the future research direction describing the relevant key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching.

  15. 'Filigree Robotics'

    DEFF Research Database (Denmark)

    2016-01-01

    -scale 3D printed ceramics accompanied by prints, videos and ceramic probes, which introduce the material and design processes of the project. 'Filigree Robotics' experiments with a combination of the traditional ceramic technique of ‘Overforming’ with 3d Laserscan and Robotic extrusion technique...... application of reflectivity after an initial 3d print. The consideration and integration of this material practice into a digital workflow took place in an interdisciplinary collaboration of Ceramicist Flemming Tvede Hansen from KADK Superformlab and architectural researchers from CITA (Martin Tamke, Henrik...... to the creation of the form and invites for experimentation. In Filigree Robotics we combine the crafting of the mold with a parallel running generative algorithm, which is fed by a constant laserscan of the 3d surface. This algorithm analyses the topology of the mold, identifies high and low points and uses...

  16. Robotic Design Choice Overview using Co-simulation and Design Space Exploration

    DEFF Research Database (Denmark)

    Christiansen, Martin Peter; Larsen, Peter Gorm; Nyholm Jørgensen, Rasmus

    2015-01-01

    Rapid robotic system development has created a demand for multi-disciplinary methods and tools to explore and compare design alternatives. In this paper, we present a collaborative modelling technique that combines discrete-event models of controller software with continuous-time models of physical robot components. The proposed co-modelling method utilises Vienna Development Method (VDM) and Matlab for discrete-event modelling and 20-sim for continuous-time modelling. The model-based development of a mobile robot mink feeding system is used to illustrate the collaborative modelling method. Simulations are used to evaluate the robot model output response in relation to operational demands. An example of a load carrying challenge in relation to the feeding robot is presented and a design space is defined with candidate solutions in both the mechanical and software domains. Simulation results...

  17. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

    Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  18. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
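
    The band-limited measures referred to above (power in the mu, beta and low-gamma bands) can be approximated for a single channel with a Welch power spectral density estimate, as sketched below. The band limits are the conventional ranges, and the synthetic sinusoid-plus-noise signal merely stands in for real sensorimotor-cortex EEG.

```python
import numpy as np
from scipy.signal import welch

fs = 500.0                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-in for one EEG channel: mu (10 Hz) and beta (20 Hz) rhythms plus noise.
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.8 * np.sin(2 * np.pi * 20 * t)
       + np.random.default_rng(0).normal(scale=0.5, size=t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=1024)
df = freqs[1] - freqs[0]

def band_power(low, high):
    """Approximate power in [low, high) Hz by summing the PSD bins in that band."""
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].sum() * df)

for name, (low, high) in {"mu": (8, 12), "beta": (13, 30), "low gamma": (30, 50)}.items():
    print(f"{name:>9s} band power: {band_power(low, high):.3f}")
```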

  19. Sampling Based Trajectory Planning for Robots in Dynamic Human Environments

    DEFF Research Database (Denmark)

    Svenstrup, Mikael

    2010-01-01

    Open-ended human environments, such as pedestrian streets, hospital corridors, train stations etc., are places where robots start to emerge. Hence, being able to plan safe and natural trajectories in these dynamic environments is an important skill for future generations of robots. In this work...... the problem is formulated as planning a minimal cost trajectory through a potential field, defined from the perceived position and motion of persons in the environment. A modified Rapidlyexploring Random Tree (RRT) algorithm is proposed as a solution to the planning problem. The algorithm implements a new...... for the uncertainty in the dynamic environment. The planning algorithm is demonstrated in a simulated pedestrian street environment....
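
    To make the planning idea concrete, the sketch below grows a plain rapidly-exploring random tree in a 2-D workspace while accumulating, along each edge, a cost made of path length plus a Gaussian "social" potential centred on each perceived person. The person positions, cost shape and step sizes are assumptions for illustration; the modified RRT referred to in the abstract is more elaborate than this.

```python
import numpy as np

rng = np.random.default_rng(1)
persons = np.array([[2.0, 2.0], [4.0, 1.0]])           # perceived person positions (assumed)

def potential(p):
    """Social cost of a point: Gaussian bumps around each person."""
    return float(np.sum(np.exp(-np.sum((persons - p) ** 2, axis=1) / 0.5)))

def rrt(start, goal, n_iter=3000, step=0.25):
    goal = np.asarray(goal, dtype=float)
    nodes = [np.asarray(start, dtype=float)]
    parent, cost = {0: None}, {0: 0.0}
    for _ in range(n_iter):
        sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 5.0, size=2)
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))   # nearest node
        direction = sample - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
        j = len(nodes)
        nodes.append(new)
        parent[j], cost[j] = i, cost[i] + step + potential(new)           # length + social cost
        if np.linalg.norm(new - goal) < step:                              # goal reached
            path = [j]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return [nodes[k] for k in reversed(path)], cost[j]
    return None, float("inf")

path, total_cost = rrt(start=[0.0, 0.0], goal=[5.0, 3.0])
print("path nodes:", 0 if path is None else len(path), "cost:", round(total_cost, 2))
```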

  20. Advances in Robotics and Virtual Reality

    CERN Document Server

    Hassanien, Aboul

    2012-01-01

    Beyond human knowledge and reach, robotics is strongly involved in tackling the challenges of new emerging multidisciplinary fields. Together with humans, robots are busy exploring and working on the new generation of ideas and problems whose solution is otherwise impossible to find. The future is near when robots will sense, smell and touch people and their lives. Behind this practical aspect of human-robotics lies half a century of robotics research, which has transformed robotics into a modern science. Advances in Robotics and Virtual Reality is a compilation of emerging application areas of robotics. The book covers robotics' role in medicine and space exploration, and also explains the role of virtual reality as a non-destructive test bed, which constitutes a premise of further advances towards new challenges in robotics. This book, edited by two famous scientists with the support of an outstanding team of fifteen authors, is a well-suited reference for robotics researchers and scholars from related ...

  1. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

    Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes. Includes numerous figures, tables and flow charts. Discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots.

  2. Cognitive Emotional Regulation Model in Human-Robot Interaction

    OpenAIRE

    Liu, Xin; Xie, Lun; Liu, Anqi; Li, Dan

    2015-01-01

    This paper integrated Gross cognitive process into the HMM (hidden Markov model) emotional regulation method and implemented human-robot emotional interaction with facial expressions and behaviors. Here, energy was the psychological driving force of emotional transition in the cognitive emotional model. The input facial expression was translated into external energy by expression-emotion mapping. Robot’s next emotional state was determined by the cognitive energy (the stimulus after cognition...

  3. Transferring human impedance regulation skills to robots

    CERN Document Server

    Ajoudani, Arash

    2016-01-01

    This book introduces novel thinking and techniques to the control of robotic manipulation. In particular, the concept of teleimpedance control as an alternative method to bilateral force-reflecting teleoperation control for robotic manipulation is introduced. In teleimpedance control, a compound reference command is sent to the slave robot including both the desired motion trajectory and impedance profile, which are then realized by the remote controller. This concept forms a basis for the development of the controllers for a robotic arm, a dual-arm setup, a synergy-driven robotic hand, and a compliant exoskeleton for improved interaction performance.

  4. Advanced wireless mobile collaborative sensing network for tactical and strategic missions

    Science.gov (United States)

    Xu, Hao

    2017-05-01

    In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining wireless sensor networks, emerging mobile robots and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, the mobile robots can form the formation that provides the most accurate sensing results. Forming multiple mobile robots into a team can also constitute a multiple-input multiple-output (MIMO) communication system that provides a reliable, high-performance communication network.
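
    As a toy illustration of the formation-control ingredient mentioned above, the sketch below runs a standard consensus-style update that drives a few point robots toward fixed offsets from a shared virtual centre. The offsets, gains and the fully connected communication graph are assumptions chosen for brevity, not the controller of the paper.

```python
import numpy as np

# Desired offsets of each robot from the formation centre (assumed square formation).
offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
positions = np.random.default_rng(2).uniform(-3.0, 3.0, size=(4, 2))   # random starting poses

A = np.ones((4, 4)) - np.eye(4)     # fully connected communication graph (adjacency matrix)
gain, dt = 0.5, 0.05

for _ in range(400):
    centres = positions - offsets                      # each robot's estimate of the centre
    # Consensus term: sum_j A_ij * (centre_j - centre_i), written with matrix products.
    velocity = gain * (A @ centres - A.sum(axis=1)[:, None] * centres)
    positions += velocity * dt

print("positions relative to robot 0:\n", np.round(positions - positions[0], 3))
```

After convergence the relative positions approach the offset differences, i.e. the robots hold the prescribed formation.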

  5. Robot Wars: US Empire and geopolitics in the robotic age

    Science.gov (United States)

    Shaw, Ian GR

    2017-01-01

    How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial intelligence and swarming – are rewiring the spaces and logics of US empire, warfare, and geopolitics. The article begins by building a more-than-human geopolitics to de-center the role of humans in conflict and foreground a worldly understanding of robots. The article then analyzes the idea of US empire, before speculating upon how and why robots are materializing new forms of proxy war. A three-part examination of the shifting spaces of US empire then follows: (1) Swarm Wars explores the implications of miniaturized drone swarming; (2) Roboworld investigates how robots are changing US military basing strategy and producing new topological spaces of violence; and (3) The Autogenic Battle-Site reveals how autonomous robots will produce emergent, technologically event-ful sites of security and violence – revolutionizing the battlespace. The conclusion reflects on the rise of a robotic US empire and its consequences for democracy. PMID:29081605

  6. Improving Collaborative Play between Children with Autism Spectrum Disorders and Their Siblings: The Effectiveness of a Robot-Mediated Intervention Based on Lego® Therapy

    Science.gov (United States)

    Huskens, Bibi; Palmen, Annemiek; Van der Werff, Marije; Lourens, Tino; Barakova, Emilia

    2015-01-01

    The aim of the study was to investigate the effectiveness of a brief robot-mediated intervention based on Lego® therapy on improving collaborative behaviors (i.e., interaction initiations, responses, and play together) between children with ASD and their siblings during play sessions, in a therapeutic setting. A concurrent multiple baseline design…

  7. Design and Implementation of Fire Extinguisher Robot with Robotic Arm

    Directory of Open Access Journals (Sweden)

    Memon Abdul Waris

    2018-01-01

    Full Text Available A robot is a device that performs human tasks or behaves like a human being. Designing one requires expert skills and complex programming. For designing this fire-fighter robot, many sensors and motors were used. The user first sends the robot to an affected area to get a live image of the field with the help of a mobile camera, streamed over Wi-Fi to a laptop using an IP camera application. If any signs of fire appear in the image, the user directs the robot in that particular direction for confirmation. A fire sensor and a temperature sensor detect and measure the readings; after confirmation, the robot sprinkles water on the affected field. During the extinguishing process, if any obstacle comes between the prototype and the affected area, the ultrasonic sensor detects it, and in response the robotic arm picks up the obstacle and places it at another location to clear the path. Meanwhile, if any poisonous gas is present, the gas sensor detects it and indicates this by raising an alarm.
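
    The control flow described in this abstract (camera check, fire confirmation, obstacle handling, gas alarm) amounts to a simple sense-decide-act loop. The sketch below is a hypothetical, heavily simplified version of that loop with every sensor read stubbed out by a random value; on the real prototype these would come from the fire, temperature, ultrasonic and gas sensors.

```python
import random
import time

# Stubbed sensor reads standing in for GPIO/ADC access on the prototype.
def fire_detected():     return random.random() > 0.7
def temperature_c():     return random.uniform(20.0, 90.0)
def obstacle_ahead():    return random.random() > 0.8
def gas_detected():      return random.random() > 0.95

def fire_fighting_step():
    if gas_detected():
        print("ALARM: poisonous gas detected")
    if obstacle_ahead():
        print("obstacle ahead -> robotic arm picks it up and places it aside")
        return                                   # clear the path before approaching the fire
    if fire_detected() and temperature_c() > 60.0:
        print("fire confirmed -> sprinkle water")
    else:
        print("no fire confirmed -> keep scanning the camera feed")

for _ in range(5):
    fire_fighting_step()
    time.sleep(0.1)
```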

  8. Interactions With Robots: The Truths We Reveal About Ourselves.

    Science.gov (United States)

    Broadbent, Elizabeth

    2017-01-03

    In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in healthcare, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.

  9. Using Social Robots in Health Settings: Implications of Personalization on Human-Machine Communication

    Directory of Open Access Journals (Sweden)

    Lisa Tam and Rajiv Khosla

    2016-09-01

    Full Text Available In view of the shortage of healthcare workers and a growing aging population, it is worthwhile to explore the applicability of new technologies in improving the quality of healthcare and reducing its cost. However, it remains a challenge to deploy such technologies in environments where individuals have limited knowledge about how to use them. Thus, this paper explores how the social robots designed for use in health settings in Australia have sought to overcome some of the limitations through personalization. Deployed in aged care and home-based care facilities, the social robots are person-centered, emphasizing the personalization of care with human-like attributes (e.g., human appearances) to engage in reciprocal communication with users. While there have been debates over the advantages and disadvantages of personalization, this paper discusses the implications of personalization on the design of the robots for enhancing engagement, empowerment and enablement in health settings.

  10. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Directory of Open Access Journals (Sweden)

    Philipp Beckerle

    2017-05-01

    Full Text Available Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  11. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473

  12. Collaborative human-machine nuclear non-proliferation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greitzer, F.L.; Badalamente, R.V.; Stewart, T.S.

    1993-10-01

    The purpose of this paper is to report on the results of a project investigating support concepts for the information treatment needs of the International Atomic Energy Agency (IAEA, also referred to as the Agency) and its attempts to strengthen international safeguards. The aim of the research was to define user/computer interface concepts and intelligent support features that will enhance the analyst's access to voluminous and diverse information, the ability to recognize and evaluate uncertain data, and the capability to make decisions and recommendations. The objective was to explore techniques for enhancing safeguards analysis through application of (1) more effective user-computer interface designs and (2) advanced concepts involving human/system collaboration. The approach was to identify opportunities for human/system collaboration that would capitalize on human strengths and still accommodate human limitations. This paper documents the findings and describes a concept prototype, Proliferation Analysis Support System (PASS), developed for demonstration purposes. The research complements current and future efforts to enhance the information systems used by the IAEA, but has application elsewhere, as well.

  13. Model Driven Software Development for Agricultural Robotics

    DEFF Research Database (Denmark)

    Larsen, Morten

    The design and development of agricultural robots consists of mechanical, electrical and software components. All these components must be designed and combined such that the overall goal of the robot is fulfilled. The design and development of these systems require collaboration between...... processing, control engineering, etc. This thesis proposes a Model-Driven Software Development based approach to model, analyse and partially generate the software implementation of an agricultural robot. Furthermore, guidelines for modelling the architecture of agricultural robots are provided......, assisting with bridging the different engineering disciplines. Timing plays an important role in agricultural robotic applications; synchronisation of robot movement and implement actions is important in order to achieve precision spraying, mechanical weeding, individual feeding, etc. Discovering...

  14. Comparability of Conflict Opportunities in Human-to-Human and Human-to-Agent Online Collaborative Problem Solving

    Science.gov (United States)

    Rosen, Yigal

    2014-01-01

    Students' performance in human-to-human and human-to-agent collaborative problem solving assessment task is investigated in this paper. A secondary data analysis of the research reported by Rosen and Tager (2013) was conducted in order to investigate the comparability of the opportunities for conflict situations in human-to-human and…

  15. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    International Nuclear Information System (INIS)

    Lee, Jin Hyuck; Kim, Dae Hyun

    2015-01-01

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand. This application was designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.

  16. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin Hyuck; Kim, Dae Hyun [Seoul National University of Technology, Seoul (Korea, Republic of)

    2015-02-15

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand. This application was designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.
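
    A hedged sketch of the mapping step implied by this abstract: each polymer flex sensor reading is converted to a finger bend angle by a two-point calibration, and the estimated hand shape is collapsed into a command for the robot. The number of sensors, the calibration constants and the single-value gripper command are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Two-point calibration per flex sensor: raw ADC value when flat and when fully bent (assumed).
ADC_FLAT = np.array([180.0, 175.0, 182.0, 178.0, 176.0])
ADC_BENT = np.array([620.0, 610.0, 640.0, 615.0, 605.0])
MAX_ANGLE_DEG = 90.0

def hand_shape_from_adc(adc_readings):
    """Map raw flex-sensor ADC values to finger bend angles in degrees."""
    frac = (np.asarray(adc_readings, dtype=float) - ADC_FLAT) / (ADC_BENT - ADC_FLAT)
    return np.clip(frac, 0.0, 1.0) * MAX_ANGLE_DEG

def gripper_command(angles_deg):
    """Collapse the estimated hand shape into a single gripper closure ratio in [0, 1]."""
    return float(np.mean(angles_deg) / MAX_ANGLE_DEG)

readings = [300, 410, 520, 260, 200]            # one sample from the five sensors
angles = hand_shape_from_adc(readings)
print("finger angles [deg]:", np.round(angles, 1))
print("gripper closure:", round(gripper_command(angles), 2))
```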

  17. An Interactive Human Interface Arm Robot with the Development of Food Aid

    Directory of Open Access Journals (Sweden)

    NASHWAN D. Zaki

    2012-03-01

    Full Text Available A robotic system for disabled people who need support at meals is proposed. A feature of this system is that the robotic aid can communicate with the operator using speech recognition and speech synthesis functions. Another feature is that the robotic aid uses image processing, so the system can recognize the environmental situation of the dishes, cups and so on. Due to this image processing function, the operator does not need to specify the position and the posture of the dishes and target objects. Furthermore, combining speech and image processing enables friendly man-machine communication, since speech and visual information are essential in human communication.

  18. Robots in human biomechanics--a study on ankle push-off in walking.

    Science.gov (United States)

    Renjewski, Daniel; Seyfarth, André

    2012-09-01

    In biomechanics, explanatory template models are used to identify the basic mechanisms of human locomotion. However, model predictions often lack verification in a realistic environment. We present a method that uses template model mechanics as a blueprint for a bipedal robot and a corresponding computer simulation. The hypotheses derived from template model studies concerning the function of heel-off in walking are analysed and discrepancies between the template model and its real-world anchor are pointed out. Neither extending the ground clearance of the swinging leg nor an impact reduction at touch-down as an effect of heel lifting was supported by the experiments. To confirm the relevance of the experimental findings, a comparison of robot data to human walking data is discussed and we speculate on an alternative explanation of heel-off in human walking, i.e. that the push-off powers the following leg swing.

  19. Robots in human biomechanics—a study on ankle push-off in walking

    International Nuclear Information System (INIS)

    Renjewski, Daniel; Seyfarth, André

    2012-01-01

    In biomechanics, explanatory template models are used to identify the basic mechanisms of human locomotion. However, model predictions often lack verification in a realistic environment. We present a method that uses template model mechanics as a blueprint for a bipedal robot and a corresponding computer simulation. The hypotheses derived from template model studies concerning the function of heel-off in walking are analysed and discrepancies between the template model and its real-world anchor are pointed out. Neither extending the ground clearance of the swinging leg nor an impact reduction at touch-down as an effect of heel lifting was supported by the experiments. To confirm the relevance of the experimental findings, a comparison of robot data to human walking data is discussed and we speculate on an alternative explanation of heel-off in human walking, i.e. that the push-off powers the following leg swing. (paper)

  20. Collaboration between Supported Employment and Human Resource Services: Strategies for Success

    Science.gov (United States)

    Post, Michal; Campbell, Camille; Heinz, Tom; Kotsonas, Lori; Montgomery, Joyce; Storey, Keith

    2010-01-01

    The article presents the benefits of successful collaboration between supported employment agencies and human resource managers when working together to secure employment for individuals with disabilities. Two case studies are presented: one involving a successful collaboration with county human resource managers in negotiating a change in the…

  1. Affect in Human-Robot Interaction

    Science.gov (United States)

    2014-01-01

    Fragmentary excerpt from a report documentation page; the recoverable content includes citations to Werry, Rae, Dickerson, Stribling, and Ogden (2002) on the interactive competencies of children with autism playing with robots, to work on the humanoid WE-4RII (IEEE International Conference on Intelligent Robots and Systems, Edmonton, Canada), and to Moravec (1988), Mind Children, along with guiding questions such as what approaches, theories, representations, and experimental methods inform affective HRI research.

  2. ISS Robotic Student Programming

    Science.gov (United States)

    Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.

    2016-01-01

    The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.

  3. Social Robotic Experience and Media Communication Practices: An Exploration on the Emotional and Ritualized Human-technology-relations

    Directory of Open Access Journals (Sweden)

    Christine Linke

    2013-01-01

    Full Text Available This article approaches the subject of social robots by focusing on the emotional relations people establish with media and information and communication technologies (ICTs) in their everyday life. It examines human-technology relations from a social studies point of view, seeking to raise questions that connect research on human relationships with the topic of human-technology relations, especially human-humanoid relations. To explore these relations, theoretical ideas of a mediatization of communication and of a ritual interaction order are applied. Ritual theory is used in particular to bring emotion into focus as a significant dimension in analyzing social technologies. This explorative article refers to empirical findings regarding media communication practices in close relationships. It argues that, following the developed approach regarding mediatized and ritualized relational practices, useful insights for a conceptualization of the human-social robot relation can be achieved. The article concludes with remarks on the challenge of an empirical approach to human-social robot relations.

  4. Cyberbotics Ltd. Webots™: Professional Mobile Robot Simulation

    Directory of Open Access Journals (Sweden)

    Olivier Michel

    2004-03-01

    Full Text Available Cyberbotics Ltd. develops Webots™, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots™ lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots™ has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.
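
    To make the workflow concrete, here is a minimal sketch of a Webots controller written in Python against the simulator's controller API; the device names, sensor threshold and wheel speeds are assumptions that depend on the particular robot model, not values taken from the record.

        # Minimal sketch of a Webots controller in Python. Device names
        # ("left wheel motor", "right wheel motor", "ds0") and the obstacle
        # threshold are assumptions that depend on the robot model.
        from controller import Robot   # provided by the Webots installation

        robot = Robot()
        timestep = int(robot.getBasicTimeStep())

        left = robot.getDevice("left wheel motor")
        right = robot.getDevice("right wheel motor")
        sensor = robot.getDevice("ds0")              # an assumed front distance sensor
        sensor.enable(timestep)

        # Velocity control: disable position control, then command wheel speeds.
        for m in (left, right):
            m.setPosition(float("inf"))
            m.setVelocity(0.0)

        while robot.step(timestep) != -1:
            obstacle = sensor.getValue() > 80.0      # threshold depends on the sensor's lookup table
            left.setVelocity(2.0 if not obstacle else -1.0)
            right.setVelocity(2.0 if not obstacle else 1.0)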

  5. Distributed Robotics Education

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distributed … to be changed, related to multirobot control and human-robot interaction control from virtual to physical representation. The proposed system is valuable for bringing a vast number of issues into education – such as parallel programming, distribution, communication protocols, master dependency, connectivity...

  6. Natural Tasking of Robots Based on Human Interaction Cues (CD-ROM)

    National Research Council Canada - National Science Library

    Brooks, Rodney A

    2005-01-01

    1 CD-ROM; 4 3/4 in.; 207 MB. Abstract: We proposed developing the perceptual and intellectual abilities of robots so that, in the field, war-fighters can interact with them in the same natural ways as they do with their human cohorts...

  7. From sex robots to love robots: is mutual love with a robot possible?

    NARCIS (Netherlands)

    Nyholm, S.R.; Frank, L.E.; Danaher, J.; McArthur, N.

    2017-01-01

    Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about

  8. Molecular Robots Obeying Asimov's Three Laws of Robotics.

    Science.gov (United States)

    Kaminka, Gal A; Spokoini-Stern, Rachel; Amir, Yaniv; Agmon, Noa; Bachelet, Ido

    2017-01-01

    Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even to emulate the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale, when we understand robots on this scale on the basis of their interactions.

  9. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    Full Text Available In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
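
    The sketch below illustrates the general shape of such a pipeline: per-frame joint quaternions are flattened into a feature vector and a small neural-network classifier is trained on them. The skeleton size, window length and synthetic training data are placeholders, not the data or network of the paper.

        # Minimal sketch of a quaternion-feature gesture classifier; skeleton size,
        # window length, and training data are synthetic placeholders.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        N_JOINTS, N_FRAMES = 15, 30               # assumed skeleton and gesture window

        def gesture_features(quaternions):
            """quaternions: array (N_FRAMES, N_JOINTS, 4) -> flat, normalized feature vector."""
            q = np.asarray(quaternions, dtype=float)
            q = q / np.linalg.norm(q, axis=-1, keepdims=True)   # re-normalize each quaternion
            return q.reshape(-1)

        # Synthetic training set: 10 gesture classes, 20 noisy examples each.
        rng = np.random.default_rng(0)
        prototypes = rng.normal(size=(10, N_FRAMES, N_JOINTS, 4))
        X = np.stack([gesture_features(p + 0.05 * rng.normal(size=p.shape))
                      for p in prototypes for _ in range(20)])
        y = np.repeat(np.arange(10), 20)

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))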

  10. Anthropomorphism in Human–Robot Co-evolution

    Directory of Open Access Journals (Sweden)

    Luisa Damiano

    2018-03-01

    Full Text Available Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.

  11. Anthropomorphism in Human–Robot Co-evolution

    Science.gov (United States)

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

    Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth. PMID:29632507

  12. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    Science.gov (United States)

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This
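
    As a rough, time-domain illustration of this kind of model, the sketch below simulates a single-link inverted pendulum stabilized by a delayed PD feedback torque with added noise, in the spirit of the computer simulations described above; all parameter values are illustrative and are not the estimates reported in the study.

        # Minimal time-domain sketch: inverted-pendulum body, delayed PD ("neural
        # controller") feedback, additive torque noise. Parameters are illustrative.
        import numpy as np

        def simulate(T=60.0, dt=0.001, delay=0.15, Kp=1200.0, Kd=350.0, noise_std=2.0):
            m, h = 75.0, 1.0                      # body mass [kg], COM height [m]
            g, J = 9.81, 75.0 * 1.0 * 1.0         # gravity, inertia about the ankle
            n = int(T / dt)
            d = int(delay / dt)                   # feedback delay in samples
            theta = np.zeros(n)                   # body sway angle [rad]
            theta[0] = np.radians(2.0)            # small initial lean
            omega = 0.0
            rng = np.random.default_rng(0)
            for k in range(1, n):
                th_del = theta[k - 1 - d] if k - 1 - d >= 0 else 0.0
                om_del = (theta[k - 1 - d] - theta[k - 2 - d]) / dt if k - 2 - d >= 0 else 0.0
                torque = -(Kp * th_del + Kd * om_del) + noise_std * rng.normal()
                alpha = (m * g * h * np.sin(theta[k - 1]) + torque) / J
                omega += alpha * dt
                theta[k] = theta[k - 1] + omega * dt
            return theta

        sway = simulate()
        print("steady-state sway std [deg]:", np.degrees(sway[10000:].std()))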

  13. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jantsje H. Pasma

    2018-03-01

    Full Text Available The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a

  14. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human--Robot Interaction

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2016-07-01

    Full Text Available To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  15. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    Science.gov (United States)

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as the cyclic structure, and waiting for a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
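
    A toy sketch of the idea, with a small recurrent network trained on sequences that interleave instruction tokens with the behavior symbols the robot should emit, is given below; the vocabulary, behavior set, network size and data are invented placeholders, not the architecture or task of the paper.

        # Toy sketch: an Elman-style RNN trained on interleaved instruction/behavior
        # sequences. Vocabulary, behaviors, and data are invented placeholders.
        import torch
        import torch.nn as nn

        WORDS = ["<wait>", "push", "pull", "left", "right"]          # instruction tokens
        ACTS = ["<idle>", "push_l", "push_r", "pull_l", "pull_r"]    # behavior symbols

        def make_sequence(verb, side):
            """One interaction: instruction tokens, then the behavior the robot should produce."""
            x = [WORDS.index(verb), WORDS.index(side), WORDS.index("<wait>")]
            y = [ACTS.index("<idle>"), ACTS.index("<idle>"), ACTS.index(f"{verb}_{side[0]}")]
            return torch.tensor(x), torch.tensor(y)

        data = [make_sequence(v, s) for v in ("push", "pull") for s in ("left", "right")]

        class SeqModel(nn.Module):
            def __init__(self, hidden=16):
                super().__init__()
                self.embed = nn.Embedding(len(WORDS), 8)
                self.rnn = nn.RNN(8, hidden, batch_first=True)   # Elman-style recurrence
                self.out = nn.Linear(hidden, len(ACTS))

            def forward(self, tokens):
                h, _ = self.rnn(self.embed(tokens).unsqueeze(0))
                return self.out(h).squeeze(0)

        model = SeqModel()
        opt = torch.optim.Adam(model.parameters(), lr=0.05)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(300):
            for x, y in data:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()

        x, y = make_sequence("pull", "right")
        print("predicted:", [ACTS[i] for i in model(x).argmax(dim=1).tolist()])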

  16. Language for action: Motor resonance during the processing of human and robotic voices.

    Science.gov (United States)

    Di Cesare, G; Errante, A; Marchi, M; Cuccio, V

    2017-11-01

    In this fMRI study we evaluated whether the auditory processing of action verbs pronounced by a human or a robotic voice in the imperative mood differently modulates the activation of the mirror neuron system (MNS). The study produced three results. First, the activation pattern found during listening to action verbs was very similar in both the robot and human conditions. Second, the processing of action verbs compared to abstract verbs determined the activation of the fronto-parietal circuit classically involved in action goal understanding. Third, and most importantly, listening to action verbs compared to abstract verbs produced activation of the anterior part of the supramarginal gyrus (aSMG) regardless of the condition (human and robot) and in the absence of any object name. The supramarginal gyrus is a region considered to underpin hand-object interaction and associated with the processing of affordances. These results suggest that listening to action verbs may trigger the recruitment of motor representations characterizing affordances and action execution, coherently with the predictive nature of motor simulation that not only allows us to re-enact motor knowledge to understand others' actions but also prepares us for the actions we might need to carry out.

  17. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    Science.gov (United States)

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks requiring the synchronized, collaborative services of multiple robot agents to those agents, and an auditing mechanism, which ensures proper behavior of the robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…
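
    A bare-bones sketch of market-based allocation is shown below: each robot bids its private cost (here simply travel distance, an assumed cost model) for a task, and a single-round auction awards the task to the lowest bidder. This is only an illustration of the general idea, not the mechanisms or the auditing scheme proposed in the thesis.

        # Bare-bones single-round auction for task allocation; the cost model and
        # "one task per robot" rule are illustrative assumptions.
        import math

        ROBOTS = {"r1": (0.0, 0.0), "r2": (4.0, 1.0), "r3": (1.0, 5.0)}   # robot positions
        TASKS = {"t1": (3.0, 0.5), "t2": (0.5, 4.0)}                      # task locations

        def bid(robot_pos, task_pos):
            """A robot's bid: here simply its travel distance to the task."""
            return math.dist(robot_pos, task_pos)

        def auction(robots, tasks):
            """Sequentially auction each task to the cheapest currently unassigned robot."""
            free, assignment = dict(robots), {}
            for task, task_pos in tasks.items():
                winner = min(free, key=lambda r: bid(free[r], task_pos))
                assignment[task] = (winner, round(bid(free[winner], task_pos), 2))
                del free[winner]                     # one task per robot in this sketch
            return assignment

        print(auction(ROBOTS, TASKS))   # e.g. {'t1': ('r2', 1.12), 't2': ('r3', 1.12)}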

  18. Research the Gait Characteristics of Human Walking Based on a Robot Model and Experiment

    Science.gov (United States)

    He, H. J.; Zhang, D. N.; Yin, Z. W.; Shi, J. H.

    2017-02-01

    In order to research the gait characteristics of human walking in different walking ways, a robot model with a single degree of freedom is presented in this paper. The system control models of the robot are established using the Matlab/Simulink toolbox. The gait characteristics of walking straight, uphill, turning, and going up and down stairs are analyzed with the system control models. To verify the correctness of the theoretical analysis, an experiment was carried out. The comparison between theoretical and experimental results shows that they are in good agreement. The reasons for the amplitude and phase errors are analyzed and improved methods are given. The robot model and experimental approach provide a foundation for further research into the various gait characteristics of exoskeleton robots.
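
    As a loose illustration of such a single-degree-of-freedom analysis, the sketch below drives a pendulum-like swing leg with a PD hip torque toward gait-dependent target trajectories and reports the resulting amplitude error; the leg parameters, gains and target trajectories are invented placeholders, not the Simulink models of the paper.

        # Loose sketch: single-DOF swing leg tracking gait-dependent target angles.
        # Parameters, gains, and targets are invented placeholders.
        import numpy as np

        def simulate_gait(theta_target, m=8.0, l=0.45, kp=60.0, kd=8.0, dt=0.002):
            """theta_target: desired hip angle over time [rad]; returns the simulated angle."""
            g, I = 9.81, m * l * l                  # point-mass leg inertia about the hip
            theta, omega = np.zeros(len(theta_target)), 0.0
            for k in range(1, len(theta_target)):
                torque = kp * (theta_target[k - 1] - theta[k - 1]) - kd * omega
                alpha = (torque - m * g * l * np.sin(theta[k - 1])) / I
                omega += alpha * dt
                theta[k] = theta[k - 1] + omega * dt
            return theta

        t = np.arange(0, 4, 0.002)
        targets = {                                  # crude gait-dependent swing amplitudes
            "level": 0.35 * np.sin(2 * np.pi * 1.0 * t),
            "uphill": 0.45 * np.sin(2 * np.pi * 0.8 * t),
            "stairs": 0.60 * np.sin(2 * np.pi * 0.6 * t),
        }
        for gait, ref in targets.items():
            sim = simulate_gait(ref)
            print(f"{gait}: amplitude error = {abs(sim.max() - ref.max()):.3f} rad")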

  19. Human Robotic Systems (HRS): Robotic ISRU Acquisition Element

    Data.gov (United States)

    National Aeronautics and Space Administration — During 2014, the Robotic ISRU Resource Acquisition project element will develop two technologies: Exploration Ground Data Systems (xGDS); Sample Acquisition on...

  20. Robotic surgical education: a collaborative approach to training postgraduate urologists and endourology fellows.

    Science.gov (United States)

    Mirheydar, Hossein; Jones, Marklyn; Koeneman, Kenneth S; Sweet, Robert M

    2009-01-01

    Currently, robotic training for inexperienced, practicing surgeons is primarily done through industry- and/or society-sponsored one-day or weekend courses, with limited proctorship opportunities. The objective of this study was to assess the impact of an extended-proctorship program at up to 32 months of follow-up. An extended-proctorship program for robotic-assisted laparoscopic radical prostatectomy was established at our institution. The curriculum consisted of 3 phases: (1) completing an Intuitive Surgical 2-day robotic training course with company representatives; (2) serving as assistant to a trained proctor on 5 to 6 cases; and (3) performing proctored cases for up to 1 year until confidence was achieved. Participants were surveyed and asked to evaluate, on a 5-point Likert scale, their operative experience in robotics and satisfaction regarding their training. Nine of 9 participants are currently performing robotic-assisted laparoscopic radical prostatectomy (RALP) independently. Graduates of our program have performed 477 RALP cases. The mean number of cases performed within phase 3 was 20.1 (range, 5 to 40) prior to independent practice. The program received a rating of 4.2/5 for effectiveness in teaching robotic surgery skills. Our robotic program, with extended proctoring, has led to an outstanding take-rate for disseminating robotic skills in a metropolitan community.