The Transfer of Human Trust in Robot Capabilities across Tasks
@article{Soh2018TheTO,
  title={The Transfer of Human Trust in Robot Capabilities across Tasks},
  author={Harold Soh and Pan Shu and Min Chen and David Hsu},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.01866}
}
Trust is crucial in shaping human interactions with one another and with robots. This work investigates how human trust in robot capabilities transfers across tasks. We present a human-subjects study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. Our findings lead to a functional view of trust and two novel predictive models---a recurrent neural network architecture and a…
10 Citations
Multi-task trust transfer for human–robot interaction
- Computer Science, Int. J. Robotics Res.
- 2020
A human-subjects study of two distinct task domains (a Fetch robot performing household tasks and a virtual-reality simulation of an autonomous vehicle performing driving and parking maneuvers) suggests that task-dependent functional trust models capture human trust in robot capabilities more accurately, and that trust transfer across tasks can be inferred to a good degree.
Trust Dynamics and Transfer across Human-Robot Interaction Tasks: Bayesian and Neural Computational Models
- Computer Science, IJCAI
- 2019
It is found that human trust changes and transfers across tasks in a structured manner based on perceived task characteristics, that task-dependent functional trust models capture human trust in robot capabilities more accurately, and that trust transfer across tasks can be inferred to a good degree.
Trust-Aware Decision Making for Human-Robot Collaboration
- Computer Science, ACM Transactions on Human-Robot Interaction
- 2020
A computational model that integrates trust into robot decision making with human trust as a latent variable is introduced and shows that the trust-POMDP calibrates trust to improve human-robot team performance over the long term.
Robot Capability and Intention in Trust-Based Decisions Across Tasks
- Business, 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2019
Results from a human-subject study designed to explore two facets of human mental models of robots—inferred capability and intention—and their relationship to overall trust and eventual decisions suggest that calibrating overall trust alone is insufficient.
Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams
- Business, Int. J. Soc. Robotics
- 2020
A novel integrative model is presented that takes a longitudinal perspective on trust development and calibration in human–robot teams, and the concept of relationship equity is introduced.
Robot Errors in Proximate HRI
- Psychology
- 2020
Advancements within human–robot interaction generate increasing opportunities for proximate, goal-directed joint action (GDJA). However, robot errors are common and researchers must determine how to…
Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration
- Computer Science, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2020
This work addresses the problem of calibrating intention and capabilities in human-robot collaboration by adopting a decision-theoretic approach and proposing the TICC-POMDP for modeling this setting, with an associated online solver.
Reinforcement Learning with Fairness Constraints for Resource Distribution in Human-Robot Teams
- Computer Science, ArXiv
- 2019
This work introduces a multi-armed bandit algorithm with fairness constraints, where a robot distributes resources to human teammates of different skill levels, and defines fairness as a constraint on the minimum rate that each human teammate is selected throughout the task.
Robot Errors in Proximate HRI
- Materials Science
- 2020
This research presents a novel probabilistic procedure, a "shots fired" approach (RSP), that automates the labor-intensive, time-consuming, and expensive process of integrating a GDJA system into a robot.
References
Showing 1–10 of 41 references
Planning with Trust for Human-Robot Collaboration
- Computer Science, 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2018
The trust-POMDP model provides a principled approach for the robot to infer the trust of a human teammate through interaction, reason about the effect of its own actions on human behaviors, and choose actions that maximize team performance over the long term.
Towards Modeling Real-Time Trust in Asymmetric Human-Robot Collaborations
- Computer Science, ISRR
- 2013
This work proposes an operational formulation of human–robot trust on a short interaction time scale, tailored to a practical tele-robotics setting, and constructs and optimizes a predictive model of users' trust responses to discrete events, providing both a practical tool and insights into this fundamental aspect of real-time human–machine interaction.
The Role of Trust in Decision-Making for Human Robot Collaboration
- Computer Science
- 2017
This work closes the loop between modeling trust and choosing robot actions to maximize team performance, and enables the robot to both infer and influence the collaborating human’s level of trust.
OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations
- Computer Science, 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2015
Evaluation results highlight OPTIMo's advances in both prediction accuracy and responsiveness over several existing trust models, making possible the development of autonomous robots that can adapt their behaviors dynamically to actively seek greater trust and greater efficiency within future human-robot collaborations.
Trust calibration within a human-robot team: Comparing automatically generated explanations
- Computer Science, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2016
This work leverages existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate explanations, and demonstrates that the added explanation capability led to improvement in transparency, trust, and team performance.
Human-Robot Mutual Adaptation in Shared Autonomy
- Computer Science, 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2017
This work proposes a mutual adaptation formalism and shows in a human-subjects experiment that it improves human-robot team performance while retaining a high level of user trust in the robot, compared to the common approach of having the robot strictly follow participants' preferences.
Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration
- Computer Science, 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2017
It is demonstrated through a human-subjects experiment that the proposed model significantly improves human-robot team performance, compared to policies that assume complete adaptation of the human to the robot.
Human Trust in Robot Capabilities across Tasks
- Computer Science, HRI
- 2018
It is found that human trust generalization is influenced by perceived task similarity, difficulty, and robot performance.
Human-robot interaction: Developing trust in robots
- Business, 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2012
Progress to date on the development of a comprehensive human-robot trust model is described, based on an ongoing program of research.
Impact of robot failures and feedback on real-time trust
- Business, 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
- 2013
An experiment showed that trust loss due to early reliability drops is masked in traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independent of trust.