The Transfer of Human Trust in Robot Capabilities across Tasks

@article{Soh2018TheTO,
  title={The Transfer of Human Trust in Robot Capabilities across Tasks},
  author={Harold Soh and Pan Shu and Min Chen and David Hsu},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.01866}
}
Trust is crucial in shaping human interactions with one another and with robots. This work investigates how human trust in robot capabilities transfers across tasks. We present a human-subjects study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. Our findings lead to a functional view of trust and two novel predictive models---a recurrent neural network architecture and a… 
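The paper's predictive models are learned from human-subject data and are not reproduced here. As a purely illustrative sketch (hypothetical weights, not the authors' architecture), a recurrent trust model treats latent trust as a bounded state that is updated after each observed task outcome:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentTrustModel:
    """Minimal recurrent trust predictor (illustrative only).

    Latent trust h is updated after each observed task outcome.
    The weights here are fixed by hand; the paper instead learns
    its model parameters from human-subject data.
    """

    def __init__(self, w_h=0.8, w_obs=1.5, bias=-0.5):
        self.w_h = w_h      # carry-over of the previous trust state
        self.w_obs = w_obs  # weight on observed success (1) / failure (0)
        self.bias = bias

    def step(self, h, outcome):
        # New latent trust is a bounded function of old trust and outcome.
        return sigmoid(self.w_h * h + self.w_obs * outcome + self.bias)

    def run(self, outcomes, h0=0.5):
        h = h0
        trajectory = [h]
        for o in outcomes:
            h = self.step(h, o)
            trajectory.append(h)
        return trajectory

model = RecurrentTrustModel()
traj = model.run([1, 1, 0, 1])  # two successes, a failure, a success
```

In this sketch trust rises after successes and drops after the failure; the paper's contribution is learning such dynamics, and their transfer across tasks, from data.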

Citations

Multi-task trust transfer for human–robot interaction
TLDR
A human-subject study of two distinct task domains, a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers, suggests that task-dependent functional trust models capture human trust in robot capabilities more accurately and that trust transfer across tasks can be inferred to a good degree.
Trust Dynamics and Transfer across Human-Robot Interaction Tasks: Bayesian and Neural Computational Models
TLDR
It is found that human trust changes and transfers across tasks in a structured manner based on perceived task characteristics, that task-dependent functional trust models capture human trust in robot capabilities more accurately, and that trust transfer across tasks can be inferred to a good degree.
Trust-Aware Decision Making for Human-Robot Collaboration
TLDR
A computational model that integrates trust into robot decision making with human trust as a latent variable is introduced and shows that the trust-POMDP calibrates trust to improve human-robot team performance over the long term.
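The trust-POMDP's exact formulation is in the cited paper; as a hedged illustration of the core idea of treating trust as a latent variable, a Bayesian belief over a few discrete trust levels can be updated from whether the human intervenes (the observation model below uses made-up numbers):

```python
import numpy as np

# Discrete latent trust levels and a hypothetical observation model:
# P(human intervenes | trust level). Low trust -> intervention more likely.
trust_levels = np.array([0.1, 0.5, 0.9])
p_intervene = np.array([0.8, 0.4, 0.1])

def update_belief(belief, intervened):
    """Bayes update of the belief over latent trust after observing
    whether the human intervened (illustrative, not the paper's model)."""
    likelihood = p_intervene if intervened else 1.0 - p_intervene
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.ones(3) / 3                           # uniform prior
belief = update_belief(belief, intervened=False)  # human let the robot act
```

After observing no intervention, probability mass shifts toward the high-trust level; a trust-aware planner can then condition its action choices on this belief.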
Robot Capability and Intention in Trust-Based Decisions Across Tasks
TLDR
Results from a human-subject study designed to explore two facets of human mental models of robots—inferred capability and intention—and their relationship to overall trust and eventual decisions suggest that calibrating overall trust alone is insufficient.
Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams
TLDR
A novel integrative model is presented that takes a longitudinal perspective on trust development and calibration in human–robot teams and introduces the concept of relationship equity.
Robot Errors in Proximate HRI
Advancements within human–robot interaction generate increasing opportunities for proximate, goal-directed joint action (GDJA). However, robot errors are common and researchers must determine how to
Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration
TLDR
This work addresses the problem of calibrating intention and capabilities in human-robot collaboration by adopting a decision-theoretic approach and proposing the TICC-POMDP for modeling this setting, with an associated online solver.
Reinforcement Learning with Fairness Constraints for Resource Distribution in Human-Robot Teams
TLDR
This work introduces a multi-armed bandit algorithm with fairness constraints, where a robot distributes resources to human teammates of different skill levels, and defines fairness as a constraint on the minimum rate that each human teammate is selected throughout the task.
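The cited paper's exact algorithm is not reproduced here; a minimal sketch of the idea, a UCB-style bandit with a hypothetical minimum-selection-rate floor (the `min_rate`, skills, and horizon below are invented for illustration), could look like:

```python
import math
import random

def fair_ucb_select(counts, reward_sums, t, min_rate):
    """Pick a teammate (arm), forcing any teammate whose selection
    rate has fallen below min_rate; otherwise pick by a UCB score.
    Hypothetical sketch, not the paper's exact algorithm."""
    n = len(counts)
    for i in range(n):
        if counts[i] < min_rate * t:
            return i  # fairness constraint binds
    ucb = [reward_sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(n)]
    return max(range(n), key=lambda i: ucb[i])

random.seed(0)
skills = [0.9, 0.6, 0.3]          # hypothetical true success rates
counts = [0, 0, 0]
reward_sums = [0.0, 0.0, 0.0]
T = 500
for t in range(1, T + 1):
    arm = fair_ucb_select(counts, reward_sums, t, min_rate=0.2)
    counts[arm] += 1
    reward_sums[arm] += 1.0 if random.random() < skills[arm] else 0.0

rates = [c / T for c in counts]
```

Even the least skilled teammate is selected roughly 20% of the time, while the remaining pulls concentrate on the most capable one.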

References

Showing 1–10 of 41 references
Planning with Trust for Human-Robot Collaboration
TLDR
The trust-POMDP model provides a principled approach for the robot to infer the trust of a human teammate through interaction, reason about the effect of its own actions on human behaviors, and choose actions that maximize team performance over the long term.
Towards Modeling Real-Time Trust in Asymmetric Human-Robot Collaborations
TLDR
This work proposes an operational formulation of human–robot trust on a short interaction time scale, tailored to a practical tele-robotics setting, and constructs and optimizes a predictive model of users' trust responses to discrete events, providing insight into this fundamental aspect of real-time human–machine interaction.
The Role of Trust in Decision-Making for Human Robot Collaboration
TLDR
This work closes the loop between modeling trust and choosing robot actions to maximize team performance, and enables the robot to both infer and influence the collaborating human’s level of trust.
OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations
  • Anqi Xu, G. Dudek
  • Computer Science
    2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • 2015
TLDR
Evaluated results highlight OPTIMo’s advances in both prediction accuracy and responsiveness over several existing trust models, making possible the development of autonomous robots that can adapt their behaviors dynamically, to actively seek greater trust and greater efficiency within future human-robot collaborations.
Trust calibration within a human-robot team: Comparing automatically generated explanations
TLDR
This work leverages existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate explanations, and demonstrates that the added explanation capability led to improvement in transparency, trust, and team performance.
Human-Robot Mutual Adaptation in Shared Autonomy
TLDR
This work proposes a mutual adaptation formalism and shows in a human-subject experiment that it improves human-robot team performance while retaining a high level of user trust in the robot, compared to the common approach of having the robot strictly follow participants' preferences.
Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration
TLDR
It is demonstrated through a human subject experiment that the proposed model significantly improves human-robot team performance, compared to policies that assume complete adaptation of the human to the robot.
Human Trust in Robot Capabilities across Tasks
TLDR
It is found that human trust generalization is influenced by perceived task similarity, difficulty, and robot performance.
Human-robot interaction: Developing trust in robots
TLDR
Progress to date on the development of a comprehensive human-robot trust model is described, based on an ongoing program of research.
Impact of robot failures and feedback on real-time trust
TLDR
An experiment showed that trust loss due to early reliability drops is masked in traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independently of trust.
...