"Dave...I can assure you ...that it's going to be all right ..." A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships

@article{Israelsen2019DaveICA,
  title={"Dave...I can assure you ...that it's going to be all right ..." A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships},
  author={Brett W. Israelsen and Nisar R. Ahmed},
  journal={ACM Comput. Surv.},
  year={2019},
  volume={51},
  pages={113:1--113:37}
}
People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust…
