Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare

R. N. Procter, Peter Tolmie and Mark Rouncefield. ACM Transactions on Computer-Human Interaction.
The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this paper, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human-Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational… 

Collaboration and Trust in Healthcare Innovation: The eDiaMoND Case Study

An investigation into requirements for collaboration in e-Science, in the context of eDiaMoND, a Grid-enabled prototype system intended in part to support breast cancer screening, finds that accountability and visibility of work are important for trust and for the various forms of 'practical ethical action' in which clinicians routinely engage in this setting.

Trust, Professional Vision and Diagnostic Work

Empirical materials from ongoing research into forms of everyday detection and diagnosis work in healthcare settings are considered, together with how these relate to issues of trust: trust in people, in technology, in processes and in data.

Clinical AI: opacity, accountability, responsibility and liability

This review found multiple concerns about opacity, accountability, responsibility and liability among the stakeholders, namely technologists and clinicians, involved in the creation and use of AI systems (AIS) in clinical decision making.

Expanding Explainability: Towards Social Transparency in AI systems

This work suggested constitutive design elements of social transparency (ST) and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels, showing how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective action, and cultivate holistic explainability.

Patient apprehensions about the use of artificial intelligence in healthcare

The results indicate that patients have multiple concerns, including concerns related to the safety of AI, threats to patient choice, potential increases in healthcare costs, data-source bias, and data security, and that patient acceptance of AI is contingent on mitigating these possible harms.

Practicalities of Participation: Stakeholder Involvement in an Electronic Patient Records Project

This chapter considers some of the everyday practicalities of achieving participation and managing user-designer relations (UDRs) when delivering an electronic health record (EHR) project within an

Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes

The design and field evaluation of a radically new form of decision-support tool (DST) that automatically generates slides for clinicians' decision meetings, with machine prognostics subtly embedded, took inspiration from the notion of Unremarkable Computing: by augmenting users' routines, technology/AI can be of significant importance to users while remaining unobtrusive.

Understanding artificial intelligence ethics and safety

This guide identifies the potential harms caused by AI systems, proposes concrete, operationalisable measures to counteract them, and builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.

Bridging the Gap Between Ethics and Practice

Fifteen recommendations are intended to increase the reliability, safety, and trustworthiness of HCAI systems: reliable systems based on sound software engineering practices, a safety culture built through business management strategies, and trustworthy certification by independent oversight.