Primer on an ethics of AI-based decision support systems in the clinic
Matthias Braun, Patrik Hummel, Susanne Beck, Peter Dabrock. Journal of Medical Ethics.
Making good decisions in extremely complex and difficult processes and situations has always been both a key task and a challenge in the clinic, and has led to a large number of clinical, legal and ethical routines, protocols and reflections intended to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time and economic constraints, and not least further endeavours as well as…
Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.
This proof-of-concept study shows how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions.
Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems
Good decision-making is a complex endeavor, and particularly so in a health context. The possibilities for day-to-day clinical practice opened up by AI-driven clinical decision support systems…
The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory
This article aims to add to the ethical discussion by using professionalisation theory as an analytical lens for investigating how medical action at the micro level and the physician–patient relationship might be affected by the employment of ML_CDSS.
Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts
A ‘rule of disagreement’ is provided that proposes to use AI as much as possible, but retain the ability to use human second opinions to resolve disagreements between AI and physician-in-charge.
When Doctors and AI Interact: on Human Responsibility for Artificial Risks
This work discusses relational criteria of judgment in support of the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems and analyzes human responsibility in the presence of AI systems in terms of meaningful control and due diligence.
Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine
M. Arnold. Journal of Bioethical Inquiry, 2021.
This paper proposes that physicians should neither uncritically accept nor unreasonably resist developments in AI but must actively engage and contribute to the discourse, since AI will affect their roles and the nature of their work.
“I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts
This paper reconstructs different causes of conflicts between physicians and their AI-based tools and delineates normative conditions for “meaningful disagreements”, which incorporate the potential of DSS to take on more tasks and outline how the moral responsibility of a physician can be preserved in an increasingly automated clinical work environment.
Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations
The ethical landscape of AI in digital health is portrayed, providing a snapshot to guide future development by outlining which ethical principles have been addressed, how intensively they have been studied, and the correlations among them.
Randomised controlled trials in medical AI: ethical considerations
This paper sets out to develop a systematic account of the ethics of AI RCTs by focusing on the moral principles of clinical equipoise, informed consent and fairness, to animate further debate on the (research) ethics of medical AI.


Computer knows best? The need for value-flexibility in medical AI
This paper argues that use of this type of system creates both important risks and significant opportunities for promoting shared decision making, and if value judgements are fixed and covert in AI systems, then the authors risk a shift back to more paternalistic medical care.
Shared Decision Making: A Model for Clinical Practice
A model of shared decision making based on choice talk, option talk and decision talk is proposed; it is practical, easy to remember, and can act as a guide to skill development.
Principles alone cannot guarantee ethical AI
Brent Mittelstadt highlights significant differences between medical practice and AI development which suggest that a principled approach may not work in the case of AI.
Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies
Artificial intelligence technology (or AI) has developed rapidly during the past decade, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. A growing…
The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care
A reinforcement learning agent, the AI Clinician, can assist physicians by providing individualized and clinically interpretable treatment decisions to improve patient outcomes, extracting implicit knowledge from an amount of patient data that exceeds by many-fold the lifetime experience of human clinicians.
Neonatal intensive care decision support systems using artificial intelligence techniques: a systematic review
The different technologies used in neonatal decision support systems (DSS), including cognitive analysis, artificial neural networks, data mining techniques, and multi-agent systems, are reviewed, and their role in patient diagnosis, prognosis, monitoring, and healthcare management is highlighted.
Framing the challenges of artificial intelligence in medicine
On a clear January morning in Florida, a Tesla enthusiast and network entrepreneur was driving his new Tesla Model S on US Highway 27A, returning from a family trip, when it crashed into the trailer of a truck turning left; the driver was killed, leaving behind his family and his high-tech business.
Meaningful Human Control over Autonomous Systems: A Philosophical Account
The foundation of a philosophical account of meaningful human control is laid, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility, in the form of design requirements for non-military autonomous systems, for instance, self-driving cars.
Black-Box Medicine
This Article is the first to label the phenomenon of black-box medicine, a version of personalized medicine in which researchers use sophisticated algorithms to examine huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients.