The responsibility gap: Ascribing responsibility for the actions of learning automata

@article{Matthias2004TheRG,
  title={The responsibility gap: Ascribing responsibility for the actions of learning automata},
  author={Andreas Matthias},
  journal={Ethics and Information Technology},
  year={2004},
  volume={6},
  pages={175-183}
}
  • A. Matthias
  • Published 1 September 2004
  • Philosophy
  • Ethics and Information Technology
Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, in which the manufacturer/operator of the machine is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a realistic option), or facing a responsibility gap, which cannot be bridged by traditional concepts of responsibility ascription.
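To make the technical premise concrete, the following is a minimal, purely illustrative sketch (not from the paper) of the kind of learning machine the abstract names: a toy genetic algorithm whose final behaviour emerges from random variation and selection rather than from rules the programmer wrote down. The toy task, the target values and all function names below are assumptions added for illustration only.

# Illustrative sketch only: a toy genetic algorithm, in the spirit of the learning
# machines Matthias describes. The evolved parameters are a product of stochastic
# search, not of explicit programming, which is the technical premise behind the
# responsibility gap. Task and names are hypothetical, not from the article.
import random

TARGET = [0.25, -0.5, 1.0, 0.75]  # hypothetical "desired behaviour" for the toy task

def fitness(genome):
    """Higher is better: negative squared error against the toy target behaviour."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1, scale=0.2):
    """Randomly perturb genes; the programmer controls the process, not the outcome."""
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in genome]

def crossover(a, b):
    """One-point crossover between two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, genome_len=4, generations=100):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    # The resulting parameters were never written by anyone; they emerged from
    # stochastic search, so two runs (or two deployments) generally differ.
    print("evolved controller parameters:", [round(g, 3) for g in best])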

Citations

Responsibility assignment won’t solve the moral issues of artificial intelligence
Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using…
Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them
The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make more difficult or…
From Coder to Creator: Responsibility Issues in Intelligent Artifact Design
TLDR
This work surveys the techniques of artificial intelligence engineering, showing that there has been a shift in the role of the programmer of such machines from a coder to a mere creator of software organisms which evolve and develop by themselves, and proposes five criteria for purely legal responsibility.
Principles for the future development of artificial agents
A survey of popular, technical and scholarly literature suggests that autonomous artificial agents will populate the future. Although some visions may seem fanciful, autonomous artificial agents are…
Liability for Autonomous and Artificially Intelligent Robots
TLDR
Product liability and negligence tort law, which may be used to allocate liability for robots that damage property or cause injury, are reviewed, together with a discussion of different approaches to allocating liability in an age of increasingly intelligent and autonomous robots.
From Responsibility to Reason-Giving Explainable Artificial Intelligence
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for…
Responsible AI and Its Stakeholders
TLDR
This work discusses three notions of responsibility (i.e., blameworthiness, accountability, and liability) for all stakeholders, including AI, and suggests the roles of jurisdiction and the general public in this matter.
Instrumental Robots
TLDR
This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions.
Strawson's take on responsibility applied to AI
This paper investigates the attribution of responsibility to artificial intelligent systems (AI). It argues that traditional approaches to the subject are insufficient because they encounter some of…
Statistically responsible artificial intelligences
TLDR
It is concluded that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.

References

SHOWING 1-10 OF 21 REFERENCES
Applied Artificial Intelligence: A Sourcebook
TLDR
This book documents the latest advances in knowledge-based systems design and development and places the information in context, offering an historical perspective on the rise of artificial intelligence.
Evolving networks: using the genetic algorithm with connectionist learning
TLDR
A survey of recent work that combines Holland's Genetic Algorithm with connectionist techniques and delineates some of the basic design problems these hybrids share concludes that the GA's global sampling characteristics complement connectionist local search techniques well, leading to efficient and reliable hybrids.
Adaptation in natural and artificial systems
TLDR
Names the founding work in the area of adaptation and modification, which aims to mimic biological optimization, and some (non-GA) branches of AI.
LEARNING ROBOT BEHAVIORS USING GENETIC ALGORITHMS
TLDR
The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity, and the expectation is that behaviors learned in these simulations will be useful in real-world environments.
The Misguided Marriage of Responsibility and Autonomy
Much of the literature devoted to the topics of agent autonomy and agent responsibility suggests strong conceptual overlaps between the two, although few explore these overlaps explicitly. Beliefs of…
Evolutionary Algorithms for Reinforcement Learning
TLDR
Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.
Fischer and Ravizza on Moral Responsibility and History / Responsibility and Control: A Theory of Moral Responsibility
There is much of significance in John Fischer and Mark Ravizza's thoughtful book. I will, however, focus primarily on their interesting and suggestive claim that "moral responsibility is an…
Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation.
TLDR
SAMUEL, a learning system based on genetic algorithms, is used to learn high-performance reactive strategies for navigation and collision avoidance that also achieve real-time performance.
Learning to Fly: Modeling Human Control Strategies in an Aerial Vehicle
Much work has been done in recent years to abstract computational models of human control strategy (HCS) that are capable of accurately emulating dynamic human…
Unifying Class-Based Representation Formalisms
TLDR
It is argued that, by virtue of the high expressive power and of the associated reasoning capabilities on both unrestricted and finite models, the proposed logic provides a common core for class-based representation formalisms.