Artificial moral agents are infeasible with foreseeable technologies

@article{Hew2014ArtificialMA,
  title={Artificial moral agents are infeasible with foreseeable technologies},
  author={Patrick Chisan Hew},
  journal={Ethics and Information Technology},
  year={2014},
  volume={16},
  pages={197--206}
}
  • P. Hew
  • Published 1 September 2014
  • Philosophy
  • Ethics and Information Technology
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems would be a substantial departure from current technologies and theory, and are therefore a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour, and humans will retain full responsibility. 
Ethics, Human Rights, the Intelligent Robot, and its Subsystem for Moral Beliefs
TLDR
It is shown how these properties of human beings can be interpreted in terms of a prototypical architecture for an intelligent robot, and how the robot can be provided with several aspects of ethical capability in this way.
Artificial Intelligence in Weapons: The Moral Imperative for Minimally-Just Autonomy
For military power to be lawful and morally just, future autonomous artificial intelligence (AI) systems must not commit humanitarian errors or acts of fratricide. To achieve this, a preventative form
AI in Weapons: The Moral Imperative for Minimally-Just Autonomy
For land power to be lawful and morally just, future autonomous systems must not commit humanitarian errors or acts of fratricide. To achieve this, we distinguish a novel preventative form of
Ethical Decision Making in Robots: Autonomy, Trust and Responsibility
TLDR
It is argued that for people to trust autonomous robots to be able to explain the reasons for their decisions, they need to know which ethical principles they are applying and that their application is deterministic and predictable.
Philosophical Specification of Empathetic Ethical Artificial Intelligence
TLDR
Using enactivism, semiotics, perceptual symbol systems and symbol emergence, an agent is specified that learns not just arbitrary relations between signs but their meaning in terms of the perceptual states of its sensorimotor system, and has malleable intent.
AI Ethics, Security and Privacy
TLDR
Major aspects and concerns related to AI ethics include: robot ethics, robot rights, moral agents, opaqueness of AI systems, privacy and AI monitoring, automation and employment, prejudices in AI systems, responsibility for autonomous machines, and international AI ethics policy.
Moral responsibility in mixed human-machine teams
Many morally significant or impactful actions are either performed by a group, or occur at the end of a long sequence of actions, each of which contributed to the morally relevant outcome. In such
Can we program or train robots to be good?
  • A. Sharkey
  • Philosophy
    Ethics and Information Technology
  • 2017
TLDR
This paper takes a realistic look at recent attempts to program and to train robots to develop some form of moral competence, and argues that the second is the more responsible choice.
AI and states of mind from a legal perspective: from intentional states to guilt
TLDR
This paper will try to identify the concepts of guilt and negligence and its various different levels, both in civil and criminal domains, and to enquire if there is any possibility of developing a system of knowledge representation and reasoning, under a formal framework based on Logic Programming, allowing the evaluation and representation of the possible levels of guilt in the actions of autonomous software agents.
Incorporating Ethics into Artificial Intelligence (with Oren Etzioni)
This chapter reviews the reasons scholars hold that driverless cars and many other AI-equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows

References

Showing 1-10 of 87 references
Prolegomena to any future artificial moral agent
TLDR
The ethical disputes are surveyed, the possibility of a ‘moral Turing Test’ is considered and the computational difficulties accompanying the different types of approach are assessed.
Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture part I: Motivation and philosophy
  • R. Arkin
  • Philosophy
    2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • 2008
This paper provides the motivation and philosophy underlying the design of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic
Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI
The ethics of designing artificial agents
TLDR
It is demonstrated that an unmodifiable rule table, when viewed at LoA2, distinguishes an artificial agent from a human one; this supports the first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent and that the designer bears full responsibility for its behavior.
Ethics and consciousness in artificial agents
TLDR
The Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern, is considered.
Implementing moral decision making faculties in computers and robots
TLDR
This paper will offer a brief overview of the many dimensions of this new field of inquiry, including machine ethics, machine morality, artificial morality, or computational morality.
On the Morality of Artificial Agents
TLDR
There is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility, as well as the extension of the class of agents and moral agents to embrace AAs.
The Functional Morality of Robots
TLDR
The author suggests that one should use the same criteria for robots as for humans, regarding the ascription of moral responsibility, if a robot passes a moral version of the Turing Test-a Moral Turing Test MTT.
Un-making artificial moral agents
Floridi and Sanders' seminal work, "On the morality of artificial agents", has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as
The responsibility gap: Ascribing responsibility for the actions of learning automata
  • A. Matthias
  • Business
    Ethics and Information Technology
  • 2004
TLDR
Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation in which the manufacturer/operator of the machine is, in principle, no longer capable of predicting future machine behaviour, and thus cannot be held morally responsible or liable for it.