Artificial moral agents are infeasible with foreseeable technologies

  • P. Hew
  • Published 2014
  • Computer Science
  • Ethics and Information Technology
  • For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour and humans will retain full responsibility.
    28 Citations


    Ethical Decision Making in Robots: Autonomy, Trust and Responsibility
    • 30
    AI Ethics, Security and Privacy
    Moral responsibility in mixed human-machine teams
    Can we program or train robots to be good?
    • A. Sharkey
    • Psychology, Computer Science
    • Ethics and Information Technology
    • 2017
    • 5
    Autonomous reboot: Aristotle, autonomy and the ends of machine ethics
    Prolegomena to any future artificial moral agent
    • 202
    • Highly Influential
    Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture part I: Motivation and philosophy
    • R. Arkin
    • Computer Science
    • 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI)
    • 2008
    • 202
    Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches
    • 144
    • Highly Influential
    The ethics of designing artificial agents
    • 44
    • Highly Influential
    Ethics and consciousness in artificial agents
    • 55
    Implementing moral decision making faculties in computers and robots
    • 23
    On the Morality of Artificial Agents
    • 534
    The Functional Morality of Robots
    • 13
    Un-making artificial moral agents
    • 41
    • Highly Influential
    The responsibility gap: Ascribing responsibility for the actions of learning automata
    • A. Matthias
    • Sociology
    • Ethics and Information Technology
    • 2004
    • 232