On the moral responsibility of military robots

  • Thomas Hellström
  • Published 1 June 2013
  • Computer Science
  • Ethics and Information Technology
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the types of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with…

Negotiating autonomy and responsibility in military robots

This paper examines different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots.

Towards ethical robots: Revisiting Braitenberg's vehicles

The development of artificial ethical agents could both mitigate some fears about the future of autonomous AI and provide insight into human moral reasoning. Related work, including current attempts at simulating ethics, is also surveyed.

Lethal military robots: who is responsible when things go wrong?

Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some…

Ethical Decision Making in Robots: Autonomy, Trust and Responsibility

It is argued that for people to trust autonomous robots, the robots must be able to explain the reasons for their decisions; people need to know which ethical principles the robots are applying and that their application is deterministic and predictable.

Robots and Respect: Assessing the Case Against Autonomous Weapon Systems

  • R. Sparrow
  • Philosophy
    Ethics & International Affairs
  • 2016
There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them…

Instrumental Robots

This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions.

Ethical issues in service robotics and artificial intelligence

  • R. Belk
  • Business, Computer Science
    The Service Industries Journal
  • 2020
Views of service contexts involving robotics and AI, with important implications for public policy and applications of service technologies, are expanded.

Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement

When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques…

A model of autonomy for artificial agents

An increasing number of tasks and responsibilities are being delegated to artificial agents. In areas such as healthcare, traffic, the household, and the military, artificial agents are being adopted…



Sharing Moral Responsibility with Robots: A Pragmatic Approach

This article argues for a pragmatic approach in which responsibility is seen as a social regulatory mechanism in artificial intelligent systems, and claims that having a system that takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reason to talk about the system as being "responsible" for a task.

Governing Lethal Behavior in Autonomous Robots

Drawing from the author's own state-of-the-art research, this book examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and…

Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task

The results suggest that humanoid robots may be appropriate for settings in which people have to delegate responsibility to these robots or when the task is too demanding for people to do, and when complacency is not a major concern.

Behavior-Based Systems

This chapter explains behavior-based systems and their use in autonomous control problems and applications, and provides an overview of various robotics problems and application domains that have successfully been addressed, or are currently being studied, with behavior-based control.

Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction

  • T. Kim, P. Hinds
  • Psychology
    ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication
  • 2006
It is found that when a robot is more autonomous, people attribute more credit and blame to the robot and less toward themselves and other participants and transparency has a greater effect in decreasing the attribution of blame.

Ethical robots in warfare

  • R. Arkin
  • Art
    IEEE Technology and Society Magazine
  • 2009
It is my contention that robots can be built that do not exhibit fear, anger, frustration, or revenge, and that ultimately they behave in a more humane manner than even human beings in these harsh circumstances and severe duress.

Why Machine Ethics?

Machine ethics is an emerging field that seeks to implement moral decision-making faculties in computers and robots, so that they do not violate ethical standards as a matter of course.

The responsibility gap: Ascribing responsibility for the actions of learning automata

  • A. Matthias
  • Business
    Ethics and Information Technology
  • 2004
Autonomous learning machines, based on neural networks, genetic algorithms, and agent architectures, create a new situation in which the manufacturer/operator of the machine is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it.

A model for types and levels of human interaction with automation

A model for types and levels of automation is outlined that can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation.

Rehabilitation or Revenge: Prosecuting Child Soldiers for Human Rights Violations

International law provides no explicit guidelines for whether, or at what age, child soldiers should be prosecuted for grave violations of international humanitarian and human rights law such as…