The future of war: could lethal autonomous weapons make conflict more ethical?

Steven Umbrello, Phil Torres, Angelo F. De Bellis · AI & SOCIETY
Lethal Autonomous Weapons (LAWs) are robotic weapon systems, primarily of military value, that could engage in offensive or defensive actions without human intervention. This paper assesses the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs: artificially intelligent weapon systems that make decisions within the bounds of their ethics-based code. To ensure…
A Taste of Armageddon: A Virtue Ethics Perspective on Autonomous Weapons and Moral Injury
ABSTRACT Autonomous weapon systems (AWS) could in principle release military personnel from the onus of killing during combat missions, reducing the related risk of suffering a moral injury and its…
Smart soldiers: towards a more ethical warfare
It is a truism that, owing to human weaknesses, human soldiers have yet to achieve sufficiently ethical warfare. It is arguable that the likelihood of human soldiers breaching the Principle of…
THE REGULATION OF THE USE OF ARTIFICIAL INTELLIGENCE (AI) IN WARFARE: between International Humanitarian Law (IHL) and Meaningful Human Control
This study examines the proper principles for regulating autonomous weapons, some of which have already been incorporated into International Humanitarian Law (IHL), while others remain merely…
Securitisation as a Norm-Setting Framing in The Campaign to Stop Killer Robots
Since 2009, International Relations scholars have researched the role of big advocacy groups in granting the Campaign to Stop Killer Robots access to the United Nations Convention on Certain…
A consideration of how emerging military leaders perceive themes in the autonomous weapon system discourse
ABSTRACT The rapidly emerging scholarly literature responding to autonomous weapon systems has come to dominate our perceptions of future warfare. Scientists, governments, militaries, and civil…
Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles
This research explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures and adopts the Value Sensitive Design approach as a principled framework for the incorporation of such values within design.
Meaningful human control of drones: exploring human–machine teaming, informed by four different ethical perspectives
This work explored a human-centric approach to the design and deployment of highly autonomous, unarmed Unmanned Aerial Vehicle, or drone, and an associated Decision Support System (DSS), for the drone’s operator, and explores how Human–Machine Teaming, through such a DSS, can promote Meaningful Human Control of the drone.
Lethal Autonomous Weapon Systems under International Humanitarian Law
Robots formerly belonged to the realm of fiction, but are now becoming a practical issue for the disarmament community. While some believe that military robots could act more ethically than human…
The Strategic Robot Problem: Lethal Autonomous Weapons in War
The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such…
Saying ‘No!’ to Lethal Autonomous Targeting
Abstract Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces’ roadmaps since 2004. The idea is to have a staged move from…
Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems
Since 2013, discussion of such weapons has been climbing the arms control agenda of the United Nations. They are a topic at the Human Rights Council and the General Assembly First Committee on…
Means and Methods of the Future: Autonomous Systems
Autonomous systems will fundamentally alter the way wars are waged. In particular, autonomous weapon systems, capable of selecting and engaging targets without direct human operator involvement,…
The case for banning killer robots
R. Arkin · Commun. ACM · 2015
In the future autonomous robots may be able to outperform humans from an ethical perspective under battlefield conditions for numerous reasons, including their ability to act conservatively, and to integrate more information from more sources far faster than a human possibly could in real time before responding with lethal force.
Abstract While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the…
How Just Could a Robot War Be?
While modern states may never cease to wage war against one another, they have recognized moral restrictions on how they conduct those wars. These “rules of war” serve several important functions in
The Case for Regulating Fully Autonomous Weapons
On April 22, 2013, organizations across the world banded together to launch the Campaign to Stop Killer Robots. Advocates called for a ban on fully autonomous weapons (FAWs), robotic systems that can…
No One at the Controls: The Legal Implications of Fully Autonomous Targeting
Lethal autonomous robots (LARs) may provide the best counter to the asymmetric threats of the future. From China’s considerable capacity for jamming and general cyber attack to swarms of Iranian…