How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons

@article{Maas2019HowVI,
  title={How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons},
  author={Matthijs M. Maas},
  journal={Contemporary Security Policy},
  year={2019},
  volume={40},
  pages={285--311}
}
  • M. Maas
  • Published 6 February 2019
  • Political Science
  • Contemporary Security Policy
ABSTRACT Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the… 
Lessons for Artificial Intelligence from Other Global Risks
The prominence of artificial intelligence (AI) as a global risk is a relatively recent phenomenon. Other global risks have longer histories and larger bodies of scholarship. The study of these other
The Regulation of the Use of Artificial Intelligence (AI) in Warfare: between International Humanitarian Law (IHL) and Meaningful Human Control
This study examines the proper principles for regulating autonomous weapons, some of which have already been incorporated into International Humanitarian Law (IHL), while others remain merely
The Unavoidable Technology: How Artificial Intelligence Can Strengthen Nuclear Stability
Both the United States and its NATO allies have placed new emphasis on understanding the civilian and military applications of technological advances in artificial intelligence.
Conceptualizing lethal autonomous weapon systems and their impact on the conduct of war - A study on the incentives, implementation and implications of weapons independent of human control
This thesis studies the emergence of a new weapons technology, lethal autonomous weapon systems, also known as ‘killer robots.’ It seeks to answer what factors drive the development and
How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies
ABSTRACT Whether and how Lethal Autonomous Weapons Systems (LAWS) can and should be regulated is intensely debated among governments, scholars, and campaigning activists. This article argues that the
AI, Governance Displacement, and the (De)Fragmentation of International Law
  • M. Maas
  • Law
    SSRN Electronic Journal
  • 2021
The emergence, proliferation, and use of new general-purpose technologies can often produce significant political, redistributive, normative and legal effects on the world. Artificial intelligence
Delegating strategic decision-making to machines: Dr. Strangelove Redux?
  • J. Johnson
  • Political Science
    Journal of Strategic Studies
  • 2020
ABSTRACT Will the use of artificial intelligence (AI) in strategic decision-making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force to machines? How
Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
It is proposed that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, all the while maintaining divergent perspectives.
‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states
This article revisits the Cold War-era concept of ‘catalytic nuclear war,’ considered by many as unworkable, and reconceptualizes it in light of technological change, as well as improved understand...
