It’s already too late to stop the AI arms race—We must manage it instead

@article{Geist2016ItsAT,
  title={It’s already too late to stop the AI arms race—We must manage it instead},
  author={Edward Geist},
  journal={Bulletin of the Atomic Scientists},
  year={2016},
  volume={72},
  pages={318--321}
}
  • Edward Geist
  • Published 15 August 2016
  • Political Science
  • Bulletin of the Atomic Scientists
ABSTRACT Can we prevent an artificial-intelligence (AI) arms race? While an ongoing campaign argues that an agreement to ban autonomous weapons can forestall AI from becoming the next domain of military competition, due to the historical connection between artificial-intelligence research and defense applications, an AI arms race is already well under way. Furthermore, the AI weapons challenge extends far beyond autonomous systems, as some of the riskiest military applications of artificial… 
How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons
  • M. Maas
  • Political Science
    Contemporary Security Policy
  • 2019
ABSTRACT Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal
The weaponization of artificial intelligence (AI) and its implications on the security dilemma between states: could it create a situation similar to "mutually assured destruction" (MAD)
There is no consensus in the IR literature on the possible implications of AI for cyber or nuclear capabilities, and whether AI would exacerbate, or potentially mitigate, the security dilemma
Collective action on artificial intelligence: A primer and review
How Does Artificial Intelligence Pose an Existential Risk?
Alan Turing, one of the fathers of computing, warned that artificial intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been
An AI Race for Strategic Advantage: Rhetoric and Risks
TLDR
The potential risks of the AI race narrative and of an actual competitive race to develop AI, such as incentivising corner-cutting on safety and governance or increasing the risk of conflict, are assessed.
Artificial Intelligence, Automation, and Social Welfare: Some Ethical and Historical Perspectives on Technological Overstatement and Hyperbole
ABSTRACT The potential societal impacts of automation using intelligent control and communications technologies have emerged as topics in recent writings and public policy initiatives. Constructed
Contractors or robots? Future warfare between privatization and automation
  • A. Calcara
  • Computer Science, Political Science
    Small Wars & Insurgencies
  • 2021
TLDR
An original analysis on the interplay between the privatization of security tasks and technologically driven automation is provided and their impact on the defence industry and the armed forces is investigated.
Artificial intelligence development races in heterogeneous settings
TLDR
This work investigates how different interaction structures among race participants can alter collective choices and requirements for regulatory actions, and suggests that technology governance and regulation may profit from the world’s patent heterogeneity and inequality among firms and nations.
AI Development Race Can Be Mediated on Heterogeneous Networks
TLDR
This work investigates how different interaction structures among race participants can alter collective choices and requirements for regulatory actions and suggests that technology governance and regulation may profit from the world’s patent heterogeneity and inequality among firms and nations to design and implement meticulous interventions on a minority of participants capable of influencing an entire population towards an ethical and sustainable use of AI.
Conceptualizing lethal autonomous weapon systems and their impact on the conduct of war - A study on the incentives, implementation and implications of weapons independent of human control
This thesis studies the emergence of a new weapons technology, also known as ‘killer robots’ or lethal autonomous weapon systems. It seeks to answer what factors drive the development and

References

SHOWING 1-10 OF 21 REFERENCES
Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993
TLDR
The SCI fostered significant technological successes even though it never achieved machine intelligence; its accomplishments are evaluated and set in the context of overall computer development during this period.
Stalking the Secure Second Strike: Intelligence, Counterforce, and Nuclear Strategy
Abstract Secure second strike nuclear forces are frequently held to be easy to procure. Analysts have long argued that targeting intelligence against relocatable targets like submarine launched and
The Soviet Biological Weapons Program: A History
The Soviet Biological Weapons Program: A History. by Milton Leitenberg and Raymond A. Zilinskas (with Jens H. Kuhn), Harvard University Press, 2012. 921 pages, $55.
In semantic information processing
The Asilomar Conference: A Case Study in Risk Mitigation.
  • Technical report 2015–9. Berkeley, CA: Machine Intelligence Research Institute. https://intelligence.org/files/TheAsilomarConference.pdf
  • 2015
Superintelligence: Paths, Dangers, Strategies
Take a Stand on AI Weapons.
  • Nature
  • 2015
“Terms of Reference—Defense Science 2015 Summer Study on Autonomy.” Memorandum for Chairman, Defense Science Board, www.acq.osd.mil/dsb/tors/TOR-2014-11-17-Summer_Study_2015_on_Autonomy.pdf
  • 2014