Classification of global catastrophic risks connected with artificial intelligence

@article{Turchin2018ClassificationOG,
  title={Classification of global catastrophic risks connected with artificial intelligence},
  author={Alexey Turchin and David Denkenberger},
  journal={AI \& SOCIETY},
  year={2018},
  volume={35},
  pages={147--163}
}
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. [...] Key result: the extent of this list illustrates that there is no single simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
Citations

Assessing the future plausibility of catastrophically dangerous AI
This article presents arguments that place the earliest timing of dangerous AI in the coming 10–20 years, using several partly independent sources of information, including polls and hyperbolic growth extrapolations of big history models.
Global Solutions vs. Local Solutions for the AI Safety Problem
This work explores many ideas, both old and new, regarding global solutions for AI safety, which include changing the number of AI teams, different forms of “AI Nanny”, selling AI safety solutions, and sending messages to future AI.
Catastrophic Risk from Rapid Developments in Artificial Intelligence
This article describes important possible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including to democracy and for inter-state conflict. [...]
AGI Safety Literature Review
The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety, and to review the current public policy on AGI.
Exploring artificial intelligence futures
Artificial intelligence technologies are receiving high levels of attention and ‘hype’, leading to a range of speculation about futures in which such technologies, and their successors, are commonly [...]
The AI-Based Cyber Threat Landscape
This study aims to explore existing studies of AI-based cyber attacks and to map them onto a proposed framework, providing insight into new threats, and explains how to apply this framework to analyze AI-based attacks in a hypothetical scenario of a critical smart grid infrastructure.
Artificial intelligence, cyber-threats and Industry 4.0: challenges and opportunities
This survey paper discusses opportunities and threats of using artificial intelligence (AI) technology in the manufacturing sector with consideration for offensive and defensive uses of such technology, and presents the major strengths and weaknesses of the main techniques in use.
MACHINE LEARNING IN CYBER-PHYSICAL SYSTEMS AND MANUFACTURING SINGULARITY – IT DOES NOT MEAN TOTAL AUTOMATION, HUMAN IS STILL IN THE CENTRE: Part I – MANUFACTURING SINGULARITY AND AN INTELLIGENT MACHINE ARCHITECTURE
The hypothesis presented in this paper is that there is a limit to AI/ML autonomy capacity, that ML algorithms will not be able to become totally autonomous, and, consequently, that the human role will remain indispensable.
The Sustainability of Artificial Intelligence: An Urbanistic Viewpoint from the Lens of Smart and Sustainable Cities
The popularity and application of artificial intelligence (AI) are increasing rapidly all around the world—where, in simple terms, AI is a technology which mimics the behaviors commonly associated [...]
Complexity Theory: Artificial Intelligence System Help Safety Improvement in the Next Pandemic
This essay aims to dig deeply into complexity theory to help improve safety and reduce the impact of the next pandemic by implementing artificial intelligence (AI), with an example from the current situation of COVID-19.

References

Showing 1–10 of 122 references
Taxonomy of Pathways to Dangerous AI
This work surveys, classifies, and analyzes a number of circumstances that might lead to the arrival of malicious AI; it is the first attempt to systematically classify types of pathways leading to malevolent AI.
Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures
It is suggested that both the frequency and the seriousness of future AI failures will steadily increase, and that AI safety can be improved based on ideas developed by cybersecurity experts.
Artificial Superintelligence: A Futuristic Approach
A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence (AI). Many philosophers, futurists, and AI researchers have conjectured that human-level AI [...]
Existential risks: analyzing human extinction scenarios and related hazards
Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of [...]
Responses to catastrophic AGI risk: a survey
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to [...]
A model of pathways to artificial superintelligence catastrophe for risk and decision analysis
This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement, using the established risk and decision analysis modelling paradigms of fault trees and influence diagrams in order to depict combinations of events and conditions that could lead to AI catastrophe.
A Theory of Universal Artificial Intelligence based on Algorithmic Complexity
This work constructs a modified algorithm, AIXItl, which is still effectively more intelligent than any other time-t and space-l bounded agent, and gives strong arguments that the resulting AI model is the most intelligent unbiased agent possible.
Mammalian Value Systems
It is argued that the notion of “mammalian value systems” points to a potential avenue for fundamental research in AI safety and AI ethics, and that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian class provide important conceptual foundations relevant to describing human values.
Review of state-of-the-arts in artificial intelligence with application to AI safety problem
The current state of the art in many areas of AI is reviewed to estimate when it is reasonable to expect human-level AI development, and AI safety questions are discussed.
Good and safe uses of AI Oracles
Two designs for Oracles are presented which, even under pessimistic assumptions, will not manipulate their users into releasing them and yet will still be incentivised to provide their users with helpful answers.