Classification of global catastrophic risks connected with artificial intelligence

@article{Turchin2018ClassificationOG,
  title={Classification of global catastrophic risks connected with artificial intelligence},
  author={Alexey Turchin and David C. Denkenberger},
  journal={AI \& SOCIETY},
  year={2018},
  volume={35},
  pages={147-163}
}
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. The extent of this list illustrates that there is no single, simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.

Global Solutions vs. Local Solutions for the AI Safety Problem

TLDR
This work explores many ideas, both old and new, regarding global solutions for AI safety, which include changing the number of AI teams, different forms of “AI Nanny”, selling AI safety solutions, and sending messages to future AI.

Catastrophic Risk from Rapid Developments in Artificial Intelligence

Abstract
This article describes important possible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including risks to democracy and of inter-state conflict. …

AGI Safety Literature Review

TLDR
The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety, and to review the current public policy on AGI.

Artificial Intelligence at Work: An Overview of the Literature

This paper provides an overview of the actual and likely labour market transformations caused by the increasing use of Artificial Intelligence (AI) technologies across the advanced economies. …

The Human Factor in AI Safety

TLDR
This work explores unsafe outcomes of AI through a human-AI interaction lens, examining how the interaction of individuals and AI during deployment raises new concerns that need a solid and holistic mitigation plan.

Exploring artificial intelligence futures

TLDR
The paper identifies several tools as particularly promising and currently neglected, calling for more work on data-driven, realistic, integrative, and participatory scenario role-plays.

Public Perception of Artificial Intelligence and Its Connections to the Sustainable Development Goals

Artificial Intelligence (AI) will not just change our lives but bring about revolutionary transformation. AI can amplify the efficiency of both good and bad activities, and has thus been considered both an …

The AI-Based Cyber Threat Landscape

TLDR
This study explores existing work on AI-based cyber attacks and maps it onto a proposed framework, providing insight into new threats, and explains how to apply the framework to analyze AI-based attacks in a hypothetical scenario involving a critical smart grid infrastructure.

Artificial intelligence, cyber-threats and Industry 4.0: challenges and opportunities

TLDR
This survey paper discusses opportunities and threats of using artificial intelligence (AI) technology in the manufacturing sector with consideration for offensive and defensive uses of such technology, and presents the major strengths and weaknesses of the main techniques in use.

References

Showing 1-10 of 120 references

Taxonomy of Pathways to Dangerous AI

TLDR
This work surveys, classifies, and analyzes a number of circumstances that might lead to the arrival of malicious AI; it is the first attempt to systematically classify the types of pathways leading to malevolent AI.

Artificial Intelligence as a Positive and Negative Factor in Global Risk

By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A curious aspect of the theory of evolution is that everybody thinks he understands it." …

Artificial Superintelligence: A Futuristic Approach

TLDR
Artificial Superintelligence: A Futuristic Approach is designed to become a foundational text for the new science of AI safety engineering and should be an invaluable resource for AI researchers and students, computer security researchers, futurists, and philosophers.

Existential risks: analyzing human extinction scenarios and related hazards

TLDR
This paper analyzes a recently emerging category: that of existential risks, threats that could cause human extinction or destroy the potential of Earth-originating intelligent life.

Responses to catastrophic AGI risk: a survey

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to …

A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

TLDR
This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement, using the established risk and decision analysis modelling paradigms of fault trees and influence diagrams in order to depict combinations of events and conditions that could lead to AI catastrophe.
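
As background on the fault-tree paradigm this entry names, the following is a minimal Python sketch of how a top-event probability is computed from independent basic events combined through AND/OR gates. The events, gate structure, and probabilities here are invented for illustration and are not the paper's actual model.

    # Minimal fault-tree sketch. All events and numbers below are
    # illustrative assumptions, not taken from the paper.

    def p_and(*probs):
        """AND gate: all inputs must occur (independence assumed)."""
        out = 1.0
        for p in probs:
            out *= p
        return out

    def p_or(*probs):
        """OR gate: at least one input occurs (independence assumed)."""
        none_occur = 1.0
        for p in probs:
            none_occur *= (1.0 - p)
        return 1.0 - none_occur

    # Hypothetical basic events on a pathway to ASI catastrophe.
    p_self_improve = 0.1   # AI achieves recursive self-improvement
    p_containment = 0.2    # confinement measures fail
    p_misaligned = 0.3     # goals diverge from human values

    # Top event: self-improvement AND (containment failure OR misalignment).
    p_top = p_and(p_self_improve, p_or(p_containment, p_misaligned))
    print(f"illustrative top-event probability: {p_top:.3f}")  # 0.044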

A Theory of Universal Artificial Intelligence based on Algorithmic Complexity

TLDR
This work constructs a modified algorithm AIXItl, which is still effectively more intelligent than any other time-t and space-l bounded agent, and gives strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible.
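
For context, AIXItl is the time- and space-bounded variant of Hutter's AIXI agent. A commonly cited form of the underlying AIXI action selection, reproduced here as hedged background rather than taken from this TLDR, is

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \left[ r_k + \cdots + r_m \right]
           \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs, \ell(q) is program length, the o_i and r_i are observations and rewards, and m is the horizon. AIXItl approximates this incomputable ideal by restricting attention to policies of length at most l that run in time at most t per step.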

Mammalian Value Systems

TLDR
It is argued that the notion of "mammalian value systems" points to a potential avenue for fundamental research in AI safety and AI ethics, and that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian class provide important conceptual foundations relevant to describing human values.

Review of state-of-the-arts in artificial intelligence with application to AI safety problem

TLDR
The current state of the art in many areas of AI is reviewed to estimate when it is reasonable to expect human-level AI development, and AI safety questions are discussed.

Good and safe uses of AI Oracles

TLDR
Two designs for Oracles are presented which, even under pessimistic assumptions, will not manipulate their users into releasing them and yet will still be incentivised to provide their users with helpful answers.
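
One design from this line of work is, to our understanding, the counterfactual oracle: the oracle is rewarded only on episodes in which its answer is randomly erased before anyone reads it, so influencing readers cannot raise its expected reward. Below is a minimal sketch of that incentive structure; the function names, erasure probability, and scoring rule are illustrative assumptions, not the paper's specification.

    # Counterfactual-oracle sketch. Details are illustrative assumptions.
    import random

    def run_episode(oracle_answer: float, observed_outcome: float) -> float:
        """Return the oracle's reward for one episode."""
        erased = random.random() < 0.01  # rare, random erasure event
        if erased:
            # Answer was never read: reward is pure predictive accuracy
            # against the outcome that unfolds without reader influence.
            return -(oracle_answer - observed_outcome) ** 2
        # Answer shown to humans: zero reward regardless of content,
        # so manipulating readers cannot pay off in expectation.
        return 0.0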
...