A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

@article{Barrett2017AMO,
  title={A model of pathways to artificial superintelligence catastrophe for risk and decision analysis},
  author={Anthony Michael Barrett and Seth D. Baum},
  journal={Journal of Experimental \& Theoretical Artificial Intelligence},
  year={2017},
  volume={29},
  pages={397--414}
}
  • A. Barrett, S. Baum
  • Published 25 July 2016
  • Computer Science
  • Journal of Experimental & Theoretical Artificial Intelligence
Abstract
An artificial superintelligence (ASI) is an artificial intelligence that is significantly more intelligent than humans in all respects. […]
Key Method
The model uses the established risk and decision analysis modelling paradigms of fault trees and influence diagrams to depict combinations of events and conditions that could lead to AI catastrophe, as well as intervention options that could decrease risks. The events and conditions include select aspects of the ASI itself as well as the human…
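To make the fault-tree paradigm concrete, here is a minimal Python sketch of how a top-event probability can be computed from OR/AND gates over basic events, assuming independence. The event names and probabilities are hypothetical illustrations, not the paper's actual model structure or estimates.

from functools import reduce

def p_or(probs):
    # P(at least one of several independent events occurs)
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def p_and(probs):
    # P(all of several independent events occur)
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical basic-event probabilities (illustrative only).
p_seed_ai = 0.10            # a self-improving seed AI is created
p_brain_emulation = 0.05    # whole-brain emulation reaches ASI
p_unsafe_goals = 0.50       # the ASI's goals are unsafe, given that it exists
p_containment_fails = 0.30  # confinement/deterrence measures fail

# OR gate: an ASI is built via either development pathway.
p_asi_built = p_or([p_seed_ai, p_brain_emulation])
# AND gate: catastrophe requires an ASI, unsafe goals, and failed containment.
p_catastrophe = p_and([p_asi_built, p_unsafe_goals, p_containment_fails])
print(f"P(catastrophe) = {p_catastrophe:.4f}")

Intervention options can then be represented as changes to individual basic-event probabilities, and their effect read off the recomputed top-event probability.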
Modeling and Interpreting Expert Disagreement About Artificial Superintelligence
TLDR
An initial quantitative analysis shows that accounting for variation in expert judgment can have a large effect on estimates of the risk of ASI catastrophe, and that the optimal strength of AI confinement depends on the balance of risk parameters.
Towards an Integrated Assessment of Global Catastrophic Risk
Integrated assessment is an analysis of a topic that integrates multiple lines of research. Integrated assessments are thus inherently interdisciplinary. They are generally oriented toward practical…
Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction
In this paper, we develop and illustrate a framework for determining the potential value of global catastrophic risk (GCR) research in reducing uncertainties in the assessment of GCR levels and…
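To illustrate the value-of-information idea behind such a framework, here is a minimal expected-value-of-perfect-information (EVPI) sketch in Python. The two-state risk model, damages, and mitigation costs are hypothetical, not taken from the paper.

p_high = 0.2                           # prior P(risk level is high)
damage = {"high": 100.0, "low": 1.0}   # expected loss if unmitigated
mitigation_cost = 5.0
mitigation_factor = 0.1                # mitigation scales damage by this factor

def expected_loss(mitigate, state_probs):
    # Expected loss of a mitigate/don't-mitigate decision over risk states.
    loss = 0.0
    for state, p in state_probs.items():
        d = damage[state] * (mitigation_factor if mitigate else 1.0)
        loss += p * (d + (mitigation_cost if mitigate else 0.0))
    return loss

priors = {"high": p_high, "low": 1.0 - p_high}

# Best action under current (prior) information:
loss_now = min(expected_loss(m, priors) for m in (True, False))

# With perfect information, pick the best action in each state, then average:
loss_perfect = sum(
    p * min(expected_loss(m, {state: 1.0}) for m in (True, False))
    for state, p in priors.items()
)

evpi = loss_now - loss_perfect
print(f"EVPI = {evpi:.2f}")  # an upper bound on what risk research is worth here

EVPI bounds the value of research from above, since actual research reduces rather than eliminates uncertainty about the risk level.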
Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial [Dynamic Models Applied to Value Learning in Artificial Intelligence]
TLDR
It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that one cannot expect an AI to develop human moral values simply because of its intelligence.
A Holistic Framework for Forecasting Transformative AI
TLDR
A holistic AI forecasting framework is presented which draws on a broad body of literature from disciplines such as forecasting, technological forecasting, futures studies and scenario planning, together with a new method based on scenario mapping and judgmental forecasting techniques; together these form a holistic rethinking of how AI is forecast.
Classification of global catastrophic risks connected with artificial intelligence
TLDR
It is shown that at each level of an AI's intelligence, different types of possible catastrophes dominate, and that AI safety theory is complex and must be customized for each AI development level.
A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy
Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. It has long been considered the “grand dream” or “holy grail” of AI. It also poses major issues of ethics, risk, and policy…
Dynamic Models Applied to Value Learning in Artificial Intelligence
TLDR
It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that, as the Orthogonality Thesis argues, one cannot expect an AI to develop human moral values simply because of its intelligence.

References

Showing 1-10 of 68 references
Studying first-strike stability with knowledge-based models of human decision-making
Abstract: The RAND Corporation and the RAND/UCLA Center for the Study of Soviet International Behavior (CSSIB) have a joint project for the Carnegie Corporation entitled Avoiding Nuclear War: …
Catastrophe: Risk and Response
TLDR
This book discusses how to reduce the risks of catastrophe and the difference cost-benefit analysis can make in the case of the Relativistic Heavy Ion Collider (RHIC).
Existential Risk Prevention as Global Priority
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their…
Artificial Intelligence as a Positive and Negative Factor in Global Risk
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A…
Small Theories and Large Risks—Is Risk Analysis Relevant for Epistemology?
  • M. Cirkovic
  • Philosophy
    Risk Analysis: An Official Publication of the Society for Risk Analysis
  • 2012
TLDR
It is argued that an important methodological issue (determining what counts as the best available explanation in cases where the theories involved describe possibilities of extremely destructive global catastrophes) has been neglected thus far, and that addressing it might give such cases greater weight in areas such as moral deliberation and policy making.
Analyzing and Reducing the Risks of Inadvertent Nuclear War between the United States and Russia
This paper develops a mathematical modeling framework using fault trees and Poisson processes for analyzing the risks of inadvertent nuclear war from U.S. or Russian misinterpretation of false alarms…
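As a rough illustration of combining the two formalisms, the sketch below thins a Poisson arrival process of false alarms by a fault-tree-derived escalation probability; all rates and probabilities are hypothetical, not the paper's estimates.

import math

lam = 5.0          # hypothetical rate of false alarms per year (Poisson process)
p_escalate = 1e-4  # hypothetical P(escalation to war | alarm), from a fault tree
T = 10.0           # time horizon in years

# Thinning: if each alarm independently escalates with probability p_escalate,
# war initiations themselves follow a Poisson process with rate lam * p_escalate.
rate_war = lam * p_escalate
p_war_in_T = 1.0 - math.exp(-rate_war * T)
print(f"P(inadvertent war within {T:.0f} years) = {p_war_in_T:.4%}")

Risk-reduction options can be compared in this setup by their effect on either the alarm rate or the conditional escalation probability.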
Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence
  • Mark R. Waser
  • Philosophy, Computer Science
    AAAI Fall Symposium: Biologically Inspired Cognitive Architectures
  • 2008
TLDR
This paper defines a universal foundation for ethics that is an attractor in the state space of intelligent behavior, giving an initial set of definitions necessary for a universal system of ethics and proposing a collaborative approach to developing an ethical system that is safe and extensible.
Artificial General Intelligence
TLDR
The AGI containment problem is surveyed – the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous.
Aligning Superintelligence with Human Interests: A Technical Research Agenda
TLDR
It is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.
A Framework for Decisions About Research with HPAI H5N1 Viruses
TLDR
The U.S. Department of Health and Human Services unveils a Framework for funding decisions about highly pathogenic avian influenza H5N1 research, which acknowledges that the virus does not appear well-adapted for sustained transmission among mammals by respiratory droplets.