Artificial Intelligence as a Positive and Negative Factor in Global Risk
@inproceedings{Yudkowsky2006ArtificialIA,
  title={Artificial Intelligence as a Positive and Negative Factor in Global Risk},
  author={Eliezer Yudkowsky},
  year={2006}
}
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A curious aspect of the theory of evolution is that everybody thinks he understands it" (Monod 1974). My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the…
335 Citations
Dynamic Models Applied to Value Learning in Artificial Intelligence
- Computer Science, ArXiv
- 2020
It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that we cannot expect an AI to develop human moral values simply by virtue of its intelligence, as discussed in the Orthogonality Thesis.
The Singularity and Machine Ethics
- Psychology
- 2012
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so,…
Two arguments against human-friendly AI
- Philosophy, AI Ethics
- 2021
It is argued that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than values that prioritize friendliness to humans above all else.
Artificial Intelligence: Are we creating a new Frankenstein?
- Art
- 2019
Mary Shelley (1797-1851) was an English novelist best known for her Gothic novel Frankenstein; or, The Modern Prometheus, written in 1818. In this novel, Victor Frankenstein, an excellent young…
Artificial Intelligence: Opportunities and Risks Policy
- Computer Science
- 2016
As AI capacity improves, its field of application will grow further and the relevant algorithms will start optimizing themselves to an ever greater degree—maybe even reaching superhuman levels of intelligence.
Does artificial intelligence (AI) constitute an opportunity or a threat to the future of medicine as we know it?
- Computer Science, Future Hospital Journal
- 2019
Artificial intelligence can approach problems as a doctor progressing through their training does: by learning rules from data. With the capacity to analyse massive amounts of data, algorithms are able to find correlations that the human mind cannot.
Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence
- Philosophy, Computer Science, AAAI Fall Symposium: Biologically Inspired Cognitive Architectures
- 2008
This paper defines a universal foundation for ethics that is an attractor in the state space of intelligent behavior, giving an initial set of definitions necessary for a universal system of ethics and proposing a collaborative approach to developing an ethical system that is safe and extensible.
The errors, insights and lessons of famous AI predictions – and what they mean for the future
- Computer Science, J. Exp. Theor. Artif. Intell.
- 2014
The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.
Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?
- Computer Science, AI Mag.
- 2021
A Hippocratic Oath for AI scientists may increase awareness of the potential lethal threats of AI, enhance efforts to develop safe and beneficial AI, prevent corrupt practices and manipulations, and invigorate ethical codes.
Artificial intelligence: neither Utopian nor apocalyptic impacts soon
- Computer Science
- 2020
It is concluded that, despite the media hype, neither massive job losses nor a 'Singularity' is imminent: current AI, based on deep learning, is expensive and difficult for most businesses to adopt, not only displaces but also creates jobs, and may not be the route to a super-intelligence.
References
Showing 1-10 of 52 references
Artificial Intelligence: A Modern Approach
- Computer Science
- 1995
The long-anticipated revision of this #1 selling book offers the most comprehensive, state-of-the-art introduction to the theory and practice of artificial intelligence for modern applications.…
The coming technological singularity: How to survive in the post-human era
- Computer Science
- 1993
It is argued in this paper that we are on the edge of change comparable to the rise of human life on Earth, and that the precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence.
Existential risks: analyzing human extinction scenarios and related hazards
- Computer Science
- 2002
This paper analyzes a recently emerging category: that of existential risks, threats that could cause our extinction or destroy the potential of Earth-originating intelligent life.
Reinforcement Learning as a Context for Integrating AI Research
- Psychology, AAAI Technical Report
- 2004
The simulation model and its role in reinforcement learning provide a context for integrating different AI subfields, and suggest a brain design partitioned into interacting learning processes, each defined by a set of inputs, an internal representation, a set of outputs, and a reinforcement value.
Super-intelligent machines
- Computer Science, Art, COMG
- 2001
This book discusses the search for Super-Intelligent Machines, the ultimate engineering challenge, and the current state of the art in Machine Intelligence.
The Adapted mind : evolutionary psychology and the generation of culture
- Psychology
- 1992
Although researchers have long been aware that the species-typical architecture of the human mind is the product of our evolutionary history, it has only been in the last three decades that advances…
Cognitive biases potentially affecting judgement of global risks
- Political Science
- 2008
All else being equal, not many people would prefer to destroy the world. Even faceless corporations, meddling governments, reckless scientists, and other agents of doom, require a world in which to…
The Symbolic Species: The Co-evolution of Language and the Brain
- Linguistics
- 1998
The Symbolic Species: The Co-evolution of Language and the Brain by Terrence W. Deacon. New York: W.W. Norton, 1997, 527 pp. Reviewed by Donald Favareau, University of California, Los Angeles.…
Global Catastrophic Risks
- Physics
- 2008
Acknowledgements. Foreword. Introduction. I. Background: Long-term astrophysical processes; Evolution theory and the future of humanity; Millennial tendencies in responses to apocalyptic threats; Cognitive…
The nature of selection
- Economics
- 1995
A model that unifies all types of selection (chemical, sociological, genetical, and every other kind of selection) may open the way to develop a general "Mathematical Theory of Selection"…