Artificial Intelligence as a Positive and Negative Factor in Global Risk

@inproceedings{Yudkowsky2006ArtificialIA,
  title={Artificial Intelligence as a Positive and Negative Factor in Global Risk},
  author={Eliezer Yudkowsky},
  year={2006}
}
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A curious aspect of the theory of evolution is that everybody thinks he understands it." (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the… 
Dynamic Models Applied to Value Learning in Artificial Intelligence
TLDR
It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that we cannot expect an AI to develop human moral values simply by virtue of its intelligence, as discussed in the Orthogonality Thesis.
The Singularity and Machine Ethics
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so,
Two arguments against human-friendly AI
TLDR
It is argued that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than those prioritizing friendliness to humans above all else.
Artificial Intelligence: Are we creating a new Frankenstein?
Mary Shelley (1797-1851) was an English novelist best known for her Gothic novel Frankenstein; or, The Modern Prometheus, published in 1818. In this novel, Victor Frankenstein, an excellent young
Artificial Intelligence: Opportunities and Risks Policy
  • Computer Science
  • 2016
TLDR
As AI capacity improves, its field of application will grow further and the relevant algorithms will start optimizing themselves to an ever greater degree—maybe even reaching superhuman levels of intelligence.
Does artificial intelligence (AI) constitute an opportunity or a threat to the future of medicine as we know it?
  • M. Kabir
  • Computer Science
    Future Hospital Journal
  • 2019
TLDR
Artificial intelligence can approach problems as a doctor progressing through their training does: by learning rules from data; and because algorithms have the capacity to analyse massive amounts of data, they are able to find correlations that the human mind cannot.
Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence
  • Mark R. Waser
  • Philosophy, Computer Science
    AAAI Fall Symposium: Biologically Inspired Cognitive Architectures
  • 2008
TLDR
This paper defines a universal foundation for ethics that is an attractor in the state space of intelligent behavior, giving an initial set of definitions necessary for a universal system of ethics and proposing a collaborative approach to developing an ethical system that is safe and extensible.
The errors, insights and lessons of famous AI predictions – and what they mean for the future
TLDR
The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.
Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?
TLDR
A Hippocratic Oath for AI scientists may increase awareness of the potential lethal threats of AI, enhance efforts to develop safe and beneficial AI, prevent corrupt practices and manipulations, and invigorate ethical codes.
Artificial intelligence: neither Utopian nor apocalyptic impacts soon
TLDR
It is concluded that, despite the media hype, neither massive job losses nor a ‘Singularity’ is imminent, because current AI, based on deep learning, is expensive and difficult for most businesses to adopt, not only displaces but also creates jobs, and may not be the route to a super-intelligence.
...

References

SHOWING 1-10 OF 52 REFERENCES
Artificial Intelligence: A Modern Approach
The long-anticipated revision of this #1 selling book offers the most comprehensive, state of the art introduction to the theory and practice of artificial intelligence for modern applications.
The coming technological singularity: How to survive in the post-human era
TLDR
It is argued in this paper that we are on the edge of change comparable to the rise of human life on Earth, and that the precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence.
Existential risks: analyzing human extinction scenarios and related hazards
TLDR
This paper analyzes a recently emerging category: that of existential risks, threats that could cause our extinction or destroy the potential of Earth-originating intelligent life.
Reinforcement Learning as a Context for Integrating AI Research
TLDR
The simulation model and its role in reinforcement learning provide a context for integrating different AI subfields, suggesting a brain design partitioned into interacting learning processes, each defined by a set of inputs, an internal representation, a set of outputs, and a reinforcement value (a minimal sketch of this structure follows below).
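The partitioned design this reference describes can be made concrete with a minimal, purely illustrative Python sketch; the class name LearningProcess, its fields, and the step method are assumptions introduced here, not the cited paper's actual formalism.

from dataclasses import dataclass, field
from typing import Any, Dict, List

# Illustrative sketch only: a "learning process" defined by its inputs,
# an internal representation, its outputs, and a reinforcement value.
@dataclass
class LearningProcess:
    inputs: List[str]                                              # input signals this process reads
    outputs: List[str]                                             # output signals it produces
    representation: Dict[str, Any] = field(default_factory=dict)   # internal state
    reinforcement: float = 0.0                                     # scalar reward driving adaptation

    def step(self, observations: Dict[str, Any], reward: float) -> Dict[str, Any]:
        # Record the reward, absorb the observations this process cares about,
        # and emit whatever it currently holds for its output signals.
        self.reinforcement = reward
        self.representation.update({k: v for k, v in observations.items() if k in self.inputs})
        return {name: self.representation.get(name) for name in self.outputs}

# A "brain" would then be a collection of such interacting processes,
# wired together by their input and output signal names (schematic only):
perception = LearningProcess(inputs=["pixels"], outputs=["pixels"])
policy = LearningProcess(inputs=["pixels"], outputs=["action"])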
Super-intelligent machines
TLDR
This book discusses the search for Super-Intelligent Machines, the ultimate engineering challenge, and the current state of the art in Machine Intelligence.
The Adapted mind : evolutionary psychology and the generation of culture
Although researchers have long been aware that the species-typical architecture of the human mind is the product of our evolutionary history, it has only been in the last three decades that advances
Cognitive biases potentially affecting judgement of global risks
All else being equal, not many people would prefer to destroy the world. Even faceless corporations, meddling governments, reckless scientists, and other agents of doom, require a world in which to
The Symbolic Species: The Co-evolution of Language and the Brain
The Symbolic Species: The Co-evolution of Language and the Brain by Terrence W. Deacon. New York: W.W. Norton, 1997, 527 pp. Reviewed by Donald Favareau, University of California, Los Angeles.
Global Catastrophic Risks
Table of contents (excerpt): Acknowledgements; Foreword; Introduction; I Background: Long-term astrophysical processes; Evolution theory and the future of humanity; Millennial tendencies in responses to apocalyptic threats; Cognitive
The nature of selection
A model that unifies all types of selection (chemical, sociological, genetical, and every other kind of selection) may open the way to develop a general “Mathematical Theory of Selection”
...