Intelligence Explosion: Evidence and Import

@inproceedings{Muehlhauser2012IntelligenceEE,
  title={Intelligence Explosion: Evidence and Import},
  author={Luke Muehlhauser and Anna Salamon},
  booktitle={Singularity Hypotheses: A Scientific and Philosophical Assessment},
  publisher={Springer},
  year={2012}
}

In this chapter we review the evidence for and against three claims: (1) there is a substantial chance we will create human-level AI before 2100; (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion”; and (3) an uncontrolled intelligence explosion could destroy everything we value, while a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for…
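The paper's second claim rests on a feedback loop: an AI capable of improving the very process that produces its intelligence compounds its own gains. Toy growth models of the kind discussed in works such as "Can Intelligence Explode?" (listed in the references below) make this intuition concrete. The sketch below is purely illustrative and not from Muehlhauser and Salamon: the growth law dI/dt = c * I**p, the constants, and the simulate helper are all assumptions chosen for the example.

# Toy model of recursive self-improvement (illustrative assumption, not the
# authors' model): capability I grows at a rate that depends on I itself,
#     dI/dt = c * I**p
# p <= 1 gives at most exponential growth; p > 1 blows up in finite time.

def simulate(p: float, c: float = 1.0, i0: float = 1.0,
             dt: float = 1e-4, t_max: float = 10.0, cap: float = 1e9):
    """Euler-integrate dI/dt = c * I**p, stopping early if I exceeds `cap`."""
    i, t = i0, 0.0
    while t < t_max:
        i += c * i ** p * dt
        t += dt
        if i > cap:
            return t, i  # diverged: finite-time "explosion"
    return t, i          # stayed bounded over the simulated horizon

if __name__ == "__main__":
    for p in (0.5, 1.0, 1.5):
        t, i = simulate(p)
        print(f"p={p}: I({t:.2f}) ~ {i:.3g}")

For p <= 1 capability grows at most exponentially, while for p > 1 the model diverges in finite time; that finite-time blow-up is one crude formalization of the "explosion" in claim (2).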
Citations

The AI Singularity and Runaway Human Intelligence
This paper argues that the standard argument for the AI singularity rests on an inappropriate comparison of advanced AI to average human intelligence, and that progress in AI should instead be measured against the collective intelligence of the global community of human minds, brought together and enhanced by smart technologies that include AI.
The intelligence explosion revisited
Purpose: The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk.
Aligning Superintelligence with Human Interests: A Technical Research Agenda
The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will…
Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology
This paper suggests the exploration and further development of two paradigms, human intelligence-machine cooperation and a sociological view of intelligence, which might help address some of the concerns about risk, trust, and accountability in AI technology.
The Singularity and Machine Ethics
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so,…
Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda
In this chapter, we discuss a host of technical problems that we think AI scientists could work on to ensure that the creation of smarter-than-human machine intelligence has a positive impact.
Liability for damages caused by artificial intelligence
Factors leading to the occurrence of damage identified in the article confirm that the operation of AI is based on the pursuit of goals, which means that AI may cause damage through its actions; issues of compensation will therefore have to be addressed in accordance with the existing legal provisions.
Responses to catastrophic AGI risk: a survey
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to…
Artificial General Intelligence and the Human Mental Model
When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. Speculations about such future entities are already affected by anthropomorphic bias…
Why AI Doomsayers are Like Sceptical Theists and Why it Matters
J. Danaher · Minds and Machines · 2015
It is argued that, in defending the credibility of AI risk, Bostrom makes an epistemic move analogous to one made by so-called sceptical theists in the debate about the existence of God, and it is suggested that the modal standards for argument in the superintelligence debate need to be addressed.

References

(Showing 1–10 of 247 references)
Why an Intelligence Explosion is Probable
This paper considers the hypothesis that once an AI system with roughly human-level general intelligence is created, an “intelligence explosion” involving the relatively rapid creation of increasingly more…
Thinking Inside the Box: Controlling and Using an Oracle AI
This paper analyzes and critiques various methods of controlling the AI, and suggests that an Oracle AI might be safer than an unrestricted AI but still remains potentially dangerous.
The Singularity and Machine Ethics
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so,…
The Singularity: a Philosophical Analysis
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines…
Implications of a Software-Limited Singularity
A number of prominent artificial intelligence (AI) researchers and commentators (Moravec 1999a; Solomonoff 1985; Vinge 1993) have presented versions of the following argument: 1. Continued…
The Quest For Artificial Intelligence: A History Of Ideas And Achievements
How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects
Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level…
Artificial Intelligence as a Positive and Negative Factor in Global Risk
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: “A…
The Slowdown Hypothesis
The so-called singularity hypothesis embraces the most ambitious goal of Artificial Intelligence: the possibility of constructing human-like intelligent systems. The intriguing addition is that once…
Can Intelligence Explode?
This paper provides a more careful treatment of what intelligence actually is, separates speed from intelligence explosion, compares what super-intelligent participants and classical human observers might experience and do, discusses immediate implications for the diversity and value of life, considers possible bounds on intelligence, and contemplates intelligences right at the singularity.