Corpus ID: 237532536

AI, orthogonality and the Müller-Cannon instrumental vs general intelligence distinction

  • Olle Häggström
The by-now-standard argument, put forth by Yudkowsky, Bostrom and others, for why a carelessly handled AI breakthrough would pose an existential threat to humanity is shown through careful conceptual analysis to be very much alive and kicking, despite the suggestion in a recent paper by Müller and Cannon that the argument contains a flaw.
Challenges to the Omohundro–Bostrom framework for AI motivations
Purpose: This paper aims to contribute to the futurology of a possible artificial intelligence (AI) breakthrough, by reexamining the Omohundro–Bostrom theory of instrumental vs final AI goals.
The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
  • N. Bostrom
  • Psychology, Computer Science
  • Minds and Machines
  • 2012
The relation between intelligence and motivation in artificial agents is discussed, developing and briefly arguing for two theses that help understand the possible range of behavior of superintelligent agents and point to some potential dangers in building such an agent.
Thinking in Advance About the Last Algorithm We Ever Need to Invent (Keynote Speakers)
Key issues include when and how suddenly superintelligence is likely to emerge, the goals and motivations of a superintelligent machine, and what the authors can do to improve the chances of a favorable outcome.
The Basic AI Drives
This paper identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design and discusses how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.
Human Compatible: Artificial Intelligence and the Problem of Control
"The most important book I have read in quite some time" (Daniel Kahneman); "A must-read" (Max Tegmark); "The book we've all been waiting for" (Sam Harris). LONGLISTED FOR THE 2019 FINANCIAL TIMES AND …
An AGI Modifying Its Utility Function in Violation of the Strong Orthogonality Thesis
An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in …
Artificial Intelligence as a Positive and Negative Factor in Global Risk
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A …
Universal Intelligence: A Definition of Machine Intelligence
A number of well known informal definitions of human intelligence are taken, and mathematically formalised to produce a general measure of intelligence for arbitrary machines that formally captures the concept of machine intelligence in the broadest reasonable sense.
Existential risk from AI and orthogonality: Can we have it both ways?
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is …
An overview of 11 proposals for building safe advanced AI
This paper analyzes and compares 11 different proposals for building safe advanced AI under the current machine learning paradigm, including major contenders such as iterated amplification, AI safety …