When Will AI Exceed Human Performance? Evidence from AI Experts

@article{Grace2018WhenWA,
  title={When Will AI Exceed Human Performance? Evidence from AI Experts},
  author={Katja Grace and John Salvatier and Allan Dafoe and Baobao Zhang and Owain Evans},
  journal={ArXiv},
  year={2018},
  volume={abs/1705.08807}
}
Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. […] These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.

Citations

Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers

A large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI finds that, in aggregate, the researchers surveyed placed a 50% likelihood on human-level machine intelligence being achieved by 2060, suggesting continued optimism about AI progress.

X-Risk Analysis for AI Research

Time-tested concepts from hazard analysis and systems safety, which were designed to steer large processes in safer directions, are reviewed in order to discuss how AI researchers can realistically have long-term impacts on the safety of AI systems.

Dynamic Cognition Applied to Value Learning in Artificial Intelligence

  • N. D. Oliveira, N. Corrêa
  • Computer Science
    Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics
  • 2021
It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that one cannot expect an AI to develop human moral preferences simply because of its intelligence.

Forecasting Transformative AI: An Expert Survey

The findings suggest that AI experts expect major advances in AI technology to continue over the next decade to a degree that will likely have profound transformative impacts on society.

Dynamic Models Applied to Value Learning in Artificial Intelligence

It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that one cannot expect an AI to develop human moral values simply because of its intelligence, as discussed in the Orthogonality Thesis.

Towards Safe Artificial General Intelligence

The central conclusion is that while reinforcement learning systems as designed today are inherently unsafe to scale to human levels of intelligence, there are ways to potentially address many of these issues without straying too far from the currently successful reinforcement learning paradigm.

Why AI is harder than we think

This paper discusses fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field, and speculates on what is needed for the grand challenge of making AI systems more robust, general, and adaptable: in short, more intelligent.

Challenges of Aligning Artificial Intelligence with Human Values

It is shown that although it is not realistic to reach agreement on what humans really want, since people value different things and seek different ends, it may be possible to agree on what we do not want to happen, given the possibility that intelligence equal to our own, or even exceeding it, can be created.

What influences attitudes about artificial intelligence adoption: Evidence from U.S. local officials

It is found that self-reported familiarity with AI is correlated with increased approval of AI uses in a variety of areas, including facial recognition, natural disaster impact planning, and even military surveillance.

Towards Strong AI

It is argued that AI research should place a stronger focus on learning CGPMs of the hidden causes that give rise to the registered observations, so that an AI may develop that is able to explain the reality it is confronted with, reason about it, and find adaptive solutions, making it Strong AI.
...

References

SHOWING 1-10 OF 19 REFERENCES

Future Progress in Artificial Intelligence: A Survey of Expert Opinion

This chapter clarifies what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence arriving within a particular time frame, which risks they see with that development, and how fast they expect it to unfold.

How Long Until Human-Level AI ? Results from an Expert Assessment

An assessment of expert opinions regarding human-level AI, conducted at AGI-09, a conference for this AI specialty, finds that various experts strongly disagree with each other on certain matters, such as the timing and ordering of key milestones, but that most experts expect human-level AI to be reached within the coming decades.

Expert and Non-expert Opinion About Technological Unemployment

  • T. Walsh
  • Psychology
    Int. J. Autom. Comput.
  • 2018
While the experts predicted that a significant number of occupations are at risk of automation in the next two decades, they were more cautious than people outside the field in predicting which occupations are at risk, and public expectations about the speed of progress in robotics and AI may need to be dampened.

One Hundred Year Study on Artificial Intelligence: Reflections and Framing

The One Hundred Year Study on Artificial Intelligence (AI) has its roots in a one-year study on Long-term AI Futures that we commissioned during my term of service as president of the Association for the Advancement of Artificial Intelligence (AAAI).

Superintelligence: Paths, Dangers, Strategies

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

Robotics and the Lessons of Cyberlaw

This essay is the first to examine what the introduction of a new, equally transformative technology means for cyberlaw and policy, arguing that robotics has a different set of essential qualities than the Internet and, accordingly, will raise distinct legal issues.

How predictable is technological progress

This work formulates Moore's law as a correlated geometric random walk with drift and derives a closed-form expression approximating the distribution of forecast errors as a function of time, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution.
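
As a rough illustration of that model class (a sketch only, with assumed parameter values and simple i.i.d. Gaussian increments rather than the paper's calibrated, correlated model), the following simulates a geometric random walk with drift for a technology's log cost and checks how forecast errors spread out with the forecast horizon:

```python
import numpy as np

# Sketch: log cost of a technology as a random walk with drift,
#   y_t = y_{t-1} + mu + sigma * eps_t,   eps_t ~ N(0, 1)
# Parameter values are illustrative assumptions, not the paper's estimates.
mu, sigma = -0.10, 0.15        # yearly drift and volatility of log cost
years, n_series = 40, 10_000
horizons = [1, 5, 10]

rng = np.random.default_rng(0)
increments = mu + sigma * rng.standard_normal((n_series, years))
log_cost = np.cumsum(increments, axis=1)

for tau in horizons:
    split = years - tau
    est_drift = log_cost[:, split - 1] / split            # drift estimated from history
    forecast = log_cost[:, split - 1] + est_drift * tau   # extrapolate tau years ahead
    error = log_cost[:, -1] - forecast                     # realized forecast error
    # For this model the error spread grows roughly with sqrt(tau),
    # so rescaling by sqrt(tau) approximately collapses the horizons,
    # which is the effect the paper's closed-form expression formalizes.
    print(f"horizon {tau:2d}y: std={error.std():.3f}, "
          f"std/sqrt(tau)={error.std() / np.sqrt(tau):.3f}")
```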

The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions

It is found that teams and prediction markets systematically outperformed averages of individual forecasters, that training forecasters helps, and that the exact form of how predictions are combined has a large effect on overall prediction accuracy.
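
As a toy illustration of why the combination rule matters (hypothetical numbers; the extremized log-odds mean shown here is one common aggregation rule, not necessarily the one used in the project):

```python
import numpy as np

# Five hypothetical forecasters' probabilities for the same binary event.
probs = np.array([0.55, 0.60, 0.65, 0.70, 0.80])

simple_mean = probs.mean()

# Extremizing: average in log-odds space, then push away from 0.5.
a = 2.5                                    # extremizing exponent (assumed value)
mean_log_odds = np.log(probs / (1 - probs)).mean()
extremized = 1 / (1 + np.exp(-a * mean_log_odds))

print(f"simple mean: {simple_mean:.3f}")   # ~0.66
print(f"extremized:  {extremized:.3f}")    # noticeably closer to 1
```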

Expert Political Judgment: How Good Is It? How Can We Know?

Philip E. Tetlock is a psychologist and Professor of Leadership at the Haas School of Business, University of California, Berkeley. The book combines several of his research …

Book Review: The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies

On the surface, this is a very important book about present and future technologies, jobs, and growing inequality. It is clearly written, plausible, and well-documented. Although oriented to American …