• Corpus ID: 207870692

On the Measure of Intelligence

@article{Chollet2019OnTM,
  title={On the Measure of Intelligence},
  author={Fran{\c{c}}ois Chollet},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.01547}
}
  • F. Chollet
  • Published 5 November 2019
  • Computer Science
  • ArXiv
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions… 

Performance vs. competence in human–machine comparisons

  • C. Firestone
  • Psychology
    Proceedings of the National Academy of Sciences
  • 2020
Focusing on the domain of image classification, three factors contributing to the species-fairness of human–machine comparisons are identified, drawing on recent work that equates the superficial constraints under which humans and machines must demonstrate their knowledge.

AI, visual imagery, and a case study on the challenges posed by human intelligence tests

  • M. Kunda
  • Psychology
    Proceedings of the National Academy of Sciences
  • 2020
This work examines how artificial agents, instead of being designed manually by AI researchers, might learn portions of their own knowledge and reasoning procedures from experience, including learning visuospatial domain knowledge, learning and generalizing problem-solving strategies, and learning the actual definition of the task in the first place.

Toward the quantification of cognition

Three realms of formidable constraints -- a) measurable human cognitive abilities, b) measurable allometric anatomic brain characteristics, and c) measurable features of specific automata and formal grammars -- illustrate remarkably sharp restrictions on human abilities, unexpectedly confining human cognition to a specific class of automata which are markedly below Turing machines.

Grounding Artificial Intelligence in the Origins of Human Behavior

This paper proposes a framework highlighting the role of environmental complexity in open-ended skill acquisition, grounded in major hypotheses from human behavioral ecology (HBE) and recent contributions in reinforcement learning. It uses this framework to highlight fundamental links between the two disciplines and to identify feedback loops that bootstrap ecological complexity, creating promising research directions for AI researchers.

Shortcut Learning in Deep Neural Networks

A set of recommendations for model interpretation and benchmarking is developed, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.

Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment

An effective solution based on meta-alignment is proposed: a novel method to align representation spaces using strongly and weakly paired cross-modal data while ensuring quick generalization to new tasks across different modalities.

Same-different conceptualization: a machine vision perspective

Neural Abstract Reasoner

This work introduces the Neural Abstract Reasoner (NAR), a memory augmented architecture capable of learning and using abstract rules, and provides some intuition for the effects of spectral regularization in the domain of abstract reasoning based on theoretical generalization bounds and Solomonoff's theory of inductive inference.
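The spectral regularization mentioned above penalizes the largest singular value (spectral norm) of a weight matrix. The NAR paper's exact regularizer is not given here; as a generic illustration, the spectral norm can be estimated by power iteration on W^T W, sketched below in plain Python:

```python
def spectral_norm(W, iters=50):
    """Estimate the largest singular value (spectral norm) of a matrix W,
    given as a list of rows, via power iteration on W^T W.
    Assumes the all-ones start vector is not orthogonal to the top
    right singular vector (true for generic matrices)."""
    m, n = len(W), len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]  # u = W v
        v = [sum(W[i][j] * u[i] for i in range(m)) for j in range(n)]  # v = W^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]
    return sum(x * x for x in u) ** 0.5  # ||W v|| = sigma_max at convergence

# diag(3, 1): spectral norm is 3
print(round(spectral_norm([[3.0, 0.0], [0.0, 1.0]]), 6))  # → 3.0
```

In a training loop such a quantity would typically be added to the loss as a weighted penalty; in practice one would use a library routine such as PyTorch's `torch.nn.utils.spectral_norm` rather than this hand-rolled version.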

Human ≠ AGI

This paper proves that humans are not general intelligences, and that the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified.

Turning 30: New Ideas in Inductive Logic Programming

This work focuses on new methods for learning recursive programs that generalise from few examples, a shift from using hand-crafted background knowledge to learning background knowledge, and the use of different technologies, notably answer set programming and neural networks.

References

SHOWING 1-10 OF 105 REFERENCES

The Measure of All Minds: Evaluating Natural and Artificial Intelligence

Using algorithmic information theory as a foundation, the book elaborates on the evaluation of perceptual, developmental, social, verbal and collective features and critically analyzes what the future of intelligence might look like.

Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement

This paper critically assesses the different ways AI systems are evaluated, as well as the role of components and techniques in these systems, and identifies three kinds of evaluation: human discrimination, problem benchmarks, and peer confrontation.

Building machines that learn and think like people

It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems, and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.

The Newell Test for a theory of cognition

The requirements that the human cognitive architecture would have to satisfy in order to be functional are distilled into 12 criteria, including flexible behavior, real-time performance, adaptive behavior, a vast knowledge base, dynamic behavior, knowledge integration, natural language, learning, development, evolution, and brain realization.

The BICA Cognitive Decathlon: A Test Suite for Biologically-Inspired Cognitive Agents

BICA (Biologically-Inspired Cognitive Architectures) is a DARPA Phase-I program whose goal is to create the next generation of cognitive architecture models based on principles of psychology and…

The Animal-AI Environment: Training and Testing Animal-Like Artificial Cognition

This work presents an environment that keeps all the positive elements of standard gaming environments, but is explicitly designed for the testing of animal-like artificial cognition.

I-athlon: Towards A Multidimensional Turing Test

A methodology is proposed for designing a test, analogous to the Olympic decathlon, that consists of a series of events and complies with the requirements of the Turing test; it is intended to ultimately enable the community to evaluate progress towards machine intelligence in a practical and repeatable way.

Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence

Pamela McCorduck first went among the artificial intelligentsia when the field was fresh and new, and asked the scientists engaged in it what they were doing and why. She saw artificial intelligence
...