Corpus ID: 226227092

# Generating Correct Answers for Progressive Matrices Intelligence Tests

@article{Pekar2020GeneratingCA,
  title={Generating Correct Answers for Progressive Matrices Intelligence Tests},
  author={Niv Pekar and Yaniv Benny and Lior Wolf},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.00496}
}
• Published 1 November 2020
• Computer Science
• ArXiv
Raven's Progressive Matrices are multiple-choice intelligence tests, in which one tries to complete the missing panel in a $3\times 3$ grid of abstract images. Previous attempts to address this test have focused solely on selecting the right answer out of the multiple choices. In this work, we focus, instead, on generating a correct answer given the grid, without seeing the choices, which is, by definition, a harder task. The proposed neural model combines multiple advances in generative models…
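The generation task the abstract describes can be sketched minimally: a puzzle is a 3×3 grid of image panels with the bottom-right panel held out, and a model must produce that panel from the eight context panels alone, rather than rank answer choices. The names and toy panel size below are illustrative, not from the paper:

```python
import numpy as np

PANEL = 8  # toy panel resolution (real RPM panels are larger images)

def make_puzzle(rng):
    """Split a 3x3 grid of panels into eight visible context panels and
    the held-out bottom-right target a generative model must produce."""
    grid = rng.random((3, 3, PANEL, PANEL))  # stand-in for RPM imagery
    panels = grid.reshape(9, PANEL, PANEL)   # row-major panel order
    context, target = panels[:8], panels[8]  # target is grid[2, 2]
    return context, target

rng = np.random.default_rng(0)
context, target = make_puzzle(rng)
print(context.shape, target.shape)  # (8, 8, 8) (8, 8)
```

A choice-based solver would score eight candidate panels against this context; the generative setting instead requires synthesizing `target` directly, which is why the paper calls it the harder task.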
#### 1 Citation

How much intelligence is there in artificial intelligence? A 2020 update
• Psychology
• Intelligence
• 2021
Schank (1980) wrote an editorial for Intelligence on “How much intelligence is there in artificial intelligence?”. In this paper, we revisit this question. We start with a short overview of…

#### References

Showing 1–10 of 15 references
Learning Perceptual Inference by Contrasting
• Computer Science
• NeurIPS
• 2019
It is demonstrated that CoPINet sets the new state-of-the-art for permutation-invariant models on two major datasets and concludes that spatial-temporal reasoning depends on envisaging the possibilities consistent with the relations between objects and can be solved from pixel-level inputs.
Raven Progressive Matrices
• Psychology
• 2003
The Raven Progressive Matrices (RPM) tests measure “general cognitive ability” or, better, eductive, or “meaning making,” ability (Raven, Raven, & Court, 1998a, 2000). The term “eductive” comes from…
Abstract Reasoning with Distracting Features
• Computer Science
• NeurIPS
• 2019
This paper proposes the feature-robust abstract reasoning (FRAR) model, which consists of a reinforcement-learning-based teacher network that determines the training sequence and a student network for prediction, and which beats the state-of-the-art models.
Measuring abstract reasoning in neural networks
• Computer Science, Mathematics
• ICML
• 2018
A dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test, is proposed, and ways to both measure and induce stronger abstract reasoning in neural networks are introduced.
What one intelligence test measures: a theoretical account of the processing in the Raven Progressive Matrices Test.
• Computer Science, Psychology
• Psychological review
• 1990
The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between…
Scale-Localized Abstract Reasoning
• Computer Science
• 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
A modified version of the RAVEN dataset, named RAVEN-FAIR, is proposed; the method outperforms the existing state of the art in this task on all benchmarks by 5-54% and introduces a new way to pool information along the rows and the columns of the illustration grid of the query.
RAVEN: A Dataset for Relational and Analogical Visual REasoNing
• Computer Science
• 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
This work proposes a new dataset, built in the context of Raven's Progressive Matrices (RPM) and aimed at lifting machine intelligence by associating vision with structural, relational, and analogical reasoning in a hierarchical representation, and establishes a semantic link between vision and reasoning by providing structure representation.
Auto-Encoding Variational Bayes
• Mathematics, Computer Science
• ICLR
• 2014
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Improved Techniques for Training GANs
• Computer Science, Mathematics
• NIPS
• 2016
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations
• Computer Science, Mathematics
• NIPS 2018
• 2018
It is shown that the latent representations, learned by unsupervised training using the right objective function, significantly outperform the same architectures trained with purely supervised learning, especially when it comes to generalization.