• Corpus ID: 236088008

# Reasoning-Modulated Representations

@article{Velivckovic2021ReasoningModulatedR,
title={Reasoning-Modulated Representations},
author={Petar Veli{\v{c}}kovi{\'{c}} and Matko Bo{\v{s}}njak and Thomas Kipf and Alexander Lerchner and Raia Hadsell and Razvan Pascanu and Charles Blundell},
journal={ArXiv},
year={2021},
volume={abs/2107.08881}
}
• Published 19 July 2021 • Computer Science • ArXiv
Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g
