Corpus ID: 236088008

Reasoning-Modulated Representations

@article{Velivckovic2021ReasoningModulatedR,
  title={Reasoning-Modulated Representations},
  author={Petar Veli{\v{c}}kovi{\'c} and Matko Bo{\v{s}}njak and Thomas Kipf and Alexander Lerchner and Raia Hadsell and Razvan Pascanu and Charles Blundell},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.08881}
}
Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
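
The abstract describes an encode-process-decode setup: a reasoning module, pre-trained on abstracted descriptions of the system, stays frozen while an encoder and decoder are trained from raw observations. A minimal sketch of that blueprint, assuming PyTorch; all class names, shapes, and module choices here are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ReasoningModulatedModel(nn.Module):
    """Hypothetical encode-process-decode pipeline with a frozen reasoning core."""

    def __init__(self, obs_dim, latent_dim, out_dim, pretrained_processor):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        self.processor = pretrained_processor       # pre-trained reasoning module
        for p in self.processor.parameters():       # freeze its algorithmic prior
            p.requires_grad = False
        self.decoder = nn.Linear(latent_dim, out_dim)

    def forward(self, x):
        z = self.encoder(x)       # raw observations -> abstract latents
        h = self.processor(z)     # frozen module "reasons" in latent space
        return self.decoder(h)    # task-specific readout

# Hypothetical usage: the processor would be pre-trained separately on
# abstract system states before being plugged in here.
processor = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32))
model = ReasoningModulatedModel(obs_dim=64, latent_dim=32, out_dim=10,
                                pretrained_processor=processor)
y = model(torch.randn(8, 64))
```

Freezing the processor is the key design choice in this reading: its prior shapes the representations the encoder discovers, rather than being overwritten by gradients from the downstream task.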

References

Showing 1–10 of 59 references

Combined Reinforcement Learning via Abstract Representations

The modularity brought by this approach is shown to lead to good generalization while being computationally efficient, with planning happening in a smaller latent state space; this opens up new strategies for interpretable AI, exploration, and transfer learning.

Recurrent Independent Mechanisms

Recurrent Independent Mechanisms (RIMs) are proposed: a new recurrent architecture in which multiple groups of recurrent cells operate with nearly independent transition dynamics, communicate only sparingly through the bottleneck of attention, and are updated only at the time steps where they are most relevant. A toy sketch of that update rule follows below.
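
A toy sketch of the RIMs-style sparse update, assuming PyTorch; the inter-mechanism communication step is omitted for brevity, and the mechanism count, sizes, and k are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class ToyRIMs(nn.Module):
    def __init__(self, n_mech=6, hid=16, in_dim=8, k=2):
        super().__init__()
        self.k = k
        self.cells = nn.ModuleList([nn.GRUCell(in_dim, hid) for _ in range(n_mech)])
        self.query = nn.Linear(hid, in_dim)  # each mechanism scores the input

    def forward(self, x, hs):
        # x: (batch, in_dim) input; hs: list of (batch, hid) hidden states
        scores = torch.stack([(self.query(h) * x).sum(-1) for h in hs], dim=-1)
        topk = scores.topk(self.k, dim=-1).indices             # winners per example
        new_hs = []
        for i, (cell, h) in enumerate(zip(self.cells, hs)):
            active = (topk == i).any(-1, keepdim=True).float() # was mechanism i picked?
            new_hs.append(active * cell(x, h) + (1 - active) * h)
        return new_hs

rims = ToyRIMs()
hs = [torch.zeros(4, 16) for _ in range(6)]
hs = rims(torch.randn(4, 8), hs)  # only the top-k mechanisms update their state
```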

Relational recurrent neural networks

A new memory module, the Relational Memory Core (RMC), employs multi-head dot product attention to allow memories to interact with each other, and achieves state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets.
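
The RMC's core operation is standard multi-head dot-product attention in which memory slots attend over themselves concatenated with the new input. A minimal sketch, assuming PyTorch and illustrative sizes (the gating and per-slot MLPs of the full module are omitted):

```python
import torch
import torch.nn as nn

dim, n_heads, mem_slots = 32, 2, 4
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=n_heads, batch_first=True)

memory = torch.randn(1, mem_slots, dim)   # (batch, slots, dim) memory matrix
x = torch.randn(1, 1, dim)                # newly arriving input token
kv = torch.cat([memory, x], dim=1)        # keys/values: memory plus input
new_memory, _ = attn(memory, kv, kv)      # queries: the memory slots themselves
```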

Unsupervised State Representation Learning in Atari

This work introduces a method that learns state representations by maximizing mutual information across spatially and temporally distinct features of a neural encoder of the observations, along with a new benchmark based on Atari 2600 games that evaluates representations by how well they capture the ground-truth state variables.
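
The mutual-information maximization is typically realised with an InfoNCE-style contrastive bound. A generic sketch of such a bound between features of temporally adjacent frames; this is a simplification for illustration, not the exact ST-DIM objective:

```python
import torch
import torch.nn.functional as F

def infonce_loss(z_t, z_tp1, temperature=0.1):
    """Each frame's feature should match its own successor within the batch."""
    logits = z_t @ z_tp1.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(z_t.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```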

On the Binding Problem in Artificial Neural Networks

This paper proposes a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs (segregation), maintaining this separation of information at a representational level (representation), and using these entities to construct new inferences, predictions, and behaviors (composition).

Neural Algorithmic Reasoning

The Power and Limits of Deep Learning

Yann LeCun, Research-Technology Management, 2018

Artificial intelligence (AI) is advancing very rapidly. I’ve had a front-row seat for a lot of the recent progress—first at Bell Labs (which was renamed AT&T Labs in 1996, while I was there) and th...

Neural Production Systems

This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art GNN-based methods, and allows extrapolation from simple (few-object) environments to more complex ones.

Contrastive Learning of Structured World Models

Experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
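
C-SWM trains its transition model contrastively rather than by pixel reconstruction. A sketch of a loss in that spirit, assuming squared-Euclidean energies and in-batch negatives; the margin and negative-sampling scheme here are assumptions, not the paper's exact recipe:

```python
import torch

def cswm_style_loss(z_t, delta_z, z_tp1, gamma=1.0):
    # z_t + delta_z: the transition model's predicted next latent state
    pos = ((z_t + delta_z - z_tp1) ** 2).sum(-1)      # pull prediction toward truth
    z_neg = z_tp1[torch.randperm(z_tp1.size(0))]      # in-batch negative states
    neg = torch.clamp(gamma - ((z_neg - z_tp1) ** 2).sum(-1), min=0.0)
    return (pos + neg).mean()                         # hinge pushes negatives away
```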

Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions

This work presents a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion; it incorporates prior knowledge about the compositional nature of human perception to factor interactions between object pairs and learn efficiently.
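
Factoring interactions between object pairs usually means applying one shared pairwise network to every ordered pair of object latents and summing the incoming effects. A small sketch of that pattern, assuming PyTorch; the MLP and sum aggregation are illustrative, not R-NEM's exact interaction function:

```python
import torch
import torch.nn as nn

dim = 8
pairwise = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def interact(objs):
    # objs: (n_objects, dim) per-object latents
    n = objs.size(0)
    send = objs.unsqueeze(1).expand(n, n, dim)        # sender i in pair (i, j)
    recv = objs.unsqueeze(0).expand(n, n, dim)        # receiver j in pair (i, j)
    effects = pairwise(torch.cat([send, recv], -1))   # learned effect of i on j
    mask = 1.0 - torch.eye(n).unsqueeze(-1)           # zero out self-interactions
    return (effects * mask).sum(0)                    # sum incoming effects per object
```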
...