Publications
A simple neural network module for relational reasoning
TLDR
This work shows how a deep learning architecture equipped with a Relation Network (RN) module can implicitly discover and learn to reason about entities and their relations.
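For concreteness, the RN composes pairwise relations as RN(O) = f_phi(sum over i,j of g_theta(o_i, o_j)). Below is a minimal NumPy sketch of that composition; the layer sizes and the random-weight MLPs standing in for the learned g_theta and f_phi are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP with ReLU hidden layers (stand-in for learned g/f)."""
    Ws = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.maximum(x @ W, 0.0)
        return x @ Ws[-1]
    return forward

d_obj, d_rel, d_out = 8, 16, 4
g_theta = mlp([2 * d_obj, 32, d_rel])   # scores one (o_i, o_j) pair
f_phi   = mlp([d_rel, 32, d_out])       # maps aggregated relations to output

def relation_network(objects):
    # objects: (n, d_obj) set of entity vectors
    n = objects.shape[0]
    pairs = [np.concatenate([objects[i], objects[j]])
             for i in range(n) for j in range(n)]
    relations = g_theta(np.stack(pairs))   # (n*n, d_rel), one row per pair
    return f_phi(relations.sum(axis=0))    # sum makes the output order-invariant

print(relation_network(rng.normal(size=(5, d_obj))).shape)  # (4,)
```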
Relational inductive biases, deep learning, and graph networks
TLDR
It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
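The paper's central computational unit is the graph network (GN) block, which updates edges, then nodes, then a global attribute. The sketch below paraphrases that three-step pass using sum aggregation and simple tanh-linear update functions; the shared feature size, the linear updates, and the variable names are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # shared feature size for edges, nodes, and the global attribute

# Stand-ins for the learned update functions phi_e, phi_v, phi_u (linear here).
W_e = rng.normal(0, 0.1, (4 * d, d))   # input: [edge, sender, receiver, global]
W_v = rng.normal(0, 0.1, (3 * d, d))   # input: [agg. edges, node, global]
W_u = rng.normal(0, 0.1, (3 * d, d))   # input: [agg. edges, agg. nodes, global]

def gn_block(V, E, senders, receivers, u):
    """One graph-network pass: update edges, then nodes, then the global."""
    # 1) per-edge update, conditioned on sender, receiver, and global
    E2 = np.tanh(np.concatenate(
        [E, V[senders], V[receivers], np.broadcast_to(u, (len(E), d))],
        axis=1) @ W_e)
    # 2) per-node update: sum incoming edges, then combine with node and global
    agg_e = np.zeros_like(V)
    np.add.at(agg_e, receivers, E2)
    V2 = np.tanh(np.concatenate(
        [agg_e, V, np.broadcast_to(u, (len(V), d))], axis=1) @ W_v)
    # 3) global update from aggregated edges and nodes
    u2 = np.tanh(np.concatenate([E2.sum(0), V2.sum(0), u]) @ W_u)
    return V2, E2, u2

V = rng.normal(size=(3, d))                  # 3 nodes
E = rng.normal(size=(2, d))                  # 2 directed edges
V2, E2, u2 = gn_block(V, E, np.array([0, 1]), np.array([1, 2]), np.zeros(d))
print(V2.shape, E2.shape, u2.shape)          # (3, 4) (2, 4) (4,)
```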
Measuring abstract reasoning in neural networks
TLDR
A dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test, are proposed, and ways to both measure and induce stronger abstract reasoning in neural networks are introduced.
Meta-Learning with Memory-Augmented Neural Networks
TLDR
The ability of a memory-augmented neural network to rapidly assimilate new data and leverage it to make accurate predictions after only a few samples is demonstrated.
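The network in question reads from an external memory by content: a controller emits a key, the key is compared to memory rows by cosine similarity, and a softmax over similarities yields a soft address. A minimal sketch of that read operation follows; the slot count, key strength beta, and the noisy-key demo are illustrative choices, and the paper's least-recently-used writing mechanism is omitted.

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Content-based read: cosine similarity -> softmax -> weighted sum.

    memory: (n_slots, d) external memory matrix
    key:    (d,) read key emitted by the controller
    beta:   key strength, sharpening the address distribution
    """
    sim = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)
    w /= w.sum()                      # soft address over memory slots
    return w @ memory                 # read vector

rng = np.random.default_rng(2)
M = rng.normal(size=(8, 6))           # 8 slots, 6-dim contents
k = M[3] + 0.05 * rng.normal(size=6)  # noisy key close to slot 3
r = content_read(M, k)
print(np.linalg.norm(r - M[3]))       # small: the read recovers slot 3
```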
Relational recurrent neural networks
TLDR
A new memory module, a Relational Memory Core (RMC), is used which employs multi-head dot product attention to allow memories to interact, achieving state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets.
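A rough sketch of the RMC's central step, multi-head dot product attention letting memory slots attend over one another plus the incoming input, is given below. The head count, the dimensions, the plain residual update, and the omission of the RMC's gating and MLP components are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_heads = 8, 2
dh = d // n_heads
Wq, Wk, Wv = (rng.normal(0, 0.3, (d, d)) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_over_memory(M, x):
    """One RMC-style update: each memory slot attends over all slots plus
    the new input, letting memories interact before being updated."""
    Mx = np.vstack([M, x[None, :]])           # keys/values include the input
    out = np.zeros_like(M)
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        Q = M @ Wq[:, sl]                     # (slots, dh)
        K = Mx @ Wk[:, sl]                    # (slots+1, dh)
        V = Mx @ Wv[:, sl]
        A = softmax(Q @ K.T / np.sqrt(dh))    # attention over slots + input
        out[:, sl] = A @ V
    return M + out                            # residual update of the memory

M = rng.normal(size=(4, d))                   # 4 memory slots
M = attend_over_memory(M, rng.normal(size=d))
print(M.shape)                                # (4, 8)
```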
Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
TLDR
Results are presented on scaling up biologically motivated deep learning models to datasets that require deep networks with appropriate architectures for good performance, and implementation details are provided to establish baselines for biologically motivated deep learning schemes going forward.
Hyperbolic Attention Networks
TLDR
This work introduces hyperbolic attention networks, which endow neural networks with enough capacity to match the complexity of data with hierarchical and power-law structure by re-expressing the ubiquitous mechanism of soft attention in terms of operations defined on the hyperboloid and Klein models.
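One ingredient can be sketched concretely: lift activations onto the hyperboloid model and derive attention weights from hyperbolic distance rather than Euclidean dot products. The sketch below covers only that distance-based matching; the paper additionally aggregates values with Einstein midpoints in the Klein model, and the inverse-temperature beta here is an assumed parameter.

```python
import numpy as np

def lift_to_hyperboloid(x):
    """Map R^d points onto the hyperboloid {z : <z,z>_L = -1, z0 > 0}."""
    x0 = np.sqrt(1.0 + (x ** 2).sum(axis=-1, keepdims=True))
    return np.concatenate([x0, x], axis=-1)

def lorentz_inner(a, b):
    """Minkowski inner product <a,b>_L = -a0*b0 + sum_i ai*bi."""
    return -a[..., 0] * b[..., 0] + (a[..., 1:] * b[..., 1:]).sum(-1)

def hyperbolic_attention_weights(queries, keys, beta=1.0):
    """Attention scores from hyperbolic distance: alpha ~ exp(-beta * d)."""
    q = lift_to_hyperboloid(queries)[:, None, :]   # (nq, 1, d+1)
    k = lift_to_hyperboloid(keys)[None, :, :]      # (1, nk, d+1)
    ip = np.clip(-lorentz_inner(q, k), 1.0, None)  # arccosh needs arg >= 1
    dist = np.arccosh(ip)                          # hyperbolic distance
    logits = -beta * dist
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
A = hyperbolic_attention_weights(rng.normal(size=(3, 5)),
                                 rng.normal(size=(6, 5)))
print(A.shape, A.sum(axis=1))                      # (3, 6), rows sum to 1
```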
Deep reinforcement learning with relational inductive biases
TLDR
The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases, which can offer advantages in efficiency, generalization, and interpretability.
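Schematically, the agent treats CNN feature-map columns, tagged with their coordinates, as entities and applies dot-product self-attention over them. The sketch below illustrates that entity extraction plus a single-head relational update; the paper uses multi-head attention and iterated blocks, and all sizes and weight initializations here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def feature_map_to_entities(fmap):
    """Flatten an (H, W, C) feature map into H*W entity vectors,
    each tagged with its normalized (x, y) coordinates."""
    H, W, C = fmap.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    coords = np.stack([xs, ys], axis=-1)
    return np.concatenate([fmap, coords], axis=-1).reshape(H * W, C + 2)

def relational_block(E, Wq, Wk, Wv):
    """Single-head dot-product self-attention over the entity set."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(logits - logits.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return E + A @ V                      # residual relational update

C = 6
fmap = rng.normal(size=(5, 5, C))         # stand-in CNN output
E = feature_map_to_entities(fmap)         # (25, C + 2) entity vectors
W = [rng.normal(0, 0.3, (C + 2, C + 2)) for _ in range(3)]
print(relational_block(E, *W).shape)      # (25, 8)
```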
Backpropagation and the brain
TLDR
It is argued that the key principles underlying backprop may indeed have a role in brain function, with feedback connections inducing neural activities whose differences can be used to locally approximate error signals and hence drive effective learning in deep networks in the brain.
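That hypothesis can be made concrete in a deliberately toy setting: run a "free" forward pass, let feedback connections nudge the hidden activity toward a lower-error "clamped" state, and observe that the activity difference carries the same signal backprop would deliver. In the sketch below the feedback weights are tied to W2.T and the clamped state is constructed directly, both strong simplifying assumptions relative to the biological schemes the paper surveys.

```python
import numpy as np

rng = np.random.default_rng(6)
W1 = rng.normal(0, 0.5, (4, 3))   # feedforward weights, layer 1
W2 = rng.normal(0, 0.5, (2, 4))   # feedforward weights, layer 2

x = rng.normal(size=3)
t = rng.normal(size=2)            # target output

# Free phase: ordinary forward pass.
h_free = np.tanh(W1 @ x)
y_free = W2 @ h_free
err = y_free - t                  # output error (not locally available)

# Clamped phase: feedback connections (tied to W2.T, an assumption) nudge
# the hidden activity a small step toward reducing the output error.
eta = 1e-4
h_clamped = h_free - eta * (W2.T @ err)

# The *difference* between the two activity states, a purely local
# quantity, recovers the error signal backprop would send to this layer.
delta_local = (h_free - h_clamped) / eta
delta_backprop = W2.T @ err
print(np.allclose(delta_local, delta_backprop))   # True
```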