Publications
A Fast Learning Algorithm for Deep Belief Nets
TLDR
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
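The core of the recipe is simple enough to sketch: train a restricted Boltzmann machine (RBM) on the data with contrastive divergence, freeze it, then train the next RBM on its hidden activations. Below is a minimal NumPy sketch of that greedy stacking; the layer sizes, learning rate, and omission of bias terms are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of greedy layer-wise pretraining (biases omitted).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    """Train one RBM with 1-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)                              # positive phase
        v1 = sigmoid((h0 > rng.random(h0.shape)) @ W.T)   # reconstruction
        h1 = sigmoid(v1 @ W)                              # negative phase
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)     # CD-1 update
    return W

# Greedy stacking: each RBM is trained on the hidden activations
# of the one below, one layer at a time.
X = rng.random((100, 784))           # stand-in for binary image data
weights, layer_input = [], X
for n_hidden in (256, 64):           # assumed layer sizes
    W = train_rbm(layer_input, n_hidden)
    weights.append(W)
    layer_input = sigmoid(layer_input @ W)
```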
Conditional Generative Adversarial Nets
TLDR
The conditional version of generative adversarial nets is introduced, constructed by simply feeding the conditioning data, y, to both the generator and the discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
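The conditioning mechanism is literally concatenation, which a few lines of PyTorch can illustrate; the layer sizes and one-hot label encoding below are assumptions for the sketch, not the paper's exact architecture.

```python
# The label y is one-hot encoded and concatenated to the inputs of
# both networks, so both learn distributions conditioned on y.
import torch
import torch.nn as nn

Z_DIM, N_CLASSES, X_DIM = 100, 10, 784

G = nn.Sequential(                        # generator: models p(x | z, y)
    nn.Linear(Z_DIM + N_CLASSES, 256), nn.ReLU(),
    nn.Linear(256, X_DIM), nn.Tanh(),
)
D = nn.Sequential(                        # discriminator: real/fake given y
    nn.Linear(X_DIM + N_CLASSES, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(32, Z_DIM)
y = torch.nn.functional.one_hot(torch.randint(0, N_CLASSES, (32,)),
                                N_CLASSES).float()
fake_x = G(torch.cat([z, y], dim=1))      # generator sees z and y
score = D(torch.cat([fake_x, y], dim=1))  # discriminator sees x and y
```

At training time the same y is paired with real images on the discriminator's positive examples, which is what ties generated samples to their labels.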
Meta-Learning with Latent Embedding Optimization
TLDR
This work shows that latent embedding optimization (LEO) achieves state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates that LEO captures uncertainty in the data and adapts more effectively by optimizing in a latent space.
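The key move, adapting a low-dimensional latent code rather than the high-dimensional classifier weights, can be sketched as follows; the linear encoder/decoder, fixed inner-loop step count, and feature dimensions are placeholder assumptions, and LEO's relation network, stochastic sampling, and outer meta-training loop are omitted.

```python
# Inner-loop adaptation happens on the latent code z; the classifier
# weights are always decoded from z, never updated directly.
import torch
import torch.nn.functional as F

FEAT, LATENT, N_WAY = 64, 16, 5

encoder = torch.nn.Linear(FEAT, LATENT)
decoder = torch.nn.Linear(LATENT, N_WAY * FEAT)   # latent -> classifier weights

def adapt(support_x, support_y, inner_lr=0.1):
    """Inner loop: take gradient steps on z, not on the weights."""
    z = encoder(support_x).mean(dim=0, keepdim=True)    # task embedding
    for _ in range(3):
        w = decoder(z).view(N_WAY, FEAT)                # decode weights
        loss = F.cross_entropy(support_x @ w.t(), support_y)
        (grad,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - inner_lr * grad                         # adapt in latent space
    return decoder(z).view(N_WAY, FEAT)

support_x = torch.randn(25, FEAT)                       # 5-way, 5-shot features
support_y = torch.arange(N_WAY).repeat_interleave(5)
w_adapted = adapt(support_x, support_y)
```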
Cross-Dimensional Weighting for Aggregated Deep Convolutional Features
TLDR
This work presents a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps that boost the effect of highly active spatial responses and at the same time regulate burstiness effects.
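In code, the pipeline amounts to computing a spatial weight map and per-channel weights from the feature tensor itself, then weighted sum-pooling into a global descriptor; the NumPy sketch below uses simplified normalizations in the spirit of the framework rather than the paper's exact formulas.

```python
# Cross-dimensional weighting over a conv feature map (C x H x W).
import numpy as np

def crow_pool(fmap, eps=1e-8):
    # Spatial weights: aggregate activation per location, normalized,
    # so highly active spatial responses are boosted.
    S = fmap.sum(axis=0)
    alpha = np.sqrt(S / (S.sum() + eps))
    # Channel weights: sparser channels get higher weight, which
    # regulates burstiness effects.
    nonzero = (fmap > 0).mean(axis=(1, 2))
    beta = np.log((nonzero.sum() + eps) / (nonzero + eps))
    # Weighted sum-pooling into one global descriptor, L2-normalized.
    desc = (fmap * alpha[None, :, :]).sum(axis=(1, 2)) * beta
    return desc / (np.linalg.norm(desc) + eps)

fmap = np.maximum(np.random.randn(512, 14, 14), 0)  # stand-in ReLU features
descriptor = crow_pool(fmap)                        # 512-d image descriptor
```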
Population Based Training of Neural Networks
TLDR
Population Based Training is presented, a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance.
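The loop itself fits in a few lines of plain Python: train every member for a while, then have the worst performers copy a top performer's state (exploit) and perturb the copied hyperparameters (explore). The toy objective and truncation-selection details below are stand-in assumptions.

```python
# Toy Population Based Training loop with a fake training objective.
import random

def train_step(member):
    # Stand-in workload: pretend a learning rate near 0.01 works best.
    member["score"] = -abs(member["lr"] - 0.01)

population = [{"lr": random.uniform(1e-4, 1e-1), "score": None}
              for _ in range(10)]

for generation in range(20):
    for m in population:
        train_step(m)                        # members train (in parallel)
    population.sort(key=lambda m: m["score"], reverse=True)
    top, bottom = population[:2], population[-2:]
    for loser in bottom:
        winner = random.choice(top)
        loser["lr"] = winner["lr"]           # exploit: copy hyperparameters
                                             # (real PBT copies weights too)
        loser["lr"] *= random.choice([0.8, 1.2])  # explore: perturb

best = max(population, key=lambda m: m["score"])
```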
FeUdal Networks for Hierarchical Reinforcement Learning
We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton.
Recursive Recurrent Nets with Attention Modeling for OCR in the Wild
TLDR
This work presents recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images and validates the method with state-of-the-art performance on challenging benchmark datasets.
Decoupled Neural Interfaces using Synthetic Gradients
TLDR
It is demonstrated that in addition to predicting gradients, the same framework can be used to predict inputs, resulting in models which are decoupled in both the forward and backward pass, amounting to independent networks which co-learn such that they can be composed into a single functioning corporation.
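One decoupled update can be sketched in PyTorch: a small module M predicts the gradient at a layer's output so the layer can update immediately, and M is then regressed onto the true gradient once it arrives. The module sizes and single-step schedule are assumptions of the sketch.

```python
# Synthetic-gradient sketch: the lower layer updates from a *predicted*
# gradient; the predictor M is trained against the true gradient.
import torch
import torch.nn as nn

layer = nn.Linear(32, 64)         # lower module
head = nn.Linear(64, 10)          # rest of the network
M = nn.Linear(64, 64)             # synthetic gradient model

opt_layer = torch.optim.SGD(layer.parameters(), lr=0.1)
opt_head = torch.optim.SGD(head.parameters(), lr=0.1)
opt_M = torch.optim.SGD(M.parameters(), lr=0.1)

x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))

h = layer(x)
# Decoupled update: backprop the predicted gradient into `layer`
# without waiting for the rest of the network.
h.backward(M(h.detach()).detach())
opt_layer.step(); opt_layer.zero_grad()

# Meanwhile the true gradient is computed downstream...
h2 = h.detach().requires_grad_()
loss = nn.functional.cross_entropy(head(h2), y)
loss.backward()
opt_head.step(); opt_head.zero_grad()

# ...and M is regressed onto it.
sg_loss = nn.functional.mse_loss(M(h.detach()), h2.grad.detach())
sg_loss.backward()
opt_M.step(); opt_M.zero_grad()
```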
Energy-Based Models for Sparse Overcomplete Representations
TLDR
A new way of extending independent component analysis (ICA) to overcomplete representations is proposed: features are defined as deterministic (linear) functions of the inputs, and energies are assigned to the features through the Boltzmann distribution.
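The model class is compact enough to state in code: features are linear in the input, the feature matrix can have more rows than the input has dimensions (overcomplete), and the Boltzmann distribution converts summed feature energies into an unnormalized density. The Laplacian-style energy below is an illustrative choice, not necessarily the paper's.

```python
# Energy-based overcomplete features with a Boltzmann density.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((40, 16))   # 40 features over 16-d inputs: overcomplete

def energy(x, W):
    s = W @ x                       # deterministic (linear) features
    return np.sum(np.abs(s))        # sparsity-inducing per-feature energy

def unnormalized_density(x, W):
    return np.exp(-energy(x, W))    # Boltzmann: p(x) proportional to exp(-E(x))

x = rng.standard_normal(16)
print(energy(x, W), unnormalized_density(x, W))
```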
An Alternative Infinite Mixture Of Gaussian Process Experts
TLDR
An infinite mixture model is proposed in which each component comprises a multivariate Gaussian distribution over an input space and a Gaussian process model over an output space, leading to a more powerful and consistent Bayesian specification of the effective 'gating network' for the different experts.
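A generative reading of the model can be sketched by sampling: component labels come from a Chinese restaurant process, and each component places a Gaussian over the input space (playing the role of the gating network) and a GP over its outputs. The RBF kernel and all hyperparameters below are illustrative assumptions, and posterior inference is omitted entirely.

```python
# Generative sketch: CRP assignments, per-component input Gaussian,
# per-component GP draw over outputs.
import numpy as np

rng = np.random.default_rng(0)

def crp_assignments(n, alpha=1.0):
    """Sample component labels from a Chinese restaurant process."""
    labels, counts = [0], [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):        # a new component is born
            counts.append(0)
        counts[k] += 1
        labels.append(k)
    return np.array(labels)

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

n = 60
z = crp_assignments(n)
x, y = np.empty(n), np.empty(n)
for k in np.unique(z):
    idx = z == k
    mu = rng.normal(0, 3)                     # component's input Gaussian
    x[idx] = rng.normal(mu, 0.5, idx.sum())
    K = rbf(x[idx], x[idx]) + 1e-6 * np.eye(idx.sum())
    y[idx] = rng.multivariate_normal(np.zeros(idx.sum()), K)  # GP expert draw
```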