Publications
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
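To make the adversarial process concrete, the sketch below trains a tiny generator G and discriminator D against each other on a toy 1-D Gaussian. It is only an illustration of the framework described in the abstract, written in PyTorch with arbitrary layer sizes; it is not the paper's implementation.

```python
# Minimal sketch of the adversarial framework: a generator G and a
# discriminator D trained in alternation on a toy 1-D Gaussian.
# PyTorch and all sizes here are illustrative choices, not the paper's code.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # samples from the "data distribution"
    noise = torch.randn(64, 8)               # generator input noise
    fake = G(noise)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: update G so that D labels its samples as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After training, samples from G should be hard for D to tell apart from the real Gaussian, which is the intended equilibrium of the two-player game.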
Asynchronous Methods for Deep Reinforcement Learning
TLDR
A conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers and shows that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
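The underlying optimization idea, several parallel workers computing gradients and applying them to shared parameters without locking, can be sketched on a toy least-squares problem. This shows only the asynchronous-update pattern, not the paper's actor-critic algorithm; the objective, names, and sizes are illustrative.

```python
# Sketch of asynchronous gradient descent: several threads update shared
# parameters w without locks. Toy least-squares objective for self-containment.
import threading
import numpy as np

true_w = np.array([2.0, -1.0])
w = np.zeros(2)                        # shared parameters, updated by all workers

def worker(seed, steps=2000, lr=0.01):
    global w
    rng = np.random.default_rng(seed)  # each worker draws its own data
    for _ in range(steps):
        x = rng.normal(size=2)
        err = x @ w - x @ true_w       # gradient signal from one local sample
        w -= lr * err * x              # asynchronous (lock-free) update

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(w)                               # converges close to true_w
```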
Conditional Generative Adversarial Nets
TLDR
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
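The conditioning mechanism the summary describes, feeding the label y to both networks, amounts to concatenating y with the generator's noise input and with the discriminator's sample input. A small sketch (PyTorch, arbitrary sizes, not the authors' code):

```python
# Illustrative conditioning: a one-hot label y is concatenated with the
# generator's noise and with the discriminator's input.
import torch
import torch.nn as nn

n_noise, n_classes, n_data = 8, 10, 1
G = nn.Sequential(nn.Linear(n_noise + n_classes, 32), nn.ReLU(), nn.Linear(32, n_data))
D = nn.Sequential(nn.Linear(n_data + n_classes, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

y = nn.functional.one_hot(torch.randint(0, n_classes, (64,)), n_classes).float()
noise = torch.randn(64, n_noise)
fake = G(torch.cat([noise, y], dim=1))     # generator sees (z, y)
score = D(torch.cat([fake, y], dim=1))     # discriminator sees (x, y)
```

Training then proceeds exactly as in the unconditional case, with both players receiving y as an extra input.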
Maxout Networks
TLDR
A simple new model called maxout is defined, designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model averaging technique.
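For reference, a maxout unit outputs the maximum over k learned affine "pieces", so the non-linearity itself is learned rather than fixed. The module below is an illustrative PyTorch sketch, not the authors' original (Theano-era) implementation, and the sizes are arbitrary.

```python
# Illustrative maxout unit: each output takes the max over k affine pieces.
import torch
import torch.nn as nn

class Maxout(nn.Module):
    def __init__(self, d_in, d_out, k):
        super().__init__()
        self.k, self.d_out = k, d_out
        self.affine = nn.Linear(d_in, d_out * k)   # k linear pieces per output unit

    def forward(self, x):
        z = self.affine(x).view(-1, self.d_out, self.k)
        return z.max(dim=2).values                 # max over the k pieces

h = Maxout(d_in=784, d_out=240, k=5)(torch.randn(32, 784))   # -> shape (32, 240)
```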
Theano: A Python framework for fast computation of mathematical expressions
TLDR
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models, and recently introduced functionalities and improvements are discussed.
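As a reminder of what the framework does, the snippet below uses Theano's documented API to declare a symbolic expression, take its gradient symbolically, and compile the result into a callable function; the particular expression is just a toy.

```python
# Tiny example of the symbolic computation Theano compiles to fast native code.
import numpy
import theano
import theano.tensor as T

x = T.dvector('x')                       # symbolic input vector
w = theano.shared(numpy.ones(3), 'w')    # shared (stateful) parameter
loss = (T.dot(x, w) - 1.0) ** 2          # symbolic scalar expression
grad = T.grad(loss, w)                   # symbolic gradient w.r.t. w

f = theano.function([x], [loss, grad])   # compile expression graph
print(f(numpy.array([1.0, 2.0, 3.0])))
```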
Challenges in representation learning: A report on three machine learning contests
TLDR
The datasets created for these challenges are described, the results of the competitions are summarized, and some comments are provided on what kind of knowledge can be gained from machine learning competitions.
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
TLDR
It is found that training with the dropout algorithm is consistently best: dropout is best at adapting to the new task and at remembering the old task, and it has the best tradeoff curve between these two extremes.
Multi-Prediction Deep Boltzmann Machines
TLDR
The multi-prediction deep Boltzmann machine does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.
Combining modality specific deep neural networks for emotion recognition in video
In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions …
Pylearn2: a machine learning research library
TLDR
A brief history of the library, an overview of its basic philosophy, a summary of the library's architecture, and a description of how the Pylearn2 community functions socially are given.