A Neural Probabilistic Language Model
TLDR
This work proposes to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences.
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
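The training principle can be sketched as follows: corrupt the input (here with masking noise), then train an autoencoder to reconstruct the *clean* input from the corrupted version. This is a minimal NumPy sketch, not the paper's implementation; the tied-weight architecture, step size, and corruption level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p=0.3):
    """Masking noise: randomly zero a fraction p of input entries."""
    mask = rng.random(x.shape) >= p
    return x * mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_step(x, W, b, c, lr=0.1):
    """One gradient step of a tiny tied-weight denoising autoencoder.

    W, b, c are illustrative parameters (encoder weights, encoder bias,
    decoder bias), updated in place by plain gradient descent.
    """
    x_tilde = corrupt(x)                  # corrupted input
    h = sigmoid(x_tilde @ W + b)          # encoder
    x_hat = sigmoid(h @ W.T + c)          # decoder (tied weights)
    err = x_hat - x                       # reconstruct the CLEAN input
    loss = 0.5 * np.mean(err ** 2)
    # Backprop through decoder and encoder (sigmoid derivatives).
    d_xhat = err * x_hat * (1 - x_hat)
    d_h = (d_xhat @ W) * h * (1 - h)
    W -= lr * (x_tilde.T @ d_h + d_xhat.T @ h) / len(x)
    b -= lr * d_h.mean(axis=0)
    c -= lr * d_xhat.mean(axis=0)
    return loss
```

Because the target is the uncorrupted input, the encoder cannot simply copy its input and is pushed toward features that capture stable structure in the data.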
Theano: A Python framework for fast computation of mathematical expressions
TLDR
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models and recently-introduced functionalities and improvements are discussed.
Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription
TLDR
This work introduces a probabilistic model based on distribution estimators conditioned on a recurrent neural network; the model discovers temporal dependencies in high-dimensional sequences and outperforms many traditional models of polyphonic music on a variety of realistic datasets.
Why Does Unsupervised Pre-training Help Deep Learning?
TLDR
The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
A Variational Inequality Perspective on Generative Adversarial Nets
TLDR
This work proposes to extend techniques designed for variational inequalities to the training of GANs, applying averaging, extrapolation, and a computationally cheaper variant called extrapolation from the past to the stochastic gradient method (SGD) and Adam.
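Extrapolation from the past can be illustrated on a toy bilinear min-max game (min over x, max over y of x·y), where simultaneous gradient descent-ascent cycles or diverges. The sketch below is an assumption-laden illustration, not the paper's code: it reuses the gradient stored from the previous extrapolated point, so each iteration needs only one fresh gradient evaluation (versus two for plain extrapolation/extragradient).

```python
import numpy as np

def grad(x, y):
    """Descent field of the bilinear game min_x max_y x*y:
    x descends along y; y ascends along x (written as descent on -x)."""
    return np.array([y, -x])

def extrapolation_from_the_past(steps=200, lr=0.2):
    """Illustrative parameters; equilibrium of the toy game is (0, 0)."""
    w = np.array([1.0, 1.0])          # current iterate (x, y)
    g_past = grad(*w)                 # gradient stored from the past
    for _ in range(steps):
        w_ex = w - lr * g_past        # extrapolate using the STORED gradient
        g_past = grad(*w_ex)          # one fresh gradient per iteration
        w = w - lr * g_past           # update from the extrapolated point
    return w
```

On this game the iterates spiral in toward the equilibrium, whereas naive simultaneous updates spiral outward; the stored-gradient trick is what makes the method cheaper than standard extrapolation.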
Combining modality specific deep neural networks for emotion recognition in video
In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions
Audio Chord Recognition with Recurrent Neural Networks
TLDR
An efficient algorithm to search for the global mode of the output distribution while taking long-term dependencies into account is devised and the resulting method is competitive with state-of-the-art approaches on the MIREX dataset in the major/minor prediction task.
EmoNets: Multimodal deep learning approaches for emotion recognition in video
TLDR
This paper explores multiple methods for combining cues from several modalities into one common classifier, which achieves considerably greater accuracy than predictions from the strongest single-modality classifier.
RATM: Recurrent Attentive Tracking Model
TLDR
The proposed RATM performs well on all three tasks and can generalize to related but previously unseen sequences from a challenging tracking data set.