Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs.
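The core training step described above — corrupt the input, then reconstruct the clean original — can be sketched in a few lines of NumPy. This is a minimal single-layer illustration, not the paper's implementation; the network sizes, masking-noise level, and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 binary vectors of dimension 20.
X = (rng.random((200, 20)) > 0.5).astype(float)
n, n_in, n_hid = X.shape[0], 20, 10

# One tied-weight denoising autoencoder layer.
W = rng.normal(scale=0.1, size=(n_in, n_hid))
b_h, b_v = np.zeros(n_hid), np.zeros(n_in)

lr, corruption = 0.5, 0.3
losses = []
for epoch in range(100):
    # Corrupt the input with masking noise, but reconstruct the CLEAN input.
    X_tilde = X * (rng.random(X.shape) > corruption)
    H = sigmoid(X_tilde @ W + b_h)    # encoder on corrupted input
    R = sigmoid(H @ W.T + b_v)        # decoder (tied weights)
    losses.append(0.5 * ((R - X) ** 2).sum() / n)
    # Backprop of the squared reconstruction error.
    d_v = (R - X) * R * (1 - R)
    d_h = (d_v @ W) * H * (1 - H)
    W -= lr * (d_v.T @ H + X_tilde.T @ d_h) / n
    b_v -= lr * d_v.mean(axis=0)
    b_h -= lr * d_h.mean(axis=0)
```

Stacking, in this scheme, means freezing a trained layer, encoding the data through it, and training the next denoising layer on those codes.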
Representation Learning: A Review and New Perspectives
This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
In this paper we present a novel approach to training deterministic auto-encoders.
Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering
This paper provides a unified framework for extending Locally Linear Embedding (LLE), Isomap, Laplacian Eigenmaps, Multi-Dimensional Scaling (for dimensionality reduction), and Spectral Clustering to out-of-sample points.
Visualizing Higher-Layer Features of a Deep Network
We show that, perhaps counter-intuitively, interpreting the features learned by individual units of a deep network is possible, that it is simple to accomplish, and that the results are consistent across various techniques.
Kernel Matching Pursuit
Matching Pursuit algorithms learn a function that is a weighted sum of basis functions, by sequentially appending functions to an initially empty basis, to approximate a target function in the least-squares sense.
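The greedy loop described above can be sketched directly: at each step, pick the dictionary column that best reduces the squared residual, take the least-squares step along it, and repeat. This is plain matching pursuit over a fixed dictionary, a simplified sketch rather than the kernel variant the paper develops (function and variable names are mine).

```python
import numpy as np

def matching_pursuit(D, y, n_steps):
    """Greedily approximate y as a weighted sum of columns of D.

    D : (n, m) dictionary whose columns are candidate basis functions
        evaluated at the n data points; y : (n,) target values.
    """
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    norms2 = (D ** 2).sum(axis=0)
    for _ in range(n_steps):
        # Pick the basis function most correlated with the current residual.
        corr = D.T @ residual
        j = np.argmax(np.abs(corr) / np.sqrt(norms2))
        w = corr[j] / norms2[j]        # optimal least-squares step along column j
        coeffs[j] += w
        residual -= w * D[:, j]
    return coeffs, residual
```

In the kernel setting, the columns of `D` would be kernel functions centered on training points, so the learned function has the same form as an SVM expansion.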
A Connection Between Score Matching and Denoising Autoencoders
  • Pascal Vincent
  • Mathematics, Computer Science
  • Neural Computation
  • 1 July 2011
We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data.
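In symbols, the denoising score matching objective at the heart of this equivalence can be written as follows; the notation (corruption noise level \(\sigma\), model score \(\psi\)) follows the standard presentation of this result rather than quoting the paper directly.

```latex
% Denoising score matching, with Gaussian corruption
% q_\sigma(\tilde{x} \mid x) = \mathcal{N}(\tilde{x};\, x, \sigma^2 I):
J_{\mathrm{DSM}}(\theta)
  = \mathbb{E}_{q_\sigma(\tilde{x}, x)}
    \left[ \tfrac{1}{2}
      \left\| \psi(\tilde{x}; \theta)
        - \frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}}
      \right\|^2
    \right],
\qquad
\frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}}
  = \frac{x - \tilde{x}}{\sigma^2}.
```

Because the corruption score \((x - \tilde{x})/\sigma^2\) points from the corrupted point back toward the clean one, minimizing this objective is the same as training a denoiser, which is how the autoencoder criterion and score matching meet.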
K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms
We give a possible geometrical intuition as to why K-Nearest Neighbor (KNN) algorithms often perform more poorly than SVMs on classification tasks, and propose modified KNN algorithms to overcome the perceived problem.
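One simple reading of the local-hyperplane variant: for each class, fit an affine subspace through the query's K nearest neighbors of that class, and predict the class whose subspace lies closest to the query. The sketch below implements that reading with a least-squares projection; the function name and the toy setup are illustrative, not from the paper.

```python
import numpy as np

def hknn_predict(X, y, query, K=3):
    """K-local hyperplane distance classifier (a sketch of the idea).

    For each class c, take the query's K nearest neighbors with label c,
    measure the distance from the query to the affine subspace they span,
    and predict the class whose local hyperplane is closest.
    """
    best_class, best_dist = None, np.inf
    for c in np.unique(y):
        Xc = X[y == c]
        d = np.linalg.norm(Xc - query, axis=1)
        N = Xc[np.argsort(d)[:K]]      # K nearest neighbors within class c
        mu = N.mean(axis=0)
        V = (N - mu).T                 # directions spanning the local hyperplane
        # Project (query - mu) onto span(V) via least squares.
        alpha, *_ = np.linalg.lstsq(V, query - mu, rcond=None)
        dist = np.linalg.norm(query - mu - V @ alpha)
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class
```

The intuition: the hyperplane locally "fills in" the class manifold between sparse training samples, which is the gap plain KNN leaves open relative to SVMs.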
Learning Eigenfunctions Links Spectral Embedding and Kernel PCA
We show a direct relation between spectral embedding methods and kernel principal components analysis and how both are special cases of a more general learning problem: learning the principal eigenfunctions of an operator defined from a kernel and the unknown data-generating density.
Generalized Denoising Auto-Encoders as Generative Models
We propose a probabilistic interpretation of regularized auto-encoders as models of the underlying data-generating distribution when the data are discrete.