Corpus ID: 10210500

The Manifold Tangent Classifier

@inproceedings{Rifai2011TheMT,
  title={The Manifold Tangent Classifier},
  author={Salah Rifai and Yann Dauphin and Pascal Vincent and Yoshua Bengio and Xavier Muller},
  booktitle={NIPS},
  year={2011}
}
We combine three important ideas present in previous work for building classifiers: the semi-supervised hypothesis (the input distribution contains information about the classifier), the unsupervised manifold hypothesis (data density concentrates near low-dimensional manifolds), and the manifold hypothesis for classification (different classes correspond to disjoint manifolds separated by low density). We exploit a novel algorithm for capturing manifold structure (high-order contractive auto… 
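
To make the mechanism above concrete, here is a minimal NumPy sketch (mine, not the authors' code) of its two ingredients: the contractive penalty on the Frobenius norm of the encoder's Jacobian, and the tangent plane recovered at a point from the leading singular vectors of that Jacobian, which plays the role of a local chart. The sigmoid encoder, layer sizes, and the chart dimension d_M are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 20, 10                        # assumed input / hidden sizes
W = rng.normal(scale=0.1, size=(d_h, d_in))
b = np.zeros(d_h)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encoder_jacobian(x):
    """Jacobian dh/dx of h(x) = sigmoid(W x + b), i.e. diag(h * (1 - h)) @ W."""
    h = sigmoid(W @ x + b)
    return (h * (1.0 - h))[:, None] * W   # shape (d_h, d_in)

def contractive_penalty(x):
    """Contractive regularizer: squared Frobenius norm of the encoder Jacobian."""
    return np.sum(encoder_jacobian(x) ** 2)

def tangent_basis(x, d_M=3):
    """Leading right singular vectors of J(x): an estimated basis of the
    manifold's tangent plane at x (the local 'chart' the classifier uses)."""
    _, _, Vt = np.linalg.svd(encoder_jacobian(x))
    return Vt[:d_M]                       # rows span the tangent plane

x = rng.normal(size=d_in)
print(contractive_penalty(x), tangent_basis(x).shape)
```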

Citations

Semi-supervised Learning Using an Unsupervised Atlas
TLDR
This work shows how smooth classifiers can be learnt from existing descriptions of manifolds that characterise the manifold as a set of piecewise affine charts, or an atlas, and proposes novel manifold-based kernels for semi-supervised and supervised learning.
Estimating a Manifold from a Tangent Bundle Learner
TLDR
This work focuses on the role that tangent bundle learners (TBLs) can play in estimating the underlying manifold from which data are assumed to be sampled, and formulates three methods that use the data assigned to each tangent space to estimate the underlying bounded subspaces for which the tangent space is a faithful estimate of the manifold.
Distance Learner: Incorporating Manifold Prior to Model Training
TLDR
This paper proposes a new method, Distance Learner, to incorporate the manifold hypothesis as a prior for DNN-based classifiers, and finds that it not only outperforms a standard classifier by a large margin but also performs on par with classifiers trained via state-of-the-art adversarial training.
Data-based Manifold Reconstruction via Tangent Bundle Manifold Learning
TLDR
A new geometrically motivated method for the TBML problem is presented, in which the manifold, its tangent spaces, and a low-dimensional representation are accurately reconstructed from a sample.
Tangent Bundle Manifold Learning via Grassmann&Stiefel Eigenmaps
TLDR
This work proposes an extension of manifold learning (ML), called Tangent Bundle ML, in which proximity is required not only between the original manifold and its estimator but also between their tangent spaces.
DMRAE: discriminative manifold regularized auto-encoder for sparse and robust feature learning
TLDR
A combination of triplet-loss manifold regularization and a novel denoising regularizer is injected into the objective function to generate features that are robust against perturbations perpendicular to the data manifold yet sensitive to variation along it.
The Local Dimension of Deep Manifold
TLDR
This work proposes a singular value decomposition (SVD) based approach to estimate the dimension of the deep manifolds for a typical convolutional neural network VGG19, and provides new insights for the intrinsic structure of deep neural networks.
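
The TLDR above describes an SVD-based estimate of local manifold dimension. A hedged sketch of that generic recipe (the variance threshold and toy data are my illustrative choices, not the paper's): center a local cloud of points, take its singular values, and count how many are needed to explain most of the variance.

```python
import numpy as np

def local_dimension(points, energy=0.95):
    """Estimate the intrinsic dimension of a local point cloud via SVD."""
    X = points - points.mean(axis=0)          # center the neighborhood
    s = np.linalg.svd(X, compute_uv=False)    # singular values, descending
    var = s ** 2 / np.sum(s ** 2)             # variance fraction per direction
    return int(np.searchsorted(np.cumsum(var), energy) + 1)

rng = np.random.default_rng(1)
# points on a 2-D subspace embedded in 10-D, plus small noise
Z = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
print(local_dimension(Z + 0.01 * rng.normal(size=Z.shape)))  # expect ~2
```
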
The Riemannian Geometry of Deep Generative Models
TLDR
The Riemannian geometry of these generated manifolds is investigated and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.
Representation Learning: A Review and New Perspectives
TLDR
Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.

References

Showing 1-10 of 35 references
Non-Local Manifold Tangent Learning
We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local, and can be framed as kernel learning algorithms, will suffer from the curse of dimensionality.
Manifold Parzen Windows
TLDR
A new non-parametric kernel density estimation method is proposed that captures the local structure of an underlying manifold through the leading eigenvectors of regularized local covariance matrices, yielding classification rates similar to SVMs and much better than the Parzen classifier.
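
A minimal sketch of the Manifold Parzen idea described above, assuming nothing beyond the summary: each training point contributes a Gaussian whose covariance is a regularized local covariance, so the density is stretched along the manifold. For brevity this uses the full regularized covariance rather than the paper's low-rank leading-eigenvector form; k and sigma2 are illustrative choices.

```python
import numpy as np

def manifold_parzen_density(x, data, k=10, sigma2=1e-2):
    """Average of per-point Gaussians with regularized local covariances."""
    n, d = data.shape
    total = 0.0
    for mu in data:
        # local covariance from the k nearest neighbors of mu, plus sigma2 * I
        idx = np.argsort(np.sum((data - mu) ** 2, axis=1))[:k]
        C = np.cov(data[idx].T) + sigma2 * np.eye(d)
        diff = x - mu
        quad = diff @ np.linalg.solve(C, diff)
        total += np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * np.linalg.det(C))
    return total / n

rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, size=200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)      # data on a 1-D manifold
print(manifold_parzen_density(np.array([1.0, 0.0]), circle))  # on-manifold: high
print(manifold_parzen_density(np.array([0.0, 0.0]), circle))  # off-manifold: low
```
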
Algorithms for manifold learning
TLDR
The motivation, background, and algorithms proposed for manifold learning are discussed and Isomap, Locally Linear Embedding, Laplacian Eigenmaps, Semidefinite Embeddings, and a host of variants of these algorithms are examined.
Non-Local Manifold Parzen Windows
TLDR
This work presents a non-local non-parametric density estimator that builds upon previously proposed Gaussian mixture models with regularized covariance matrices to take into account the local shape of the manifold.
Higher Order Contractive Auto-Encoder
TLDR
A novel regularizer for training an auto-encoder for unsupervised feature extraction is proposed; it yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
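
As I read it, the higher-order regularizer penalizes curvature of the encoder by asking the Jacobian itself to change little under small input perturbations, a stochastic proxy for a Hessian norm: E_eps ||J(x) - J(x + eps)||_F^2. A sketch under that assumption, with the Jacobian passed in as a function and a toy tanh map for the demo:

```python
import numpy as np

def higher_order_penalty(x, jacobian_fn, noise=0.1, n_samples=4, rng=None):
    """Stochastic estimate of E_eps ||J(x) - J(x + eps)||_F^2."""
    rng = rng or np.random.default_rng()
    J = jacobian_fn(x)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.normal(scale=noise, size=x.shape)   # small corruption of x
        total += np.sum((J - jacobian_fn(x + eps)) ** 2)
    return total / n_samples

# demo on a toy map f(x) = tanh(A x), whose Jacobian is diag(1 - tanh^2) @ A
A = np.random.default_rng(3).normal(size=(5, 8))
jac = lambda x: (1.0 - np.tanh(A @ x) ** 2)[:, None] * A
print(higher_order_penalty(np.zeros(8), jac, rng=np.random.default_rng(4)))
```
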
Improved Local Coordinate Coding using Local Tangents
TLDR
This paper further develops the idea of integrating geometry into machine learning by extending the original LCC method to include local tangent directions, leading to better approximation of high-dimensional nonlinear functions when the underlying data manifold is locally relatively flat.
Sample Complexity of Testing the Manifold Hypothesis
TLDR
Given upper bounds on the dimension, volume, and curvature, it is shown that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is independent of the ambient dimension of the space in which data lie.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
TLDR
It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure
We show how to pretrain and fine-tune a multilayer neural network to learn a nonlinear transformation from the input space to a low-dimensional feature space in which K-nearest neighbour classification performs well.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
TLDR
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.