Corpus ID: 3244218

Discriminative Unsupervised Feature Learning with Convolutional Neural Networks

@inproceedings{Dosovitskiy2014DiscriminativeUF,
  title={Discriminative Unsupervised Feature Learning with Convolutional Neural Networks},
  author={Alexey Dosovitskiy and Jost Tobias Springenberg and Martin A. Riedmiller and Thomas Brox},
  booktitle={NIPS},
  year={2014}
}
Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. [...] Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-…
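As a rough illustration of the surrogate-class idea summarized above, the following Python sketch samples random 'seed' patches from unlabeled images and expands each seed into its own class by applying random transformations. The specific transformation set here (shifts, flips, brightness/contrast changes) is a simplified assumption, not the paper's exact augmentation pipeline; a CNN would then be trained to predict the seed index of each transformed patch.

import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch):
    """Apply a random flip, small shift, and photometric change to one patch."""
    out = patch.copy()
    if rng.random() < 0.5:                       # random horizontal flip
        out = out[:, ::-1, :]
    dy, dx = rng.integers(-2, 3, size=2)         # random shift by up to 2 pixels
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    out = out * rng.uniform(0.7, 1.3) + rng.uniform(-0.1, 0.1)  # contrast/brightness
    return np.clip(out, 0.0, 1.0)

def make_surrogate_dataset(unlabeled_images, n_classes=8, n_copies=16, size=32):
    """Sample seed patches and expand each one into its own surrogate class."""
    patches, labels = [], []
    for label in range(n_classes):
        img = unlabeled_images[rng.integers(len(unlabeled_images))]
        y = rng.integers(0, img.shape[0] - size)
        x = rng.integers(0, img.shape[1] - size)
        seed = img[y:y + size, x:x + size]
        for _ in range(n_copies):
            patches.append(random_transform(seed))
            labels.append(label)                 # the seed's index is the class label
    return np.stack(patches), np.array(labels)

# toy usage with random 'unlabeled' images; the surrogate labels would feed a
# standard softmax classification loss
images = rng.random((10, 96, 96, 3))
patches, labels = make_surrogate_dataset(images)
print(patches.shape, labels.shape)               # (128, 32, 32, 3) (128,)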
Semi-supervised convolutional extreme learning machine
The combination of unsupervised feature learning with the ELM outperforms previous related models that feed different feature representations into an ELM, on the CIFAR-10 and Google Street View House Numbers datasets.
Selective unsupervised feature learning with Convolutional Neural Network (S-CNN)
Selective Convolutional Neural Network (S-CNN) is a simple and fast algorithm that introduces a new way to do unsupervised feature learning and provides discriminative features which generalize well.
Deep Clustering for Unsupervised Learning of Visual Features
This work presents DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features and outperforms the current state of the art by a significant margin on all the standard benchmarks.
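A minimal sketch of the alternation DeepCluster describes: cluster the current features with k-means to obtain pseudo-labels, then train the network to predict those pseudo-labels, and repeat. Here `encoder` and `train_classifier_epoch` are hypothetical stand-ins for the CNN feature extractor and its supervised training step.

import numpy as np
from sklearn.cluster import KMeans

def deep_cluster(images, encoder, train_classifier_epoch, n_clusters=10, n_rounds=5):
    for _ in range(n_rounds):
        feats = encoder(images)                           # (N, D) features from the current CNN
        pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        train_classifier_epoch(images, pseudo)            # ordinary supervised update on pseudo-labels
    return encoder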
Unsupervised Learning for Image Category Detection
We propose an unsupervised object category learning approach whose output representations also improve classification performance. The contribution is threefold: we integrate a 'Network in …
Recursive autoconvolution for unsupervised learning of convolutional neural networks
It is shown that the recursive autoconvolution operator, adopted from physics, boosts existing unsupervised methods by learning more discriminative filters; this work takes well-established convolutional neural networks and trains their filters layer-wise.
Convolutional Clustering for Unsupervised Learning
This work proposes to train a deep convolutional network based on an enhanced version of the k-means clustering algorithm that reduces the number of correlated parameters in the form of similar filters, and thus increases test categorization accuracy and outperforms other techniques that learn filters without supervision.
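A minimal sketch of learning first-layer filters with plain k-means on normalized image patches; the enhanced variant described above additionally prunes correlated, near-duplicate filters, which is not reproduced here. Centroids of the normalized patches act as the filter bank.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def kmeans_filters(images, n_filters=64, patch=5, n_patches=5000):
    """Learn a first-layer filter bank as k-means centroids of random patches."""
    h, w, c = images.shape[1:]
    crops = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y, x = rng.integers(0, h - patch), rng.integers(0, w - patch)
        crops.append(img[y:y + patch, x:x + patch].ravel())
    X = np.stack(crops)
    # per-patch normalization before clustering
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    km = KMeans(n_clusters=n_filters, n_init=4).fit(X)
    return km.cluster_centers_.reshape(n_filters, patch, patch, c)

filters = kmeans_filters(rng.random((100, 32, 32, 3)))
print(filters.shape)   # (64, 5, 5, 3)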
Unsupervised Learning by Predicting Noise
This paper introduces a generic framework to train deep networks end-to-end with no supervision: a set of fixed target representations, called Noise As Targets (NAT), is chosen, and the deep features are constrained to align to them.
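A minimal sketch of the Noise-As-Targets idea under simplifying assumptions: features of one batch are matched one-to-one to fixed random unit-norm targets (Hungarian matching on a single batch, for clarity), and an L2 regression loss pulls each feature toward its assigned target. In practice `feats` would come from a real encoder.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def nat_batch_targets(features, targets):
    """Assign each feature row to a distinct fixed target (maximizing alignment)."""
    cost = -features @ targets.T                 # negative similarity as assignment cost
    rows, cols = linear_sum_assignment(cost)
    return targets[cols]                         # one regression target per feature row

# toy usage: 16 'features' and 16 fixed random targets on the unit sphere
feats = rng.normal(size=(16, 8))
tgts = rng.normal(size=(16, 8))
tgts /= np.linalg.norm(tgts, axis=1, keepdims=True)
assigned = nat_batch_targets(feats, tgts)
l2_loss = np.mean(np.sum((feats - assigned) ** 2, axis=1))
print(l2_loss)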
Unsupervised Learning of Discriminative Attributes and Visual Representations
This work first trains a CNN coupled with unsupervised discriminative clustering, and then uses the cluster membership as a soft supervision to discover shared attributes from the clusters while maximizing their separability.
Autoconvolution for Unsupervised Feature Learning
It is shown that the recursive autoconvolution operator, adopted from physics, boosts existing unsupervised methods to learn more powerful filters that can be used to build a stronger classifier.
Spatial contrasting for deep unsupervised learning
A novel approach for unsupervised training of convolutional networks based on contrasting between spatial regions within images is presented; it can be employed within conventional neural networks and trained using standard techniques such as SGD and back-propagation, thus complementing supervised methods.
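A minimal sketch of a spatial-contrasting style objective, assuming features of two regions from the same image (anchor/positive) and one region from a different image (negative). The distance-softmax form below is a simplified reading of the idea, not necessarily the paper's exact loss.

import numpy as np

def spatial_contrast_loss(anchor, positive, negative):
    """anchor/positive: features of two regions of the same image;
    negative: feature of a region from a different image."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    # probability that the same-image patch is the closer one; minimize its negative log
    p = np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-d_neg))
    return -np.mean(np.log(p + 1e-12))

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 32, 16))           # toy anchor/positive/negative features
print(spatial_contrast_loss(a, p, n))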

References

Showing 1-10 of 38 references.
Descriptor Matching with Convolutional Neural Networks: a Comparison to SIFT
This paper compares features from various layers of convolutional neural nets to standard SIFT descriptors; surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching.
Learning Convolutional Feature Hierarchies for Visual Recognition
This work proposes an unsupervised method for learning multi-stage hierarchies of sparse convolutional features and trains an efficient feed-forward encoder that predicts quasi-sparse features from the input.
Selecting Receptive Fields in Deep Networks
This paper proposes a fast method to choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric, and shows that this allows even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on the CIFAR and STL datasets.
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance; in fact, when these parameters are pushed to their limits, a single layer of features achieves state-of-the-art performance on both CIFAR-10 and NORB.
Improving neural networks by preventing co-adaptation of feature detectors
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case.
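A minimal sketch of the masking idea described in the snippet above: each hidden unit ("feature detector") is dropped with probability 0.5 during training, so units cannot rely on specific co-adapted partners. The inverted scaling is a common convention added here for convenience, keeping the expected activation unchanged at test time.

import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Randomly zero units during training; pass through unchanged at test time."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

hidden = rng.normal(size=(4, 6))                  # toy batch of hidden activations
print(dropout(hidden))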
Extracting and composing robust features with denoising autoencoders
This work introduces and motivates a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern.
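A minimal sketch of the denoising training principle: corrupt the input (here with masking noise that zeroes random entries) and measure reconstruction error against the clean, uncorrupted input. `autoencoder` is a hypothetical stand-in for any encoder/decoder pair; only the corruption and objective are shown.

import numpy as np

rng = np.random.default_rng(0)

def mask_corrupt(x, p=0.3):
    """Set a random fraction p of the input entries to zero."""
    return x * (rng.random(x.shape) >= p)

def denoising_loss(autoencoder, x_clean):
    x_corrupt = mask_corrupt(x_clean)
    x_recon = autoencoder(x_corrupt)
    # reconstruction is compared against the *clean* input, not the corrupted one
    return np.mean((x_recon - x_clean) ** 2)

# toy usage with an identity 'autoencoder' just to exercise the objective
x = rng.random((8, 20))
print(denoising_loss(lambda z: z, x))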
Learning Invariant Representations with Local Transformations
This paper presents the transformation-invariant restricted Boltzmann machine, which compactly represents data by its weights and their transformations and achieves invariance of the feature representation via probabilistic max pooling.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, is released to enable vision researchers to conduct experimentation with deep representations across a range of visual concept learning paradigms.
Learning Multiple Layers of Features from Tiny Images
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.