Corpus ID: 308212

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

@inproceedings{Coates2011AnAO,
  title={An Analysis of Single-Layer Networks in Unsupervised Feature Learning},
  author={Adam Coates and Andrew Y. Ng and Honglak Lee},
  booktitle={AISTATS},
  year={2011}
}
A great deal of research has focused on algorithms for learning features from unlabeled data. [...] Specifically, we will apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to the CIFAR, NORB, and STL datasets using only single-layer networks.
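
The pipeline the paper analyzes is simple enough to sketch end to end. Below is a minimal, illustrative NumPy version of the K-means variant: random patch extraction, per-patch normalization plus ZCA whitening, spherical K-means dictionary learning, and the paper's soft "triangle" encoding. The defaults loosely follow the paper's best configuration (6x6 patches, 1600 centroids), but the epsilon values, iteration counts, and function names are assumptions of this sketch, not the authors' code.

    import numpy as np

    def extract_patches(images, patch_size=6, n_patches=100000, rng=None):
        """Sample random patches from an image stack of shape (N, H, W, C)."""
        rng = rng or np.random.default_rng(0)
        N, H, W, C = images.shape
        out = np.empty((n_patches, patch_size * patch_size * C))
        for i in range(n_patches):
            n = rng.integers(N)
            r = rng.integers(H - patch_size + 1)
            c = rng.integers(W - patch_size + 1)
            out[i] = images[n, r:r + patch_size, c:c + patch_size, :].ravel()
        return out

    def whiten(patches, eps=0.1):
        """Per-patch brightness/contrast normalization, then ZCA whitening.
        The +10 variance regularizer suits 8-bit pixel intensities."""
        patches = patches - patches.mean(axis=1, keepdims=True)
        patches = patches / np.sqrt(patches.var(axis=1, keepdims=True) + 10)
        U, S, _ = np.linalg.svd(np.cov(patches, rowvar=False))
        zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
        return patches @ zca, zca

    def spherical_kmeans(X, k=1600, n_iter=10, rng=None):
        """K-means with unit-norm centroids and dot-product assignment."""
        rng = rng or np.random.default_rng(0)
        D = X[rng.choice(len(X), k, replace=False)].copy()
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
        for _ in range(n_iter):
            labels = (X @ D.T).argmax(axis=1)   # nearest centroid per patch
            for j in range(k):
                members = X[labels == j]
                if len(members):                # empty clusters keep old centroid
                    D[j] = members.sum(axis=0)
            D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
        return D

    def triangle_encode(X, D):
        """The paper's 'triangle' activation: f_k(x) = max(0, mu(z) - z_k),
        where z_k is the distance from x to centroid k and mu(z) is the mean
        of those distances, so roughly half the features come out zero."""
        dists = np.sqrt(((X[:, None, :] - D[None, :, :]) ** 2).sum(axis=2))
        return np.maximum(0.0, dists.mean(axis=1, keepdims=True) - dists)

In the paper, features computed this way over a dense grid of patches are sum-pooled over the four image quadrants and fed to a linear SVM; the headline finding is that with enough centroids, dense extraction, and whitening, this single-layer pipeline is competitive with much deeper models.

Citations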
C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning
TLDR: An alternative network architecture with a much smaller number of nodes but a much finer pooling size, hence emphasizing the local details of the object, is explored; it is also extended with multiple receptive field scales and multiple pooling sizes.
Convolutional Clustering for Unsupervised Learning
TLDR: This work proposes to train a deep convolutional network based on an enhanced version of the k-means clustering algorithm, which reduces the number of correlated parameters in the form of similar filters, and thus increases test categorization accuracy and outperforms other techniques that learn filters in an unsupervised manner.
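
The redundancy-reduction idea described here can be illustrated with a small, hedged sketch: after (or during) k-means, drop centroids that are near-duplicates of ones already kept. The greedy scheme and the max_corr threshold are illustrative choices, not the paper's exact procedure.

    import numpy as np

    def prune_correlated_filters(D, max_corr=0.9):
        """Greedily drop near-duplicate filters: a centroid is kept only if
        its normalized correlation with every previously kept centroid stays
        below max_corr. Threshold and ordering are illustrative choices."""
        keep = []
        unit = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-8)
        for i in range(len(D)):
            if all(abs(unit[i] @ unit[j]) < max_corr for j in keep):
                keep.append(i)
        return D[keep]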
Hierarchical Extreme Learning Machine for unsupervised representation learning
TLDR: Compared to traditional deep learning methods, the proposed trans-layer representation method with ELM-AE-based learning of local receptive filters has a much faster learning speed, and is validated in several typical experiments, such as digit recognition on MNIST and its variations and object recognition on Caltech 101.
Selecting Receptive Fields in Deep Networks
TLDR: This paper proposes a fast method to choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric, and produces results showing how this method allows even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on CIFAR and STL datasets.
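
As a rough illustration of that idea, the sketch below groups features whose responses on unlabeled data are most similar. Correlation of squared responses is used as the pairwise similarity metric, one choice in the spirit of the paper; the field count and size are arbitrary placeholders.

    import numpy as np

    def select_receptive_fields(A, n_fields=8, field_size=32, rng=None):
        """Group low-level features by response similarity.
        A: (n_examples, n_features) activations on unlabeled data.
        Returns index arrays, one per receptive field; higher-level features
        are then trained only on the features inside each group."""
        rng = rng or np.random.default_rng(0)
        S = np.corrcoef(A ** 2, rowvar=False)  # feature-by-feature similarity
        fields = []
        for _ in range(n_fields):
            seed = rng.integers(A.shape[1])    # random seed feature
            fields.append(np.argsort(-S[seed])[:field_size])
        return fields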
Unsupervised representation learning based on the deep multi-view ensemble learning
TLDR: This work proposes a novel deep multi-view ensemble model that restricts the number of connections between successive layers while enhancing discriminative power, using a data-driven approach to deal with feature learning problems.
A New Method of Multi-Scale Receptive Fields Learning
TLDR: This paper proposes a method to limit the number of features through multi-scale receptive field (MSRF) learning, which can choose the most effective receptive fields at multiple scales and thereby improve classification performance on the object recognition task.
Building high-level features using large scale unsupervised learning
TLDR: Contrary to what appears to be a widely held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.
Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
TLDR: This paper describes the different kinds of layers the authors trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge, and the particular one-layer learning algorithms that feed a simple linear classifier with a tiny number of labeled training samples.
A linear approach for sparse coding by a two-layer neural network
TLDR: The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.
Deep Learning using Support Vector Machines
TLDR: This paper proposes to train all layers of a deep network by backpropagating gradients through the top-level SVM, learning the features of all layers, and demonstrates a small but consistent advantage of replacing the softmax layer with a linear support vector machine.
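
A minimal sketch of the key idea: replace the softmax cross-entropy at the top of a network with a one-vs-rest squared hinge (L2-SVM) objective and backpropagate its gradient into the layers below. The margin value and batch averaging here are illustrative defaults.

    import numpy as np

    def l2_svm_loss_grad(scores, y, margin=1.0):
        """Multiclass squared hinge (L2-SVM) loss and its gradient w.r.t. the
        class scores, a drop-in replacement for softmax cross-entropy.
        scores: (n, n_classes); y: (n,) integer labels. One-vs-rest targets."""
        n, k = scores.shape
        t = -np.ones((n, k))
        t[np.arange(n), y] = 1.0               # +1 for true class, -1 otherwise
        slack = np.maximum(0.0, margin - t * scores)
        loss = np.sum(slack ** 2) / n
        grad = -2.0 * t * slack / n            # pass this down the network
        return loss, grad

Training then proceeds as usual, with grad backpropagated through the lower layers in place of the softmax gradient.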

References

Showing 1-10 of 40 references
Learning Multiple Layers of Features from Tiny Images
TLDR: It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Sparse Feature Learning for Deep Belief Networks
TLDR: This work proposes a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation, and describes a novel and efficient algorithm to learn sparse representations.
Measuring Invariances in Deep Networks
TLDR: A number of empirical tests are proposed that directly measure the degree to which learned features are invariant to different input transformations, finding that stacked autoencoders learn modestly more invariant features with increasing depth when trained on natural images, while convolutional deep belief networks learn substantially more invariant features in each layer.
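
A simplified, hypothetical sketch of that kind of firing-rate test is given below: a unit's invariance is taken to be the rate at which it keeps firing on transformed versions of its maximally activating stimuli, normalized by its global firing rate. The unit_fn and transform callables, the transformation sweep, and the 99th-percentile firing threshold are placeholders, not the paper's exact protocol.

    import numpy as np

    def invariance_score(unit_fn, stimuli, transform, n_steps=10, threshold=None):
        """unit_fn: maps a batch of inputs to one hidden unit's activations.
        transform(x, t): returns inputs x transformed by amount t.
        A unit is invariant if, for stimuli that fire it strongly, it keeps
        firing as the stimulus is transformed."""
        acts = unit_fn(stimuli)
        if threshold is None:
            threshold = np.percentile(acts, 99)   # fire on top 1% of inputs
        firing = stimuli[acts > threshold]        # maximally activating stimuli
        if len(firing) == 0:
            return 0.0
        local_rates = []
        for t in np.linspace(-1.0, 1.0, n_steps): # sweep the transformation
            local_rates.append(np.mean(unit_fn(transform(firing, t)) > threshold))
        global_rate = np.mean(acts > threshold)   # ~0.01 by construction
        return float(np.mean(local_rates) / global_rate)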
Convolutional Deep Belief Networks on CIFAR-10
We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels. [...]
Sparse deep belief net model for visual area V2
TLDR: An unsupervised learning model is presented that faithfully mimics certain properties of visual area V2; the encoding of these more complex "corner" features matches well with the results from Ito & Komatsu's study of biological V2 responses, suggesting that this sparse variant of deep belief networks holds promise for modeling higher-order features.
Extracting and composing robust features with denoising autoencoders
TLDR: This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
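
That principle translates directly into code: corrupt the input, then train the autoencoder to reconstruct the clean version. Below is a minimal tied-weight sketch with masking noise; the shapes, corruption rate, and learning rate are illustrative, and inputs are assumed to lie in [0, 1].

    import numpy as np

    def dae_step(x, W, b, b_prime, corrupt_p=0.3, lr=0.1, rng=None):
        """One gradient step of a tied-weight denoising autoencoder on batch x.
        x: (n, d_visible) in [0, 1]; W: (d_visible, d_hidden)."""
        rng = rng or np.random.default_rng(0)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        x_tilde = x * (rng.random(x.shape) > corrupt_p)  # masking corruption
        h = sigmoid(x_tilde @ W + b)                     # encode corrupted input
        x_hat = sigmoid(h @ W.T + b_prime)               # decode (tied weights)
        # cross-entropy reconstruction loss against the *clean* input
        loss = -np.mean(np.sum(x * np.log(x_hat + 1e-9)
                               + (1 - x) * np.log(1 - x_hat + 1e-9), axis=1))
        d_out = (x_hat - x) / len(x)                     # delta at the output
        d_h = (d_out @ W) * h * (1 - h)                  # delta at the hidden layer
        W -= lr * (x_tilde.T @ d_h + d_out.T @ h)        # tied-weight gradient
        b -= lr * d_h.sum(axis=0)
        b_prime -= lr * d_out.sum(axis=0)
        return loss

Reconstructing the clean input from a corrupted one is what forces the hidden units to capture stable structure rather than copy pixels.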
Efficient Learning of Sparse Representations with an Energy-Based Model
TLDR: A novel unsupervised method for learning sparse, overcomplete features, using a linear encoder and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector.
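
A heavily simplified, feed-forward caricature of that architecture can still show the moving parts: a linear encoder, a steep logistic standing in for the sparsifying non-linearity (pushing codes toward quasi-binary values), and a linear decoder trained by reconstruction. The beta steepness, learning rate, and the use of a plain logistic are assumptions of this sketch, not the paper's energy-based formulation.

    import numpy as np

    def sparse_code_step(x, We, Wd, lr=0.01, beta=5.0):
        """One illustrative update. x: (n, d); We: (d, k); Wd: (k, d)."""
        z_lin = x @ We                                # linear encoder
        z = 1.0 / (1.0 + np.exp(-beta * z_lin))       # sparsifying non-linearity
        x_hat = z @ Wd                                # linear decoder
        err = x_hat - x
        loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
        n = len(x)
        d_z = (err @ Wd.T) * beta * z * (1 - z) / n   # backprop through logistic
        Wd -= lr * (z.T @ err) / n
        We -= lr * (x.T @ d_z)
        return loss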
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
TLDR: The convolutional deep belief network is presented, a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.
Modeling pixel means and covariances using factorized third-order boltzmann machines
TLDR: This approach provides a probabilistic framework for the widely used simple-cell/complex-cell architecture; it produces very realistic samples of natural images and extracts features that yield state-of-the-art recognition accuracy on the challenging CIFAR-10 dataset.
A Fast Learning Algorithm for Deep Belief Nets
TLDR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.