Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction

@inproceedings{Masci2011StackedCA,
  title={Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction},
  author={Jonathan Masci and Ueli Meier and Dan C. Ciresan and J{\"u}rgen Schmidhuber},
  booktitle={ICANN},
  year={2011}
}
We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. [...] Key result: initializing a CNN with the filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark.
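The CAE couples a convolutional encoder with a deconvolutional decoder trained to reconstruct its own input. As a minimal sketch only (channel counts, kernel size, activation, and the MNIST-sized batch are illustrative assumptions, not the paper's exact configuration), one such layer in PyTorch might look like:

  # Minimal single-layer convolutional auto-encoder (CAE) sketch.
  # All hyperparameters here are illustrative assumptions.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class CAE(nn.Module):
      def __init__(self, in_channels=1, hidden_channels=8):
          super().__init__()
          # Encoder: convolution produces translation-equivariant feature maps.
          self.conv = nn.Conv2d(in_channels, hidden_channels, kernel_size=5, padding=2)
          # Decoder: transposed convolution maps the codes back to image space.
          self.deconv = nn.ConvTranspose2d(hidden_channels, in_channels,
                                           kernel_size=5, padding=2)

      def encode(self, x):
          return torch.sigmoid(self.conv(x))

      def forward(self, x):
          return self.deconv(self.encode(x))

  cae = CAE()
  x = torch.rand(16, 1, 28, 28)    # a batch of MNIST-sized images
  loss = F.mse_loss(cae(x), x)     # reconstruction objective
  loss.backward()

Training several such layers greedily and stacking their encoders gives the filter bank that, per the key result above, initializes a CNN before supervised fine-tuning.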
Stacked Auto-Encoders for Feature Extraction with Neural Networks
TLDR
Several variants of stacked auto-encoders can serve as biologically plausible filters, extracting effective features that serve as the input to a neural network on a particular learning task.
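The stacking recipe behind these variants is greedy and layer-wise: train one auto-encoder on the data, freeze its encoder, and train the next auto-encoder on the codes it produces. A rough sketch under assumed sizes, with fully connected layers for brevity (the actual variants differ):

  # Greedy layer-wise pre-training sketch: each auto-encoder is trained on
  # the (frozen) codes of the previous one. Sizes and optimizer settings
  # are assumptions.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  def make_ae(d_in, d_hid):
      return nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_in)

  def train_layer(enc, dec, data, steps=100, lr=1e-3):
      opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
      for _ in range(steps):
          h = torch.sigmoid(enc(data))
          loss = F.mse_loss(dec(h), data)   # reconstruct this layer's input
          opt.zero_grad()
          loss.backward()
          opt.step()
      return enc

  x = torch.rand(256, 784)                  # flattened images (illustrative)
  enc1 = train_layer(*make_ae(784, 256), x)
  with torch.no_grad():
      codes = torch.sigmoid(enc1(x))        # frozen first-layer codes
  enc2 = train_layer(*make_ae(256, 64), codes)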
Pre-Training CNNs Using Convolutional Autoencoders
Despite convolutional neural networks being the state of the art in almost all computer vision tasks, their training remains difficult. Unsupervised representation learning using a [...]
Stacked Convolutional Autoencoder for Detecting Animal Images in Cluttered Scenes with a Novel Feature Extraction Framework
TLDR
Stacked convolutional autoencoders (SCAE) are introduced as an unsupervised stratified feature extractor for high-dimensional input images, together with a hybrid feature extraction technique based on Fisher Vectors and stacked autoencoders.
Stacked Convolutional Denoising Auto-Encoders for Feature Representation
TLDR
An unsupervised deep network, called the stacked convolutional denoising auto-encoders, is proposed; it maps images to hierarchical representations without any label information and demonstrates classification performance superior to state-of-the-art unsupervised networks.
Creation of a deep convolutional auto-encoder in Caffe
  • V. Turchenko, A. Luczak
  • Computer Science
  • 2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)
  • 2017
TLDR
The development of a deep (stacked) convolutional auto-encoder in the Caffe deep learning framework is presented, showing dimensionality-reduction accuracy comparable to a classic auto-encoder on the MNIST dataset.
Denoising auto-encoders toward robust unsupervised feature representation
TLDR
A robust deep neural network, named stacked convolutional denoising auto-encoders (SCDAE), is proposed; it maps raw images to hierarchical representations in an unsupervised manner and demonstrates performance superior to state-of-the-art unsupervised networks.
Unsupervised Learning by Predicting Noise
TLDR
This paper introduces a generic framework to train deep networks end-to-end with no supervision: it fixes a set of target representations, called Noise As Targets (NAT), and constrains the deep features to align to them.
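As summarized, NAT fixes a set of random unit-norm target vectors and learns features that align to them. The sketch below is a loose simplification: it matches each sample greedily to its nearest target, whereas the actual method enforces a one-to-one assignment with an optimization step; all sizes are assumptions.

  # Loose sketch of the Noise As Targets (NAT) objective. The greedy
  # nearest-target matching ignores the one-to-one assignment constraint
  # the actual method enforces; all sizes are illustrative.
  import torch
  import torch.nn.functional as F

  n, d = 256, 64
  targets = F.normalize(torch.randn(n, d), dim=1)  # fixed random "noise" targets

  raw = torch.randn(n, d, requires_grad=True)      # stand-in for network features
  features = F.normalize(raw, dim=1)

  with torch.no_grad():
      match = (features @ targets.T).argmax(dim=1)  # nearest target per sample

  # Maximize cosine alignment between features and their matched targets.
  loss = -(features * targets[match]).sum(dim=1).mean()
  loss.backward()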
A Deep Convolutional Auto-Encoder with Embedded Clustering
TLDR
A clustering approach embedded in a deep convolutional auto-encoder (DCAE) is proposed that, in contrast to conventional clustering approaches, simultaneously learns feature representations and cluster assignments.
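The embedded-clustering idea pairs the usual reconstruction loss with a term that pulls latent codes toward cluster centroids, so features and assignments are learned jointly. A hedged sketch of that general recipe (a k-means-style penalty with an assumed weight, and linear maps for brevity; not this paper's exact formulation):

  # Joint reconstruction + clustering objective. Centroid count, loss
  # weight, and architecture are assumptions.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  enc = nn.Linear(784, 32)
  dec = nn.Linear(32, 784)
  centroids = nn.Parameter(torch.randn(10, 32))   # learnable cluster centers

  x = torch.rand(128, 784)
  z = enc(x)
  recon_loss = F.mse_loss(dec(z), x)

  dists = torch.cdist(z, centroids)               # (batch, n_clusters)
  cluster_loss = dists.min(dim=1).values.pow(2).mean()

  loss = recon_loss + 0.1 * cluster_loss
  loss.backward()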
Convolutional adaptive denoising autoencoders for hierarchical feature extraction
TLDR
A hybrid deep learning CNN-AdapDAE model is proposed, which uses the features learned by the AdapDAE algorithm to initialize the CNN filters and then trains the improved CNN for classification tasks.
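The initialization pattern shared by this entry and the headline paper amounts to copying trained encoder filters into a CNN layer of matching shape before supervised fine-tuning. A minimal sketch (layer shapes and names are assumptions):

  # Copying a trained auto-encoder's encoder filters into a CNN's first
  # layer. Shapes must match; everything here is an illustrative assumption.
  import torch
  import torch.nn as nn

  ae_conv = nn.Conv2d(1, 8, kernel_size=5, padding=2)   # stands in for a trained encoder
  cnn_conv = nn.Conv2d(1, 8, kernel_size=5, padding=2)  # first layer of the CNN

  with torch.no_grad():
      cnn_conv.weight.copy_(ae_conv.weight)
      cnn_conv.bias.copy_(ae_conv.bias)
  # ...then fine-tune the whole CNN with labels as usual.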
Neural-Network-Based Feature Learning: Convolutional Neural Network
TLDR
This chapter first introduces the basic architecture of CNNs, then introduces transfer feature learning, which exploits similarities between data, tasks, or models to apply a model learned in one field to a learning problem in another.

References

Showing 1-10 of 33 references
Flexible, High Performance Convolutional Neural Networks for Image Classification
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a [...]
High-Performance Neural Networks for Visual Object Classification
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a [...]
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
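The denoising principle is compact in code: corrupt the input, but score the reconstruction against the clean original. A minimal sketch (masking noise at an assumed 30% rate; the paper considers other corruption processes too):

  # Denoising auto-encoder objective: reconstruct the clean input from a
  # corrupted copy. Corruption type/rate and layer sizes are assumptions.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  enc, dec = nn.Linear(784, 256), nn.Linear(256, 784)

  x = torch.rand(128, 784)
  mask = (torch.rand_like(x) > 0.3).float()   # zero out roughly 30% of the inputs
  corrupted = x * mask

  recon = dec(torch.sigmoid(enc(corrupted)))
  loss = F.mse_loss(recon, x)                 # target is the *clean* x
  loss.backward()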
Stacks of convolutional Restricted Boltzmann Machines for shift-invariant feature learning
TLDR
The convolutional RBM (C-RBM), a variant of the RBM model in which weights are shared to respect the spatial structure of images, is developed; it learns a set of features that can generate the images of a specific object class.
Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition
TLDR
An unsupervised method is presented for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions; it alleviates the over-parameterization problems that plague purely supervised learning procedures and yields good performance with very few labeled training samples.
Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition
TLDR
The aim is to gain insight into different pooling functions by directly comparing them on a fixed architecture for several common object recognition tasks; empirical results show that a maximum pooling operation significantly outperforms subsampling operations.
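In most frameworks the comparison reduces to a one-line swap between max pooling and average (subsampling-style) pooling over the same feature maps; shapes below are illustrative:

  # Max pooling vs. average ("subsampling") pooling on the same feature map.
  import torch
  import torch.nn.functional as F

  fmap = torch.randn(1, 8, 24, 24)           # illustrative feature maps
  maxp = F.max_pool2d(fmap, kernel_size=2)   # keeps the strongest activation
  avgp = F.avg_pool2d(fmap, kernel_size=2)   # smooths, akin to subsampling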
Deconvolutional networks
TLDR
This work presents a learning framework in which features that capture mid-level cues emerge spontaneously from image data; it is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised.
Convolutional Deep Belief Networks on CIFAR-10
We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels [...]
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
TLDR
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
TLDR
The convolutional deep belief network is presented: a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.