Denoising autoencoder with modulated lateral connections learns invariant representations of natural images
@article{Rasmus2015DenoisingAW,
  title={Denoising autoencoder with modulated lateral connections learns invariant representations of natural images},
  author={Antti Rasmus and Tapani Raiko and Harri Valpola},
  journal={CoRR},
  year={2015},
  volume={abs/1412.7210}
}
Suitable lateral connections between the encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations. In regular autoencoders, detailed information must be carried through the highest layers, but lateral connections from encoder to decoder relieve this pressure. It is shown that abstract invariant features can be translated into detailed reconstructions when the invariant features are allowed to modulate the strength of the lateral connections.
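The mechanism can be sketched in a few lines of code. Below is a minimal NumPy illustration, not the paper's exact parametrization: the function name `modulated_lateral`, the sigmoid gating form, and all shapes are assumptions made for the example, while the paper itself evaluates several concrete parametrizations of the lateral connection.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modulated_lateral(z_lateral, u, W_gate, b_gate):
    """Top-down invariant code u gates the strength of the lateral signal.

    Illustrative form only: the lateral detail z_lateral passes through
    in proportion to a sigmoid gate computed from u, so the higher layers
    need not carry the details themselves.
    """
    gate = sigmoid(u @ W_gate + b_gate)   # values in [0, 1], shaped like z_lateral
    return gate * z_lateral

# Toy shapes: 100-d hidden layer, 20-d invariant code.
rng = np.random.default_rng(0)
z = rng.normal(size=(1, 100))            # lateral (detail) signal from the encoder
u = rng.normal(size=(1, 20))             # invariant code from the layer above
W, b = rng.normal(size=(20, 100)) * 0.1, np.zeros(100)
z_hat = modulated_lateral(z, u, W, b)    # decoder's denoised estimate of z
```

The point of the gate is that fine detail can be recovered from the lateral signal, so the invariant code u only needs to decide where and how strongly that detail applies.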
19 Citations
Lateral Connections in Denoising Autoencoders Support Supervised Learning
- Computer Science, ArXiv
- 2015
The proposed model is trained by back-propagation to simultaneously minimize the sum of supervised and unsupervised cost functions, avoiding the need for layer-wise pretraining in the permutation-invariant MNIST classification task (a generic sketch of this combined cost appears after this list).
A study on the similarities of Deep Belief Networks and Stacked Autoencoders
- Computer Science
- 2015
Part of the thesis is dedicated to studying how the three deep networks under examination form their internal representations and how similar these are to each other; a novel approach for evaluating internal representations, named F-Mapping, is also presented.
Learning from minimally labeled data with accelerated convolutional neural networks
- Computer Science
- 2016
Algorithms that learn from unlabeled data are studied, and state-of-the-art results on common benchmarks are achieved.
Theta-RBM: Unfactored Gated Restricted Boltzmann Machine for Rotation-Invariant Representations
- Computer Science, ArXiv
- 2016
This paper proposes the Theta-Restricted Boltzmann Machine, which builds upon the original RBM formulation, injects the notion of rotation invariance into the learning procedure, and reaches an invariance score of ~90% on the MNIST-rot dataset.
BLAN: Bi-directional ladder attentive network for facial attribute prediction
- Computer Science, Pattern Recognit.
- 2020
Unsupervised Rotation Factorization in Restricted Boltzmann Machines
- Computer Science, IEEE Transactions on Image Processing
- 2020
This paper presents a novel extended RBM that learns rotation-invariant features by explicitly factorizing out the rotation nuisance in 2D image inputs within an unsupervised framework, and shows that the method outperforms current state-of-the-art RBM approaches on three different benchmark datasets.
Semi-Supervised Learning with Ladder Network
- Computer Science, ArXiv
- 2015
This work builds on the Ladder network proposed by Valpola (2015), extending it by combining the model with supervision, and shows that the resulting model reaches state-of-the-art performance in various tasks.
Semi-supervised Learning with Ladder Networks
- Computer Science, NIPS
- 2015
This work builds on the Ladder network proposed by Valpola, extending it by combining the model with supervision, and shows that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, as well as in permutation-invariant MNIST classification with all labels.
Ladder Networks: Learning under Massive Label Deficit
- Computer Science
- 2017
This work discusses how the ladder network model successfully combines supervised and unsupervised learning, taking it beyond the pretraining realm, and extends prior results by lowering the number of labels.
A semi-supervised convolutional neural network for hyperspectral image classification
- Computer Science, Environmental Science
- 2017
A novel semi-supervised convolutional neural network is proposed for hyperspectral image classification; it can automatically learn features from complex hyperspectral image data structures and simultaneously minimizes the sum of supervised and unsupervised cost functions.
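Several of the works above (the two Ladder-network papers and the semi-supervised CNN) share one training recipe: back-propagation on the sum of a supervised cost computed on labeled examples and an unsupervised denoising cost computed on all examples. A minimal PyTorch-style sketch follows, assuming a hypothetical model with `classify` and `reconstruct` heads; both names, the Gaussian corruption level, and the weight `lam` are placeholders, not the papers' exact formulations.

```python
import torch
import torch.nn.functional as F

def combined_loss(model, x_labeled, y, x_unlabeled, lam=1.0):
    """Sum of supervised and unsupervised (denoising) costs.

    model.classify and model.reconstruct are hypothetical heads:
    classify maps a corrupted input to class logits, and reconstruct
    maps it back to a denoised estimate of the clean input.
    """
    noise = lambda x: x + 0.3 * torch.randn_like(x)  # Gaussian corruption

    # Supervised cost, computed on the labeled batch only.
    sup = F.cross_entropy(model.classify(noise(x_labeled)), y)

    # Unsupervised denoising cost, computed on all inputs (no labels needed).
    x_all = torch.cat([x_labeled, x_unlabeled], dim=0)
    unsup = F.mse_loss(model.reconstruct(noise(x_all)), x_all)

    return sup + lam * unsup
```

Because the unsupervised term needs no labels, the unlabeled batch contributes gradient signal to the shared layers, which is what removes the need for layer-wise pretraining.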
References
Showing 1-10 of 25 references
Extracting and composing robust features with denoising autoencoders
- Computer Science, ICML '08
- 2008
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
Deep Learning Made Easier by Linear Transformations in Perceptrons
- Computer Science, AISTATS
- 2012
The usefulness of the transformations is confirmed: they make basic stochastic gradient learning competitive with state-of-the-art learning algorithms in speed, and they also seem to help find solutions that generalize better.
Generalized Denoising Auto-Encoders as Generative Models
- Computer Science, NIPS
- 2013
A different attack on the problem is proposed, which deals with arbitrary (but sufficiently noisy) corruption and arbitrary reconstruction loss, handles both discrete and continuous-valued variables, and removes the bias due to non-infinitesimal corruption noise.
Measuring Invariances in Deep Networks
- Computer Science, NIPS
- 2009
A number of empirical tests are proposed that directly measure the degree to which learned features are invariant to different input transformations; they find that stacked autoencoders learn modestly more invariant features with depth when trained on natural images, and that convolutional deep belief networks learn substantially more invariant features in each layer.
Gradient-based learning of higher-order image features
- Computer Science, 2011 International Conference on Computer Vision
- 2011
This work shows how the problem of learning higher-order features can be cast as learning a parametric family of manifolds, which allows a variant of a denoising autoencoder network to learn higher-order features using simple gradient-based optimization.
ImageNet classification with deep convolutional neural networks
- Computer Science, Commun. ACM
- 2012
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.
Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces
- Mathematics, Neural Computation
- 2000
It is shown that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells, by maximizing the independence between norms of projections on linear subspaces.
Learning Multiple Layers of Features from Tiny Images
- Computer Science
- 2009
It is shown how to train a multi-layer generative model that learns to extract meaningful features resembling those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected over a network.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images
- Computer Science, Nature
- 1996
It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.