A Manifold Learning Perspective on Representation Learning: Learning Decoder and Representations without an Encoder

@article{Schuster2021AML,
  title={A Manifold Learning Perspective on Representation Learning: Learning Decoder and Representations without an Encoder},
  author={Viktoria Schuster and Anders Krogh},
  journal={Entropy},
  year={2021},
  volume={23}
}
Autoencoders are commonly used in representation learning. They consist of an encoder and a decoder, which provide a straightforward method to map n-dimensional data in input space to a lower m-dimensional representation space and back. The decoder itself defines an m-dimensional manifold in input space. Inspired by manifold learning, we showed that the decoder can be trained on its own by learning the representations of the training samples along with the decoder weights using gradient descent… 
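As a rough illustration of that idea, the following PyTorch sketch treats the representation of every training sample as a free parameter and optimizes it jointly with the decoder weights by gradient descent. The sizes, architecture, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

# Illustrative sizes: n-dimensional inputs, m-dimensional representations.
n, m, num_samples = 784, 2, 60000

decoder = nn.Sequential(            # defines an m-dimensional manifold in input space
    nn.Linear(m, 256), nn.ReLU(),
    nn.Linear(256, n),
)

# One free m-dimensional representation per training sample; there is no encoder.
Z = nn.Parameter(0.01 * torch.randn(num_samples, m))

optimizer = torch.optim.Adam(list(decoder.parameters()) + [Z], lr=1e-3)

def train_step(x, idx):
    """Jointly update the decoder weights and the representations Z[idx]
    of the samples x in the current batch."""
    optimizer.zero_grad()
    recon = decoder(Z[idx])                  # decode the current representations
    loss = ((recon - x) ** 2).mean()         # reconstruction error in input space
    loss.backward()                          # gradients reach both the weights and Z
    optimizer.step()
    return loss.item()
```

Because there is no encoder, the representation of a new sample would have to be found the same way: by gradient descent on the reconstruction loss with the decoder weights held fixed.
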
1 Citation

The deep generative decoder: Using MAP estimates of representations
TLDR
This work argues that it is worthwhile to investigate a much simpler approximation that finds representations and their distribution by maximizing the model likelihood via back-propagation, which it calls the Deep Generative Decoder (DGD).
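Read together with the paper above, the MAP step can be sketched as gradient ascent on the unnormalized log posterior of a representation. The standard-normal prior, Gaussian likelihood, and all hyperparameters below are assumptions for illustration, not the published model.

```python
import torch

def map_representation(decoder, x, m=2, steps=200, lr=0.1):
    """Sketch of a MAP estimate of the representation z of a sample x:
    maximize log p(x | z) + log p(z) by back-propagation through the decoder.
    Gaussian likelihood and standard-normal prior are illustrative choices."""
    z = torch.zeros(x.shape[0], m, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = decoder(z)
        log_lik = -0.5 * ((recon - x) ** 2).sum()   # Gaussian log-likelihood (up to constants)
        log_prior = -0.5 * (z ** 2).sum()           # standard-normal prior on z
        (-(log_lik + log_prior)).backward()         # minimize the negative log-posterior
        opt.step()
    return z.detach()
```
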

References

Showing 1-10 of 26 references
Laplacian Auto-Encoders: An explicit learning of nonlinear data manifold
Representation Learning: A Review and New Perspectives
TLDR
Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
TLDR
It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
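For a single sigmoid encoder layer h = sigmoid(Wx + b), the contractive penalty is the squared Frobenius norm of the Jacobian of h with respect to x, which has the closed form sketched below. The shapes and the weighting of the penalty against the reconstruction loss are assumptions for illustration.

```python
import torch

def contractive_penalty(W, h):
    """Squared Frobenius norm of the Jacobian dh/dx for a sigmoid encoder
    layer h = sigmoid(W x + b), added to the reconstruction loss with some
    weight lambda. W has shape (hidden, input); h has shape (batch, hidden)."""
    # dh_j/dx_i = h_j * (1 - h_j) * W[j, i], so the squared norm factorizes:
    dh_sq = (h * (1 - h)) ** 2                 # (batch, hidden)
    w_sq = (W ** 2).sum(dim=1)                 # (hidden,)
    return (dh_sq * w_sq).sum(dim=1).mean()    # averaged over the batch
```
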
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
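A minimal sketch of that principle follows. Gaussian corruption is used here purely for illustration (the original paper corrupts inputs by zeroing a random fraction of components), and the encoder, decoder, and optimizer are assumed to be defined elsewhere.

```python
import torch

def denoising_step(encoder, decoder, x, optimizer, noise_std=0.3):
    """One denoising-autoencoder update: corrupt the input, then train the
    model to reconstruct the clean input from the corrupted version."""
    x_noisy = x + noise_std * torch.randn_like(x)   # partial corruption of the input
    optimizer.zero_grad()
    recon = decoder(encoder(x_noisy))
    loss = ((recon - x) ** 2).mean()                # target is the uncorrupted x
    loss.backward()
    optimizer.step()
    return loss.item()
```
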
Recent Advances in Autoencoder-Based Representation Learning
TLDR
An in-depth review of recent advances in representation learning, focusing on autoencoder-based models that make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features.
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
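The core of the update (per-parameter adaptive estimates of the first and second moments of the gradient, with bias correction) can be written in a few lines; the scalar form below is for illustration only, and library implementations vectorize it over all parameters.

```python
import math

def adam_update(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step for a single scalar parameter theta given gradient g
    at timestep t (t starts at 1), with moment estimates m and v."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```
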
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Alternating Back-Propagation for Generator Network
TLDR
It is shown that the alternating back-propagation algorithm can learn realistic generator models of natural images, video sequences, and sounds, and can also be used to learn from incomplete or indirect training data.
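A sketch of the alternation follows: latent factors are first inferred with the generator fixed, then the generator weights are updated with the latents fixed. Plain gradient steps stand in here for the Langevin dynamics used in the paper, and the shapes, step sizes, and Gaussian observation model are assumptions.

```python
import torch

def alternating_backprop_step(generator, x, z, optimizer,
                              infer_steps=10, infer_lr=0.1, sigma=1.0):
    """One round of alternating back-propagation: infer the latent z for each
    sample with the generator fixed, then update the generator with z fixed."""
    # Inferential back-propagation: gradient steps on the negative log posterior of z.
    z = z.detach().requires_grad_(True)
    for _ in range(infer_steps):
        loss_z = ((generator(z) - x) ** 2).sum() / (2 * sigma ** 2) + 0.5 * (z ** 2).sum()
        (grad_z,) = torch.autograd.grad(loss_z, z)
        z = (z - infer_lr * grad_z).detach().requires_grad_(True)
    # Learning back-propagation: ordinary gradient step on the generator weights.
    optimizer.zero_grad()
    loss_w = ((generator(z.detach()) - x) ** 2).mean()
    loss_w.backward()
    optimizer.step()
    return z.detach()
```
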
A Cost Function for Internal Representations
TLDR
A cost function for learning in feed-forward neural networks is introduced that is an explicit function of the internal representations in addition to the weights; learning can then be formulated as two simple perceptrons and a search for internal representations.