Corpus ID: 208139449

Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck

@article{Manakov2019WalkingTT,
  title={Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck},
  author={Ilja Manakov and Markus Rohm and Volker Tresp},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.07460}
}
In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties…
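For illustration, a minimal convolutional autoencoder sketch in PyTorch is shown below; the layer widths, the 32x32 input size, and the bottleneck_channels parameter are assumptions chosen for brevity, not the architectures studied in the paper.

```python
# Minimal convolutional autoencoder sketch (illustrative only; not the
# architecture from the paper). The bottleneck shape is determined by the
# number of feature maps and their spatial size at the end of the encoder.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, bottleneck_channels: int = 16):
        super().__init__()
        # Encoder: 1x32x32 -> 32x16x16 -> 64x8x8 -> Cx4x4 (the bottleneck)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, bottleneck_channels, kernel_size=4, stride=2, padding=1),
        )
        # Decoder mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent code at the bottleneck
        return self.decoder(z), z

# Reconstruction objective (mean squared error) on a dummy batch.
model = ConvAutoencoder(bottleneck_channels=16)
x = torch.rand(8, 1, 32, 32)           # dummy batch of 32x32 grayscale images
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)
```

Varying bottleneck_channels (and, analogously, the spatial resolution at which the encoder stops downsampling) changes the bottleneck shape whose effect the paper investigates.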
Citations

Universal Face Recognition Using Multiple Deep Learning Agent and Lazy Learning Algorithm
Mainstream face recognition systems have a problem regarding the disparity of recognizing faces from different races and ethnic backgrounds. This problem is caused by the imbalances in the proportion…
Automatic Segregation of Pelagic Habitats
It remains difficult to segregate pelagic habitats, since structuring processes are dynamic on a wide range of scales and clear boundaries in the open ocean are non-existent. However, to improve our…

References

Showing 1-10 of 41 references
Why Regularized Auto-Encoders learn Sparse Representation?
This work exploits the observation that pre-activations before Rectified Linear Units follow a Gaussian distribution in deep networks, and that once the first- and second-order statistics of a given dataset are normalized, this normalization can be forward-propagated without recalculating the approximate statistics for hidden layers.
Autoencoders, Unsupervised Learning, and Deep Architectures
  • P. Baldi
  • Mathematics, Computer Science
  • ICML Unsupervised and Transfer Learning
  • 2012
The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.
Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
This paper proposes a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.
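As a rough illustration of that regularizer, the sketch below pairs an autoencoder with a critic that predicts the mixing coefficient; the latent and critic shapes, the coefficient range, and the loss weighting are assumptions, and several terms of the published method are omitted.

```python
# Sketch of an adversarially constrained interpolation loss (illustrative:
# networks, coefficient range, and weighting are assumptions).
import torch

def interpolation_losses(encoder, decoder, critic, x1, x2, lam=0.5):
    """encoder/decoder/critic are assumed nn.Module instances; latent codes
    are assumed to be 4-D feature maps (batch, C, H, W) and the critic maps
    an image batch to a (batch, 1) prediction of the mixing coefficient."""
    alpha = 0.5 * torch.rand(x1.size(0), 1, 1, 1, device=x1.device)  # mix in [0, 0.5]
    z1, z2 = encoder(x1), encoder(x2)
    x_mix = decoder(alpha * z1 + (1 - alpha) * z2)       # decoded interpolation

    # The critic is trained to recover alpha from the interpolated output.
    critic_loss = ((critic(x_mix.detach()) - alpha.view(-1, 1)) ** 2).mean()

    # The autoencoder reconstructs both inputs and tries to fool the critic
    # into predicting alpha = 0, i.e. "not an interpolation".
    recon_loss = ((decoder(z1) - x1) ** 2).mean() + ((decoder(z2) - x2) ** 2).mean()
    ae_loss = recon_loss + lam * (critic(x_mix) ** 2).mean()
    return ae_loss, critic_loss
```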
Generalized Denoising Auto-Encoders as Generative Models
A different attack on the problem is proposed, which deals with arbitrary (but noisy enough) corruption, arbitrary reconstruction loss, handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise.
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.
Deep Image Prior
TLDR
It is shown that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Expand
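A hedged sketch of that idea for denoising a single image follows; the network, the random input code, the learning rate, and the iteration count are illustrative assumptions rather than the published configuration.

```python
# Deep-image-prior style denoising sketch (illustrative assumptions: the
# network, iteration count, and learning rate are not from the cited paper).
import torch
import torch.nn as nn

net = nn.Sequential(                       # small randomly initialized CNN
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
noisy = torch.rand(1, 3, 64, 64)           # the single noisy target image
z = torch.randn(1, 32, 64, 64)             # fixed random input code
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                    # stop early: the prior fits signal before noise
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()

denoised = net(z).detach()                 # the network output serves as the denoised image
```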
Extracting and composing robust features with denoising autoencoders
This work introduces and motivates a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern.
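A minimal sketch of that training principle, assuming masking noise as the corruption and mean squared error as the reconstruction loss (both are illustrative choices, not the cited paper's exact setup):

```python
# Denoising autoencoder training step sketch (corruption type, rate, and
# model are illustrative assumptions).
import torch
import torch.nn as nn

def dae_step(model, optimizer, x, corruption_rate=0.3):
    """One step: reconstruct the clean x from a partially destroyed copy."""
    mask = (torch.rand_like(x) > corruption_rate).float()
    x_corrupted = x * mask                        # randomly zero out a fraction of inputs
    x_hat = model(x_corrupted)
    loss = nn.functional.mse_loss(x_hat, x)       # the target is the *uncorrupted* input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```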
Autoencoders Learn Generative Linear Models
The analysis can be viewed as theoretical evidence that shallow autoencoder modules indeed can be used as feature learning mechanisms for a variety of data models, and may shed insight on how to train larger stacked architectures with autoencoders as basic building blocks.
What regularized auto-encoders learn from the data-generating distribution
It is shown that the auto-encoder captures the score (derivative of the log-density with respect to the input) and contradicts previous interpretations of reconstruction error as an energy function.
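Stated compactly (notation paraphrased; sigma squared denotes the corruption noise level used during training):

```latex
\[
  r(x) - x \;\approx\; \sigma^{2}\, \frac{\partial \log p(x)}{\partial x}
\]
% The reconstruction residual of the optimal auto-encoder r(x) points in the
% direction of increasing log-density, i.e. it estimates the score of p(x).
```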
The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training
The experiments confirm and clarify the advantage of unsupervised pre-training, and empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.