Corpus ID: 27899067

A Pitfall of Unsupervised Pre-Training

@article{Alberti2017APO,
  title={A Pitfall of Unsupervised Pre-Training},
  author={Michele Alberti and Mathias Seuret and Rolf Ingold and Marcus Liwicki},
  journal={ArXiv},
  year={2017},
  volume={abs/1712.01655}
}
The point of this paper is to question typical assumptions in deep learning and suggest alternatives. A particular contribution is to prove that even if a Stacked Convolutional Auto-Encoder is good at reconstructing pictures, it is not necessarily good at discriminating their classes. When using Auto-Encoders, one intuitively assumes that features which are good for reconstruction will also lead to high classification accuracy. Indeed, it has become research practice and is a suggested strategy by…
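The pitfall is easy to reproduce in code: pre-train a convolutional auto-encoder on reconstruction alone, then freeze the encoder and fit only a linear classifier on its features. Below is a minimal PyTorch sketch; the architecture, input size, and class count are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Toy convolutional auto-encoder for 1x28x28 inputs; layer sizes are
# illustrative assumptions, not the paper's architecture.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)

def pretrain_step(x, opt):
    """One unsupervised step: minimize pixel-wise reconstruction error."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()
    return loss.item()

# After pre-training, freeze the encoder and train only a linear probe.
for p in encoder.parameters():
    p.requires_grad = False
probe = nn.Linear(16 * 7 * 7, 10)  # assumes 10 classes

def probe_logits(x):
    return probe(encoder(x).flatten(1))
```

If the reconstruction loss is low but the probe's accuracy stays near a weak baseline, the features are good for reconstruction yet poor for discrimination, which is exactly the mismatch the paper highlights.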

Citations

Pretraining Image Encoders without Reconstruction via Feature Prediction Loss

This work investigates three methods for calculating the loss in autoencoder-based pretraining of image encoders and proposes a feature prediction loss, which removes the need for the time-consuming step of decoding the entire image, hence the name “feature prediction loss”.
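A sketch of the idea: the embedding is trained to predict the feature map that a frozen network computes on the input, so no image is ever decoded. Using VGG16 as the frozen extractor is our assumption for illustration, and `encoder`/`predictor` are hypothetical modules, not names from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Frozen feature extractor; the first VGG16 blocks are an illustrative
# choice, not necessarily the network used in the paper.
feat_net = vgg16(weights="IMAGENET1K_V1").features[:9].eval()
for p in feat_net.parameters():
    p.requires_grad = False

def feature_prediction_loss(encoder, predictor, x):
    """Predict the extractor's feature map of x straight from the
    embedding, skipping the decode-to-image step entirely."""
    with torch.no_grad():
        target = feat_net(x)          # features of the original image
    pred = predictor(encoder(x))      # features predicted from the embedding
    return nn.functional.mse_loss(pred, target)
```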

Improving Image Autoencoder Embeddings with Perceptual Loss

The results show that the embeddings generated by autoencoders trained with perceptual loss enable more accurate predictions than those trained with element-wise loss and that, on the task of object positioning of a small-scale feature, perceptual loss can improve the results by a factor of 10.
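The contrast between the two losses fits in a few lines. A sketch, again assuming a frozen VGG16 supplies the perceptual features (an illustrative choice, not the paper's stated network):

```python
import torch.nn as nn
from torchvision.models import vgg16

feat_net = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in feat_net.parameters():
    p.requires_grad = False

def elementwise_loss(x, x_hat):
    # Standard pixel-space auto-encoder objective.
    return nn.functional.mse_loss(x_hat, x)

def perceptual_loss(x, x_hat):
    # Compare input and reconstruction in the frozen network's
    # feature space instead of pixel space.
    return nn.functional.mse_loss(feat_net(x_hat), feat_net(x))
```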

Leveraging Random Label Memorization for Unsupervised Pre-Training

A novel approach to leverage large unlabeled datasets by pre-training state-of-the-art deep neural networks on randomly-labeled datasets and using these pre-trained networks as a starting point for regular supervised learning.
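The recipe can be sketched briefly: fix one random label per unlabeled image, pre-train with ordinary cross-entropy on those labels, then fine-tune on real labels. The helper below is a hypothetical illustration; `num_classes` and `seed` are arbitrary choices.

```python
import torch
from torch.utils.data import TensorDataset

def randomly_labeled_dataset(images, num_classes=10, seed=0):
    """Assign each unlabeled image a fixed random class. A network is
    pre-trained to memorize these labels with standard cross-entropy,
    and its weights then initialize supervised fine-tuning."""
    g = torch.Generator().manual_seed(seed)
    labels = torch.randint(0, num_classes, (len(images),), generator=g)
    return TensorDataset(images, labels)
```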

Generating Synthetic Handwritten Historical Documents With OCR Constrained GANs

A framework to generate synthetic historical documents with precise ground truth using nothing more than a collection of unlabeled historical images is presented, and a high-quality synthesis is demonstrated that makes it possible to generate large labeled historical document datasets with precise ground truth.

Pre-Training by Completing Point Clouds

It is demonstrated that OcCo learns representations that improve semantic understanding as well as generalization on downstream tasks over prior methods, transfer to different datasets, reduce training time, and improve label efficiency.

The Utility of Unsupervised Machine Learning in Anatomic Pathology.

Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the process of developing supervised learning models.

LSSL: Longitudinal Self-Supervised Learning

This paper applies its model, named Longitudinal Self-Supervised Learning (LSSL), to two longitudinal neuroimaging studies to show its unique advantage in extracting the 'brain-age' information from the data and in revealing informative characteristics associated with neurodegenerative and neuropsychological disorders.

References


Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
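The denoising criterion itself is compact: corrupt the input, then reconstruct the clean version. A sketch assuming additive Gaussian noise (the paper also studies other corruptions, such as masking noise); `encoder` and `decoder` stand in for arbitrary modules.

```python
import torch
import torch.nn as nn

def denoising_step(encoder, decoder, x, opt, noise_std=0.3):
    """One step of the denoising criterion: reconstruct the clean x
    from a corrupted copy. noise_std is an illustrative level."""
    x_noisy = x + noise_std * torch.randn_like(x)
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_noisy)), x)
    loss.backward()
    opt.step()
    return loss.item()
```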

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected by a network.

PCA-Initialized Deep Neural Networks Applied to Document Image Analysis

This paper describes how to turn a PCA into an auto-encoder by generating an encoder layer from the PCA parameters and adding a corresponding decoding layer, and investigates the effectiveness of PCA-based initialization for the task of layout analysis.
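The core construction is linear algebra: the principal components become the encoder weights and, because the components are orthonormal, their transpose serves as the decoder. A numpy/scikit-learn sketch of that linear core, which ignores the activation-function handling the paper also addresses:

```python
from sklearn.decomposition import PCA

def pca_autoencoder_weights(X, n_components):
    """Turn a fitted PCA into a linear encoder/decoder pair."""
    pca = PCA(n_components=n_components).fit(X)
    W_enc = pca.components_        # (n_components, n_features)
    b_enc = -W_enc @ pca.mean_     # encoding first centers the data
    W_dec = W_enc.T                # orthonormal rows: transpose decodes
    b_dec = pca.mean_
    return W_enc, b_enc, W_dec, b_dec

# encode: h = W_enc @ x + b_enc    decode: x_hat = W_dec @ h + b_dec
```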

Stacked convolutional auto-encoders for steganalysis of digital images

  • Shunquan Tan, Bin Li
  • Computer Science
  • Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific
  • 2014
The experimental results show that initializing a CNN with a mixture of the filters from a trained stack of convolutional auto-encoders and feature pooling layers, although it still cannot compete with SRM, yields superior performance compared to a traditional CNN for the detection of HUGO-generated stego images in the BOSSBase image database.

Selecting Autoencoder Features for Layout Analysis of Historical Documents

This paper finds that a significant number of autoencoder features are redundant or irrelevant for classification; its method cascades adapted versions of two conventional methods to increase the classification accuracy and reduce the feature dimension.

Greedy Layer-Wise Training of Deep Networks

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
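Schematically, the strategy trains one layer at a time, each learning to reconstruct the output of the already-trained stack below it. The sketch below uses plain auto-encoder layers in PyTorch; the original work uses RBMs and related variants with their own training details, so treat this as an assumption-laden illustration.

```python
import torch
import torch.nn as nn

def greedy_layerwise_pretrain(layers, data_loader, epochs=1):
    """Pre-train a list of nn.Linear layers bottom-up: each layer (with
    a throwaway decoder) reconstructs the frozen stack's output."""
    trained = []
    for layer in layers:
        frozen = nn.Sequential(*trained)  # empty Sequential is identity
        dec = nn.Linear(layer.out_features, layer.in_features)
        opt = torch.optim.Adam(list(layer.parameters()) + list(dec.parameters()))
        for _ in range(epochs):
            for x, _ in data_loader:
                with torch.no_grad():
                    h = frozen(x.flatten(1))  # output of layers below
                opt.zero_grad()
                loss = nn.functional.mse_loss(dec(torch.relu(layer(h))), h)
                loss.backward()
                opt.step()
        trained += [layer, nn.ReLU()]  # freeze into the growing stack
    return nn.Sequential(*trained)
```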

Historical Document Image Segmentation with LDA-Initialized Deep Neural Networks

This paper describes how to turn an LDA into either a neural layer or a classification layer and investigates the effectiveness of LDA-based initialization for the task of layout analysis at pixel level and shows that it outperforms state-of-the-art random weight initialization methods.
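A sketch of the core idea with scikit-learn, assuming the 'svd' solver so that the fitted model exposes `scalings_` and `xbar_`; the paper's actual construction differs in detail.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_layer_weights(X, y):
    """Use LDA's discriminant directions as a neural layer's weights,
    so the layer starts out class-separating rather than random."""
    lda = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
    W = lda.scalings_.T    # (n_components, n_features) discriminant axes
    b = -W @ lda.xbar_     # centering term, matching lda.transform(X)
    return W, b            # forward pass: h = W @ x + b
```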

A Fast Learning Algorithm for Deep Belief Nets

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

3D object retrieval with stacked local convolutional autoencoder

Deep Learning

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.