Corpus ID: 7413884

What You Expect is NOT What You Get! Questioning Reconstruction/Classification Correlation of Stacked Convolutional Auto-Encoder Features

Michele Alberti, Mathias Seuret, Rolf Ingold, Marcus Liwicki
In this paper, we thoroughly investigate the quality of features produced by deep neural network architectures obtained by stacking and convolving auto-encoders. Experimental results suggest that there is no correlation between the reconstruction score and the quality of features for a classification task, and that, given a network's size and configuration, it is not possible to make assumptions about the magnitude of its training error.
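To make the claimed decoupling concrete, the evaluation protocol can be sketched on synthetic data. This is an illustrative toy setup, not the paper's experiment: the data, the nearest-centroid classifier, and the two hand-picked encoders are all assumptions chosen so that the encoder with the better reconstruction yields the worse features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class synthetic data: classes are separated along one axis, while most
# of the variance lies along another, so the top PCA direction ignores labels.
n = 200
X0 = rng.normal([0, 0], [5.0, 0.5], size=(n, 2))
X1 = rng.normal([0, 3], [5.0, 0.5], size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def evaluate(W):
    """Encode with W (d x k), decode with the pseudo-inverse, and report
    reconstruction MSE plus nearest-centroid accuracy on the codes."""
    Z = X @ W
    X_hat = Z @ np.linalg.pinv(W)
    mse = np.mean((X - X_hat) ** 2)
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
    acc = np.mean(pred == y)
    return mse, acc

# Encoder A: top PCA direction (best possible 1-D reconstruction).
_, _, Vt = np.linalg.svd(X - X.mean(0))
mse_pca, acc_pca = evaluate(Vt[[0]].T)

# Encoder B: the class-separating axis (much worse reconstruction).
mse_axis, acc_axis = evaluate(np.array([[0.0], [1.0]]))

# Better reconstruction does not imply better classification features.
assert mse_pca < mse_axis and acc_axis > acc_pca
```

On this construction the PCA encoder reconstructs far better yet classifies at chance level, mirroring the paper's point that a low reconstruction score is no guarantee of classification-friendly features.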


Historical Document Image Segmentation with LDA-Initialized Deep Neural Networks

This paper describes how to turn an LDA into either a neural layer or a classification layer and investigates the effectiveness of LDA-based initialization for the task of layout analysis at pixel level and shows that it outperforms state-of-the-art random weight initialization methods.

Application of a Hybrid Model Based on a Convolutional Auto-Encoder and Convolutional Neural Network in Object-Oriented Remote Sensing Classification

Experimental results show that the proposed model raises classification accuracy from 0.916 to 0.944 compared with a traditional convolutional neural network; furthermore, it reduces the number of training runs, and the number of labelled samples can be cut by more than half while still ensuring a classification accuracy of no less than 0.8.

Open Evaluation Tool for Layout Analysis of Document Images

A new evaluation tool that is both available as a standalone Java application and as a RESTful web service is introduced that evaluates document segmentation at pixel level, and supports multi-labeled pixel ground truth.



Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
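The denoising criterion can be sketched in a few lines of numpy: corrupt the input, encode the corrupted version, but compute the loss against the clean input. This is a minimal gradient-descent sketch with a single tied-weight linear code unit and masking noise; the data, learning rate, and corruption level are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 3-D points near a 1-D subspace, so a single code unit suffices.
d = np.array([1.0, 2.0, -1.0])
d /= np.linalg.norm(d)
X = rng.normal(size=(500, 1)) @ d[None, :]
X += 0.05 * rng.normal(size=X.shape)

W = 0.01 * rng.normal(size=(3, 1))   # tied encoder/decoder weights
lr, p_drop = 0.05, 0.3

for _ in range(500):
    mask = rng.random(X.shape) > p_drop     # masking corruption
    X_tilde = X * mask
    Z = X_tilde @ W                         # encode the corrupted input...
    err = Z @ W.T - X                       # ...but reconstruct the CLEAN input
    grad = X_tilde.T @ (err @ W) + err.T @ Z    # gradient for tied weights
    W -= lr * grad / len(X)

# The denoised reconstruction of clean inputs beats the trivial baseline.
mse = np.mean((X @ W @ W.T - X) ** 2)
assert mse < 0.5 * np.mean(X ** 2)
```

Because the target is always the uncorrupted input, the learned weights must capture the structure shared across features rather than copy each input dimension, which is the tractable unsupervised objective the paper advocates.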

Stacked convolutional auto-encoders for steganalysis of digital images

  Shunquan Tan, Bin Li. Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, 2014.
The experimental results show that initializing a CNN with a mixture of filters from a trained stack of convolutional auto-encoders and feature-pooling layers yields superior performance over a traditional CNN for detecting HUGO-generated stego images in the BOSSBase image database, although it still cannot compete with SRM.

PCA-Initialized Deep Neural Networks Applied to Document Image Analysis

This paper describes how to turn a PCA into an auto-encoder by generating an encoder layer from the PCA parameters and adding a decoding layer, and investigates the effectiveness of PCA-based initialization for the task of layout analysis.
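The encoder/decoder construction described above can be sketched for the linear, identity-activation case: the encoder layer gets the principal directions as weights (with a bias absorbing the mean), and the decoder layer gets their transpose. In the paper these layers initialize a network with nonlinearities; the sketch below only verifies that the two layers reproduce the rank-k PCA reconstruction exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3
V = Vt[:k]                        # top-k principal directions

# Encoder layer: weights V^T, bias -V mu, so z = V (x - mu).
W_enc, b_enc = V.T, -V @ mu
# Decoder layer: weights V, bias mu, so x_hat = V^T z + mu.
W_dec, b_dec = V, mu

Z = X @ W_enc + b_enc
X_hat = Z @ W_dec + b_dec

# The two linear layers reproduce the rank-k PCA reconstruction exactly.
ref = (X - mu) @ V.T @ V + mu
assert np.allclose(X_hat, ref)
```

With an activation function inserted after the encoder layer, the same weights serve as a deterministic, data-driven initialization instead of random weights.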

3D object retrieval with stacked local convolutional autoencoder

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Understanding the difficulty of training deep feedforward neural networks

The objective here is to understand why standard gradient descent from random initialization performs so poorly with deep neural networks, in order to better explain recent relative successes and help design better algorithms in the future.

A Fast Learning Algorithm for Deep Belief Nets

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Greedy Layer-Wise Training of Deep Networks

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
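The greedy layer-wise protocol itself is simple to sketch: fit one layer on the current representation with an unsupervised criterion, freeze it, and fit the next layer on its outputs. In the sketch below, a PCA fit stands in for the per-layer unsupervised learner (the cited works use RBMs or auto-encoders); the data, layer widths, and tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 16))

def pca_layer(H, k):
    """Fit one 'layer' on the current representation H: project onto the
    top-k principal directions, then apply a tanh nonlinearity."""
    mu = H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
    W, b = Vt[:k].T, -Vt[:k] @ mu
    return (W, b), np.tanh(H @ W + b)

# Greedy layer-wise stacking: each layer is fit on the output of the
# previous one, and no layer is revisited once trained.
stack, H = [], X
for k in (8, 4):
    params, H = pca_layer(H, k)
    stack.append(params)

assert H.shape == (400, 4)
```

The stacked weights would then initialize a deep network that is fine-tuned end to end, which is where the cited experiments observe the optimization benefit.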

Page segmentation of historical document images with convolutional autoencoders

This paper considers page segmentation as a pixel labeling problem, i.e., each pixel is classified as either periphery, background, text block, or decoration, and applies convolutional autoencoders to learn features directly from pixel intensity values.
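The pixel-labeling setup described above reduces to computing, for every pixel, a feature vector from the patch of intensities around it. The sketch below shows that mechanics with a single filter; in the actual approach the filter bank comes from a trained convolutional autoencoder, whereas here a random filter and the image, patch size, and tanh squashing are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((32, 32))        # stand-in grayscale document image
patch = 5
half = patch // 2

# A learned encoder filter would come from a trained convolutional
# auto-encoder; a random filter stands in for it here.
filt = rng.normal(size=(patch, patch))

padded = np.pad(img, half, mode="reflect")
feat = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        window = padded[i:i + patch, j:j + patch]
        feat[i, j] = np.tanh((window * filt).sum())   # one feature per pixel

# Each pixel now carries a feature a classifier could map to one of the
# labels: periphery, background, text block, or decoration.
assert feat.shape == img.shape
```

With a bank of such filters, each pixel gets a feature vector, and a per-pixel classifier over those vectors yields the segmentation.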

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.