# Extracting and composing robust features with denoising autoencoders

    @inproceedings{Vincent2008ExtractingAC,
      title     = {Extracting and composing robust features with denoising autoencoders},
      author    = {Pascal Vincent and H. Larochelle and Yoshua Bengio and Pierre-Antoine Manzagol},
      booktitle = {ICML '08},
      year      = {2008}
    }

Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. [...] This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative…
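
The denoising criterion sketched in the abstract — corrupt the input, then train the network to reconstruct the *clean* version — can be illustrated in a few lines of NumPy. This is a minimal single-layer sketch on synthetic data (masking corruption, tied weights, cross-entropy loss), not the authors' implementation; the data, layer sizes, and hyperparameters are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic binary data with latent structure: 4 prototype patterns, 5% bit flips.
prototypes = (rng.random((4, 20)) > 0.5).astype(float)
flips = (rng.random((200, 20)) < 0.05).astype(float)
X = np.abs(prototypes[rng.integers(0, 4, size=200)] - flips)

n_visible, n_hidden = 20, 10
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # tied encoder/decoder weights
b_h = np.zeros(n_hidden)   # hidden bias
b_v = np.zeros(n_visible)  # reconstruction bias
lr, corruption = 0.5, 0.3  # learning rate, fraction of input components zeroed

losses = []
for epoch in range(300):
    # 1. Corrupt the input: zero out a random subset of components.
    X_tilde = X * (rng.random(X.shape) > corruption)
    # 2. Encode the corrupted input; decode with the transposed (tied) weights.
    H = sigmoid(X_tilde @ W + b_h)
    Z = sigmoid(H @ W.T + b_v)
    # 3. The loss compares the reconstruction to the clean input X, not X_tilde.
    losses.append(float(np.mean(-X * np.log(Z + 1e-9)
                                - (1 - X) * np.log(1 - Z + 1e-9))))
    # Backprop for sigmoid outputs with cross-entropy loss.
    dZ = (Z - X) / len(X)              # gradient w.r.t. decoder pre-activation
    dH = (dZ @ W) * H * (1 - H)        # gradient w.r.t. encoder pre-activation
    W -= lr * (X_tilde.T @ dH + dZ.T @ H)  # encoder + decoder contributions (tied W)
    b_h -= lr * dH.sum(axis=0)
    b_v -= lr * dZ.sum(axis=0)

print(f"denoising loss: {losses[0]:.3f} -> {losses[-1]:.3f}")  # should decrease
```

Stacking, as the abstract describes, would repeat this procedure greedily: the clean hidden activations `sigmoid(X @ W + b_h)` become the input that the next layer learns to denoise.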

#### 4,818 Citations

A New Training Principle for Stacked Denoising Autoencoders

- Computer Science
- 2013 Seventh International Conference on Image and Graphics
- 2013

A new training principle is introduced for unsupervised learning that makes the learned representations more efficient and useful, and that can obtain more robust and representative patterns of the inputs than traditional learning methods.

Improving Deep Learning Accuracy with Noisy Autoencoders Embedded Perturbative Layers

- Computer Science
- ICIC
- 2016

A new training principle is presented, based on the denoising autoencoder and the dropout training method, that significantly improves learning accuracy in classification experiments on benchmark data sets.

Scheduled denoising autoencoders

- Computer Science, Mathematics
- ICLR
- 2015

A representation learning method that learns features at multiple levels of scale, yielding a significant boost on a later supervised task compared to the original input or to a standard denoising autoencoder trained at a single noise level.

Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2010

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.

Representation Learning with Smooth Autoencoder

- Computer Science
- ACCV
- 2014

A novel autoencoder variant, the smooth autoencoder (SmAE), is proposed to learn robust and discriminative feature representations that are consistent among local neighbors and robust to small variations of the inputs.

Composite Denoising Autoencoders

- Computer Science
- ECML/PKDD
- 2016

This work introduces a novel cascaded training procedure which is designed to avoid types of bad solutions that are specific to CDAs, and shows that CDAs learn effective representations on two different image data sets.

A Stacked Denoising Autoencoder Based on Supervised Pre-training

- Computer Science
- 2019

The pre-training phase of the stacked denoising autoencoder is changed from unsupervised to supervised learning, which can improve accuracy on small-sample prediction problems.

Discriminative Representation Learning with Supervised Auto-encoder

- Computer Science
- Neural Processing Letters
- 2018

A supervised auto-encoder is introduced that combines the reconstruction error and the classification error into a unified objective function, taking the noisy concatenated data and labels as input, and demonstrates that the model outperforms many existing learning algorithms.

Semi Supervised Autoencoders: Better Focusing Model Capacity during Feature Extraction

- Computer Science
- ICONIP
- 2013

This paper addresses a limitation of using unsupervised models such as regularized autoencoders to learn features intended for a subsequent supervised task: their blindness to that specific task.

Denoising auto-encoders toward robust unsupervised feature representation

- Computer Science
- 2016 International Joint Conference on Neural Networks (IJCNN)
- 2016

A robust deep neural network, named stacked convolutional denoising auto-encoders (SCDAE), which can map raw images to hierarchical representations in an unsupervised manner and demonstrates superior performance to state-of-the-art unsupervised networks.

#### References

Showing 1–10 of 38 references

Sparse Feature Learning for Deep Belief Networks

- Computer Science
- NIPS
- 2007

This work proposes a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation, and describes a novel and efficient algorithm to learn sparse representations.

Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries

- Mathematics, Computer Science
- IEEE Transactions on Image Processing
- 2006

This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.

Efficient Learning of Sparse Representations with an Energy-Based Model

- Computer Science
- NIPS
- 2006

A novel unsupervised method for learning sparse, overcomplete features using a linear encoder and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector.

Training with Noise is Equivalent to Tikhonov Regularization

- Mathematics, Computer Science
- Neural Computation
- 1995

This paper shows that for the purposes of network training, the regularization term can be reduced to a positive semi-definite form that involves only first derivatives of the network mapping.
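
The equivalence this entry refers to can be sketched as follows; the notation here is mine, the expansion is to leading order in the noise variance, and a second-derivative term (shown in the paper to be reducible near a minimum) is omitted.

```latex
% Squared-error loss when each input x is perturbed by noise \epsilon,
% with \mathbb{E}[\epsilon] = 0 and \mathbb{E}[\epsilon\epsilon^\top] = \sigma^2 I:
\tilde{E} \;=\; \mathbb{E}_{x,t,\epsilon}\,\big\| y(x+\epsilon) - t \big\|^2 .
% Linearizing y(x+\epsilon) \approx y(x) + J(x)\,\epsilon, where J is the
% Jacobian of the network mapping, and averaging over \epsilon gives
\tilde{E} \;\approx\; \mathbb{E}_{x,t}\,\big\| y(x) - t \big\|^2
          \;+\; \sigma^2\,\mathbb{E}_{x}\,\big\| J(x) \big\|_F^2 ,
% i.e. the ordinary loss plus a Tikhonov-style penalty on first derivatives.
```

The second term penalizes the sensitivity of the network output to input perturbations, which is how input noise acts as a regularizer.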

Reducing the Dimensionality of Data with Neural Networks

- Computer Science, Medicine
- Science
- 2006

This work describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

An empirical evaluation of deep architectures on problems with many factors of variation

- Computer Science
- ICML '07
- 2007

A series of experiments indicate that these models with deep architectures show promise in solving harder learning problems that exhibit many factors of variation.

Fields of Experts: a framework for learning image priors

- Computer Science
- 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)
- 2005

A framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks, developed using a Products-of-Experts framework.

A Fast Learning Algorithm for Deep Belief Nets

- Computer Science, Medicine
- Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

A Machine Learning Framework for Adaptive Combination of Signal Denoising Methods

- Computer Science
- 2007 IEEE International Conference on Image Processing
- 2007

A general framework is presented for combining two distinct local denoising methods, controlled by a spatially varying decision function, yielding a "hybrid" denoising algorithm whose performance surpasses that of either initial method.

Greedy Layer-Wise Training of Deep Networks

- Computer Science
- 2007

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.