Extracting and composing robust features with denoising autoencoders

@inproceedings{Vincent2008ExtractingAC,
  title={Extracting and composing robust features with denoising autoencoders},
  author={Pascal Vincent and H. Larochelle and Yoshua Bengio and Pierre-Antoine Manzagol},
  booktitle={ICML '08},
  year={2008}
}
Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. [...] This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative…
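As a rough illustration of the training principle summarized above, the following is a minimal NumPy sketch of a single denoising autoencoder layer: the input is corrupted with masking noise, encoded and decoded through tied sigmoid layers, and trained to reconstruct the clean input under a cross-entropy loss. The layer sizes, corruption level, learning rate, and all class and function names are illustrative assumptions, not values or code taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One denoising autoencoder layer with tied encoder/decoder weights (a sketch)."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_hidden = np.zeros(n_hidden)
        self.b_visible = np.zeros(n_visible)

    def corrupt(self, x, corruption_level=0.3):
        # Masking noise: zero out a random fraction of the input components.
        mask = rng.random(x.shape) > corruption_level
        return x * mask

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_hidden)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_visible)

    def train_step(self, x, lr=0.1, corruption_level=0.3):
        # Forward pass runs on the corrupted input ...
        x_tilde = self.corrupt(x, corruption_level)
        h = self.encode(x_tilde)
        z = self.decode(h)
        # ... but the reconstruction target is the clean input x (cross-entropy loss).
        eps = 1e-7
        loss = -np.mean(np.sum(x * np.log(z + eps)
                               + (1.0 - x) * np.log(1.0 - z + eps), axis=1))
        # Gradients for the sigmoid/cross-entropy, tied-weight case.
        dz = (z - x) / x.shape[0]           # grad w.r.t. decoder pre-activation
        da = (dz @ self.W) * h * (1.0 - h)  # grad w.r.t. encoder pre-activation
        grad_W = x_tilde.T @ da + dz.T @ h  # tied weights: encoder + decoder terms
        self.W -= lr * grad_W
        self.b_hidden -= lr * da.sum(axis=0)
        self.b_visible -= lr * dz.sum(axis=0)
        return loss

# Example run on random binary data (dimensions are hypothetical).
X = (rng.random((256, 784)) > 0.5).astype(float)
dae = DenoisingAutoencoder(n_visible=784, n_hidden=256)
for batch in np.array_split(X, 8):
    dae.train_step(batch)

To stack layers as the abstract describes, one would train a second DenoisingAutoencoder on the hidden codes dae.encode(X) produced by the first, repeat for further layers, and use the resulting weights to initialize a deep network before supervised fine-tuning.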
4,496 Citations
  • A New Training Principle for Stacked Denoising Autoencoders
  • Scheduled denoising autoencoders
  • Representation Learning with Smooth Autoencoder
  • Composite Denoising Autoencoders
  • Discriminative Representation Learning with Supervised Auto-encoder
  • Denoising auto-encoders toward robust unsupervised feature representation
...
