Why are deep nets reversible: A simple theory, with implications for training
@article{Arora2015WhyAD,
  title   = {Why are deep nets reversible: A simple theory, with implications for training},
  author  = {Sanjeev Arora and Yingyu Liang and Tengyu Ma},
  journal = {ArXiv},
  year    = {2015},
  volume  = {abs/1511.05653}
}
Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods that require fewer labeled samples.
Recent works use generative-model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input.
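The reversibility idea can be illustrated numerically. The sketch below is a simplified toy version (not the paper's exact model or constants): a single-layer generative model produces the input from a sparse nonnegative hidden code via a random Gaussian weight matrix, and a single feedforward pass through the same weights approximately recovers the hidden code.

```python
import numpy as np

# Toy sketch of reversibility: generate x = ReLU(W h) for a random
# Gaussian W and a sparse nonnegative hidden code h, then recover h
# with one feedforward step, ReLU((2/n) W^T x). The scaling 2/n and
# the dimensions here are illustrative assumptions.
rng = np.random.default_rng(0)
n, k, s = 5000, 50, 5                      # visible dim, hidden dim, sparsity
W = rng.standard_normal((n, k))

h = np.zeros(k)
h[rng.choice(k, s, replace=False)] = 1.0   # sparse nonnegative hidden code

relu = lambda z: np.maximum(z, 0.0)
x = relu(W @ h)                            # "generative" pass: hidden -> input
h_hat = relu((2.0 / n) * (W.T @ x))        # feedforward pass as inference

rel_err = np.linalg.norm(h_hat - h) / np.linalg.norm(h)
print(f"relative recovery error: {rel_err:.3f}")  # small for large n
```

For random Gaussian weights, the concentration of `W.T @ x` around `(n/2) h` is what makes the feedforward pass a good inverse of the generative pass; the recovery error shrinks as the visible dimension grows relative to the hidden dimension.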
47 Citations
Reversible Architectures for Arbitrarily Deep Residual Neural Networks
- AAAI, 2018

Autoencoders Learn Generative Linear Models
- ArXiv, 2018

Invertibility of Convolutional Generative Networks from Partial Measurements
- NeurIPS, 2018

On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training
- ICLR, 2019

A Theoretical Framework for Target Propagation
- NeurIPS, 2020

On the interplay of network structure and gradient convergence in deep learning
- 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016