Corpus ID: 3039898

Why are deep nets reversible: A simple theory, with implications for training

@article{Arora2015WhyAD,
  title={Why are deep nets reversible: A simple theory, with implications for training},
  author={Sanjeev Arora and Yingyu Liang and Tengyu Ma},
  journal={ArXiv},
  year={2015},
  volume={abs/1511.05653}
}
Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods that require fewer labeled samples. Recent works use generative-model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input…
46 Citations

  • Reversible Architectures for Arbitrarily Deep Residual Neural Networks
  • Autoencoders Learn Generative Linear Models
  • On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training
  • On the Dynamics of Gradient Descent for Autoencoders
  • A Theoretical Framework for Target Propagation
  • Rate-Optimal Denoising with Deep Neural Networks
  • On the interplay of network structure and gradient convergence in deep learning (Vamsi K. Ithapu, S. Ravi, V. Singh; 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016)
