
On the Learnability of Deep Random Networks

@article{Das2019OnTL,
  title={On the Learnability of Deep Random Networks},
  author={Abhimanyu Das and Sreenivas Gollapudi and Ravi Kumar and Rina Panigrahy},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.03866}
}
In this paper we study the learnability of deep random networks from both theoretical and practical points of view. On the theoretical front, we show that the learnability of random deep networks with sign activation drops exponentially with depth. On the practical front, we find that learnability drops sharply with depth even under state-of-the-art training methods, suggesting that our stylized theoretical results are close to what is observed in practice.
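The abstract's practical finding implies a teacher-student experiment: label data with a random deep sign network, train a student on those labels, and track accuracy as the teacher deepens. Below is a minimal sketch of such an experiment. The Gaussian teacher weights, the two-layer ReLU student trained with Adam, and all dimensions and depths are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
import torch
import torch.nn as nn


def random_sign_teacher(depth, dim, rng):
    """Random deep network with sign activations and Gaussian weights
    (an assumed form; the abstract does not pin down the architecture)."""
    weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(depth)]
    readout = rng.standard_normal(dim) / np.sqrt(dim)

    def label(X):
        h = X
        for W in weights:
            h = np.sign(h @ W.T)      # sign activation at each layer
        return np.sign(h @ readout)   # +/-1 labels

    return label


def student_accuracy(Xtr, ytr, Xte, yte, hidden=256, epochs=300, lr=1e-3):
    """Fit a two-layer ReLU student with Adam and return its test accuracy.
    This student/optimizer pair is an illustrative stand-in for the
    'state-of-the-art training methods' the abstract mentions."""
    t = lambda a: torch.tensor(a, dtype=torch.float32)
    Xtr_t, Xte_t, ytr01 = t(Xtr), t(Xte), t((ytr + 1) / 2)  # {-1,+1} -> {0,1}
    model = nn.Sequential(nn.Linear(Xtr.shape[1], hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(Xtr_t).squeeze(1), ytr01).backward()
        opt.step()
    with torch.no_grad():
        preds = torch.sign(model(Xte_t).squeeze(1)).numpy()
    return float((preds == yte).mean())


rng = np.random.default_rng(0)
dim, n_train, n_test = 32, 4096, 1024
Xtr = rng.standard_normal((n_train, dim))
Xte = rng.standard_normal((n_test, dim))
for depth in (1, 2, 4, 8):
    teacher = random_sign_teacher(depth, dim, rng)
    acc = student_accuracy(Xtr, teacher(Xtr), Xte, teacher(Xte))
    print(f"teacher depth {depth}: student test accuracy {acc:.3f}")
```

If the abstract's claim holds, the printed test accuracy should decay toward chance (0.5) as the teacher depth grows.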
6 Citations

A Deep Conditioning Treatment of Neural Networks
Learning Boolean Circuits with Neural Networks
Hardness of Learning Neural Networks with Natural Weights
From Local Pseudorandom Generators to Hardness of Learning
High-Fidelity Extraction of Neural Network Models
High Accuracy and High Fidelity Extraction of Neural Networks
