Corpus ID: 15688894

L1-regularized Neural Networks are Improperly Learnable in Polynomial Time

@inproceedings{Zhang2016L1regularizedNN,
  title={L1-regularized Neural Networks are Improperly Learnable in Polynomial Time},
  author={Yuchen Zhang and J. Lee and Michael I. Jordan},
  booktitle={ICML},
  year={2016}
}
We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has k hidden layers and that the ℓ1-norm of the incoming weights of any neuron is bounded by L. We present a kernel-based method, such that with probability at least 1 - δ, it learns a predictor whose generalization error is at most ε worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in …
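The following is a minimal, illustrative sketch (Python with NumPy) of the general recipe the abstract describes: rather than fitting the bounded-ℓ1 network directly, one fits a kernel predictor drawn from a richer hypothesis class that contains a good approximation of every such network. The specific kernel k(x, x') = 1 / (2 - <x, x'>) and the use of kernel ridge regression here are assumptions made for illustration; they are not claimed to be the paper's exact construction.

import numpy as np

def power_series_kernel(X1, X2):
    # k(x, x') = 1 / (2 - <x, x'>), positive semidefinite when ||x||, ||x'|| <= 1,
    # since 1 / (2 - t) = (1/2) * sum_n (t/2)^n is a power series with nonnegative
    # coefficients. Used here as an illustrative stand-in for a network-approximating kernel.
    return 1.0 / (2.0 - X1 @ X2.T)

def fit_kernel_ridge(X, y, lam=1e-2):
    # Kernel ridge regression: alpha = (K + lam * n * I)^{-1} y.
    n = X.shape[0]
    K = power_series_kernel(X, X)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_test):
    return power_series_kernel(X_test, X_train) @ alpha

# Usage: inputs are rescaled into the unit ball, as the kernel requires.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = np.tanh(X @ rng.normal(size=10))  # stand-in target from a one-layer network
alpha = fit_kernel_ridge(X, y)
y_hat = predict(X, alpha, X)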
Citations

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
On the Learnability of Fully-Connected Neural Networks
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
SGD Learns the Conjugate Kernel Class of the Network
Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
On the Convergence Rate of Training Recurrent Neural Networks
Learning Two-layer Neural Networks with Symmetric Inputs

References

SHOWING 1-10 OF 32 REFERENCES
Learning Polynomials with Neural Networks
Training a 3-node neural network is NP-complete
On the Computational Efficiency of Training Neural Networks
Learning Kernel-Based Halfspaces with the 0-1 Loss
Convex Neural Networks
Universal approximation bounds for superpositions of a sigmoidal function (A. Barron, IEEE Trans. Inf. Theory, 1993)
ImageNet classification with deep convolutional neural networks