Statistical Guarantees for Regularized Neural Networks

@article{Taheri2021StatisticalGF,
  title={Statistical Guarantees for Regularized Neural Networks},
  author={Mahsa Taheri and Fang Xie and Johannes Lederer},
  journal={Neural Networks},
  year={2021},
  volume={142},
  pages={148--161}
}
Neural networks have become standard tools in the analysis of data, but they lack comprehensive mathematical theories. For example, there are very few statistical guarantees for learning neural networks from data, especially for classes of estimators that are used in practice or are at least similar to those. In this paper, we develop a general statistical guarantee for estimators that consist of a least-squares term and a regularizer. We then exemplify this guarantee with ℓ1-regularization, showing…
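To make the abstract's estimator class concrete: as a minimal sketch (the notation here is ours, not quoted from the paper), a regularized least-squares estimator over the network parameters \theta takes the form

    \hat{\theta} \in \operatorname*{arg\,min}_{\theta} \Big\{ \sum_{i=1}^{n} \big(y_i - f_\theta(x_i)\big)^2 + \lambda\, r(\theta) \Big\},

where f_\theta is the network, r is the regularizer (r(\theta) = \|\theta\|_1 in the ℓ1 case the abstract highlights), and \lambda \ge 0 is a tuning parameter. The PyTorch snippet below is a hypothetical illustration of this objective, not the authors' code; the toy data, the architecture, and the value of lam are assumptions made for the example.

    # Hypothetical sketch: least-squares loss plus an l1 penalty on all
    # network weights, matching the objective displayed above.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(200, 10)  # toy inputs (assumed data)
    y = torch.randn(200, 1)   # toy responses (assumed data)

    # A small feed-forward network f_theta (architecture is an assumption).
    net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

    lam = 1e-3  # tuning parameter lambda (illustrative choice)
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for step in range(500):
        opt.zero_grad()
        lsq = ((net(X) - y) ** 2).sum()                    # least-squares term
        l1 = sum(p.abs().sum() for p in net.parameters())  # l1 regularizer
        (lsq + lam * l1).backward()
        opt.step()

Note that the paper's guarantee concerns the minimizer of such an objective; gradient-based training as above is only one way to approximate that minimizer.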
Citations

Risk Bounds for Robust Deep Learning
Hierarchical Adaptive Lasso: Learning Sparse Neural Networks with Shrinkage via Single Stage Training
Analytic function approximation by path norm regularized deep networks
Deep neural network approximation of analytic functions
Neural networks with superexpressive activations and integer weights
Regularization and Reparameterization Avoid Vanishing Gradients in Sigmoid-Type Networks
HALO: Learning to Prune Neural Networks with Shrinkage
Layer Sparsity in Neural Networks
