Regularisation of Neural Networks by Enforcing Lipschitz Continuity
@article{Gouk2018RegularisationON,
  title   = {Regularisation of Neural Networks by Enforcing Lipschitz Continuity},
  author  = {Henry Gouk and Eibe Frank and B. Pfahringer and M. Cree},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1804.04368}
}
We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs. To this end, we provide a simple technique for computing an upper bound on the Lipschitz constant of a feed-forward neural network composed of commonly used layer types, and demonstrate inaccuracies in previous work on this topic. Our technique is then used to formulate training a neural network with a bounded Lipschitz constant as a constrained optimisation problem that…
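The upper bound described in the abstract can be illustrated with a minimal sketch. It assumes the standard composition argument (not necessarily the paper's exact formulation): for a feed-forward network of linear layers interleaved with 1-Lipschitz activations such as ReLU, the Lipschitz constant is bounded by the product of the per-layer operator norms. The function name and the use of the spectral norm (`ord=2`) here are illustrative choices:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper-bound the Lipschitz constant of f(x) = W_n phi(... phi(W_1 x)),
    where phi is a 1-Lipschitz activation (e.g. ReLU).

    The bound is the product of each layer's operator norm; with ord=2
    this is the largest singular value of each weight matrix.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # spectral norm of the layer
    return bound

# Illustrative two-layer network with random weights
rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))
print(lipschitz_upper_bound([W1, W2]))
```

This product bound is generally loose but cheap to compute, which is what makes it usable as a training-time constraint.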