Regularisation of Neural Networks by Enforcing Lipschitz Continuity

@article{Gouk2018RegularisationON,
  title={Regularisation of Neural Networks by Enforcing Lipschitz Continuity},
  author={Henry Gouk and Eibe Frank and B. Pfahringer and M. Cree},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.04368}
}
  • Henry Gouk, Eibe Frank, B. Pfahringer, M. Cree
  • Published 2018
  • Computer Science, Mathematics
  • ArXiv
  • We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs. To this end, we provide a simple technique for computing an upper bound to the Lipschitz constant of a feed-forward neural network composed of commonly used layer types and demonstrate inaccuracies in previous work on this topic. Our technique is then used to formulate training a neural network with a bounded Lipschitz constant as a constrained optimisation problem that…
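
The abstract describes bounding a feed-forward network's Lipschitz constant by combining per-layer bounds and then constraining that bound during training. The snippet below is a minimal illustrative sketch of that general idea, not the paper's exact algorithm: it assumes a fully connected ReLU network, bounds the l2 Lipschitz constant by the product of per-layer spectral norms, and enforces a budget by rescaling any weight matrix that exceeds it. All function names and the `per_layer_limit` parameter are hypothetical.

```python
# Sketch only: for f(x) = W_L(...relu(W_1 x + b_1)...) with 1-Lipschitz
# activations such as ReLU, the Lipschitz constant w.r.t. the l2 norm is
# upper-bounded by the product of the layers' spectral norms.
import numpy as np

def spectral_norm(W: np.ndarray) -> float:
    """Largest singular value of W (the l2-induced operator norm)."""
    return float(np.linalg.norm(W, ord=2))

def lipschitz_upper_bound(weights: list) -> float:
    """Product of per-layer spectral norms: an upper bound on the Lipschitz
    constant of a feed-forward network with 1-Lipschitz activations."""
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound

def project_weights(weights: list, per_layer_limit: float) -> list:
    """Rescale each weight matrix so its spectral norm is at most the limit
    (a simple projection-style step; the limit is a hyperparameter)."""
    projected = []
    for W in weights:
        s = spectral_norm(W)
        projected.append(W if s <= per_layer_limit else W * (per_layer_limit / s))
    return projected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
    print("bound before projection:", lipschitz_upper_bound(weights))
    weights = project_weights(weights, per_layer_limit=1.0)
    print("bound after projection: ", lipschitz_upper_bound(weights))
```

In practice such a projection would be applied to the network's weights after each optimiser update, so that the product of the per-layer limits gives a guaranteed bound on the trained network's Lipschitz constant.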
    109 Citations
    • System Identification Through Lipschitz Regularized Deep Neural Networks
    • MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes
    • Lipschitz regularized Deep Neural Networks converge and generalize
    • The coupling effect of Lipschitz regularization in deep neural networks
    • Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
    • On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory
    • Sorting out Lipschitz function approximation
    • Bounding Singular Values of Convolution Layers
    • Stabilizing Invertible Neural Networks Using Mixture Models
