Analytical bounds on the local Lipschitz constants of affine-ReLU functions
@article{Avant2020AnalyticalBO,
  title   = {Analytical bounds on the local Lipschitz constants of affine-ReLU functions},
  author  = {Trevor Avant and Kristi A. Morgansen},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2008.06141}
}
In this paper, we determine analytical bounds on the local Lipschitz constants of affine functions composed with rectified linear units (ReLUs). Affine-ReLU functions represent a widely used layer type in deep neural networks, since convolution, fully-connected, and normalization operations are all affine and are often followed by a ReLU activation. Using an analytical approach, we mathematically determine upper bounds on the local Lipschitz constant of an affine-ReLU…
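To make the quantity being bounded concrete, here is a minimal numerical sketch, not the paper's analytical bound: for a single affine-ReLU function f(x) = ReLU(Wx + b) over an l2 ball around a point x0, only the units whose pre-activations can become positive somewhere in the ball contribute, so the spectral norm of W restricted to those rows upper-bounds the local Lipschitz constant. The matrices, the radius eps, and the interval-style reasoning below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))   # weights of a small affine layer (placeholder)
b = rng.standard_normal(6)        # bias (placeholder)
x0 = rng.standard_normal(4)       # center of the local region
eps = 0.1                         # radius of the l2 ball around x0

# Over the ball, unit i's pre-activation w_i @ x + b_i lies within
# [w_i @ x0 + b_i - eps * ||w_i||, w_i @ x0 + b_i + eps * ||w_i||].
row_norms = np.linalg.norm(W, axis=1)
upper_pre = W @ x0 + b + eps * row_norms

# Units that can be active somewhere in the ball; every other unit outputs
# an identical zero on the ball, so its row cannot contribute.
possibly_active = upper_pre > 0

# f(x) = ReLU(W x + b) is piecewise affine, so its local Lipschitz constant
# (w.r.t. the 2-norm) is at most the spectral norm of W restricted to the
# possibly-active rows.
local_bound = np.linalg.norm(W[possibly_active], 2) if possibly_active.any() else 0.0
global_bound = np.linalg.norm(W, 2)   # naive bound that ignores the region
print(f"local upper bound: {local_bound:.4f}   global spectral norm: {global_bound:.4f}")
```

In this toy setting the local bound is typically smaller than the global spectral norm of W, which illustrates why region-aware bounds can be tighter than naive global ones.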
5 Citations
Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization
- Computer Science, Mathematics
- 2020
A convex program, in the form of a linear matrix inequality (LMI), is proposed to certify incremental quadratic constraints on the map of a neural network over a region of interest, which can in turn be used to certify stability and robustness of feedback systems involving neural networks.
Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization
- Computer Science, Mathematics · L4DC
- 2021
A convex program, in the form of a linear matrix inequality (LMI), is proposed to certify incremental quadratic constraints on the map of a neural network over a region of interest, which can in turn be used to certify stability and robustness of feedback systems involving neural networks.
An Introduction to Neural Network Analysis via Semidefinite Programming
- Computer Science · 2021 60th IEEE Conference on Decision and Control (CDC)
- 2021
This overview presents a convex optimization framework for the analysis of neural networks that abstracts hard-to-analyze components of a network with the formalism of quadratic constraints, which makes it possible to reason about various properties of neural networks via semidefinite programming.
Slow Feature Extraction Algorithm Based on Visual Selection Consistency Continuity and Its Application
- Computer Science · Traitement du Signal
- 2021
The proposed slow feature extraction algorithm, based on visual selection consistency continuity, performs well in prediction and classification and also shows good anti-noise capacity under limited noise conditions.
A Neurosymbolic Approach to the Verification of Temporal Logic Properties of Learning enabled Control Systems
- Computer Science · ArXiv
- 2023
This paper presents a model for the verification of neural network (NN) controllers against general signal temporal logic (STL) specifications, using a custom neural architecture in which an STL formula is mapped into a feed-forward neural network with ReLU activations.
References
Showing 1-10 of 18 references
Exactly Computing the Local Lipschitz Constant of ReLU Networks
- Computer Science, Mathematics · NeurIPS
- 2020
A novel analytic result is presented that relates gradient norms to Lipschitz constants for nondifferentiable functions; it is applied to networks trained on synthetic datasets and MNIST, yielding observations about the tightness of competing Lipschitz estimators and the effects of regularized training on Lipschitz constants.
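As a rough companion to the entry above, the following sketch (an illustrative assumption, not the cited paper's exact algorithm) uses the fact that a ReLU network is piecewise linear, so the spectral norm of the Jacobian at sampled points gives an empirical lower bound on the local Lipschitz constant of a small two-layer network over a ball; all weights and the radius are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # placeholder layer 1
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)   # placeholder layer 2
x0, eps = rng.standard_normal(4), 0.5                          # region center and radius

best = 0.0
for _ in range(2000):
    # sample a point in the l2 ball of radius eps around x0
    d = rng.standard_normal(4)
    x = x0 + eps * rng.uniform() * d / np.linalg.norm(d)
    # Jacobian of x -> W2 @ relu(W1 @ x + b1) + b2 at a differentiable point
    active = (W1 @ x + b1 > 0).astype(float)
    J = W2 @ (active[:, None] * W1)
    best = max(best, np.linalg.norm(J, 2))

print(f"sampled lower bound on the local Lipschitz constant: {best:.4f}")
```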
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
- Computer Science · NeurIPS
- 2019
A convex optimization framework is proposed to compute guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently, and is experimentally demonstrated to be the most accurate of the estimators compared from the literature.
On Lipschitz Bounds of General Convolutional Neural Networks
- Computer Science, Mathematics · IEEE Transactions on Information Theory
- 2020
This paper develops a general framework for analyzing the Lipschitz bounds of CNNs, proposes a linear program that estimates these bounds, and establishes concentration inequalities for the output distribution with respect to a stationary random input signal.
Lipschitz constant estimation of Neural Networks via sparse polynomial optimization
- Computer Science · ICLR
- 2020
This work introduces LiPopt, a polynomial optimization framework for computing increasingly tight upper bounds on the Lipschitz constant of neural networks, and shows how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of the computation.
Lipschitz regularity of deep neural networks: analysis and efficient estimation
- Computer Science · NeurIPS
- 2018
This paper provides AutoLip, the first generic algorithm for upper bounding the Lipschitz constant of any automatically differentiable function, and proposes an improved algorithm named SeqLip that takes advantage of the linear computation graph to split the computation per pair of consecutive layers.
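For context, the classic generic upper bound these methods relate to is the product of the layer-wise Lipschitz constants; the sketch below (with placeholder weights) shows this naive product-of-norms bound for a small ReLU MLP, using the fact that ReLU is 1-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(2)
# placeholder weight matrices of a 3-layer ReLU MLP: x -> W3 relu(W2 relu(W1 x))
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((6, 8)),
           rng.standard_normal((2, 6))]

# ReLU is 1-Lipschitz, so the network's Lipschitz constant is at most the
# product of the layer spectral norms.
naive_bound = np.prod([np.linalg.norm(W, 2) for W in weights])
print(f"product-of-norms upper bound: {naive_bound:.4f}")
```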
Regularisation of neural networks by enforcing Lipschitz continuity
- Computer Science · Mach. Learn.
- 2021
Training a neural network with a bounded Lipschitz constant is formulated as a constrained optimisation problem that can be solved using projected stochastic gradient methods, and the resulting models are shown to outperform models trained with other common regularisers.
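A minimal sketch of the kind of projection step this entry refers to, under the assumption that the Lipschitz constraint is enforced per layer by rescaling the weight matrix so that its spectral norm stays below a chosen bound; this is illustrative rather than the authors' exact procedure.

```python
import numpy as np

def project_spectral(W: np.ndarray, lam: float) -> np.ndarray:
    """Rescale W so that its spectral norm is at most lam (a no-op if it already is)."""
    spec = np.linalg.norm(W, 2)          # largest singular value
    return W / max(1.0, spec / lam)

# usage: apply after each stochastic gradient update (weights here are placeholders)
W = np.random.default_rng(3).standard_normal((16, 8))
W_proj = project_spectral(W, lam=1.0)
print(np.linalg.norm(W, 2), np.linalg.norm(W_proj, 2))   # the second value is <= 1.0
```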
Singular Values for ReLU Layers
- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2020
A comprehensive, singular-value-centric view of ReLU layers is given by studying how the activation function ReLU interacts with the linear component of the layer and what role this interaction plays in the success of the neural network in achieving its intended task.
Robust Large Margin Deep Neural Networks
- Computer Science · IEEE Transactions on Signal Processing
- 2017
The analysis leads to the conclusion that a bounded spectral norm of the network's Jacobian matrix in the neighbourhood of the training samples is crucial for a deep neural network of arbitrary depth and width to generalize well.
Sensitivity and Generalization in Neural Networks: an Empirical Study
- Computer Science · ICLR
- 2018
It is found that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that this robustness correlates well with generalization.
Spectrally-normalized margin bounds for neural networks
- Computer Science · NIPS
- 2017
This bound is empirically investigated for a standard AlexNet network trained with SGD on the MNIST and CIFAR-10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and that the presented bound is sensitive to this complexity.