Corpus ID: 244270929

Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification

@article{Xu2021TraversingTL,
  title={Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification},
  author={Shaojie Xu and Joel Vaughan and Jie Chen and Aijun Zhang and A. Sudjianto},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.08922}
}
Although neural networks (NNs) with ReLU activation functions have found success in a wide range of applications, their adoption in risk-sensitive settings has been limited by concerns about robustness and interpretability. Previous works on examining robustness and improving interpretability have partially exploited the piecewise-linear functional form of ReLU NNs. In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the…
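The abstract's central object, the local polytope, can be made concrete with a small numerical example. The sketch below is a minimal NumPy illustration of the underlying idea, not the paper's traversal algorithm; the network sizes, random weights, and helper names such as activation_pattern and local_affine_map are assumptions made for illustration. It fixes the ReLU activation pattern at a point and recovers the affine map that the network realizes on that point's polytope.

import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-hidden-layer ReLU network with random weights (illustrative only).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h1 = np.maximum(W1 @ x + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return W3 @ h2 + b3

def activation_pattern(x):
    """Binary pattern of which ReLUs are active at x; it indexes the local polytope."""
    h1 = W1 @ x + b1
    a1 = h1 > 0
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    a2 = h2 > 0
    return a1, a2

def local_affine_map(x):
    """Affine map (A, c) with network(y) == A @ y + c for every y in x's polytope."""
    a1, a2 = activation_pattern(x)
    D1, D2 = np.diag(a1.astype(float)), np.diag(a2.astype(float))
    A = W3 @ D2 @ W2 @ D1 @ W1
    c = W3 @ D2 @ (W2 @ D1 @ b1 + b2) + b3
    return A, c

x = np.array([0.3, -0.7])
A, c = local_affine_map(x)
print(forward(x), A @ x + c)          # identical up to floating point
# A sufficiently small perturbation usually stays in the same polytope,
# so the affine map still reproduces the network exactly there.
eps = 1e-4 * rng.normal(size=2)
print(forward(x + eps), A @ (x + eps) + c)

Two polytopes are adjacent when their activation patterns differ in exactly one neuron, which is the adjacency structure the abstract refers to.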

Citations

Support Vectors and Gradient Dynamics for Implicit Bias in ReLU Networks
TLDR
This work examines the gradient flow dynamics in the parameter space when training single-neuron ReLU networks, and discovers an implicit bias in terms of support vectors in ReLU networks, which play a key role in why and how ReLU networks generalize well.

References

SHOWING 1-10 OF 48 REFERENCES
Reachability Analysis for Feed-Forward Neural Networks using Face Lattices
TLDR
This work proposes a parallelizable technique to compute the exact reachable set of a neural network for a given input set; it is also capable of constructing the complete input set for a given output set, so that any input that leads to a safety violation can be tracked.
A Unified View of Piecewise Linear Neural Network Verification
TLDR
A unified framework that encompasses previous methods is presented, and new methods that combine the strengths of multiple existing approaches are identified, accomplishing a speedup of two orders of magnitude compared to the previous state of the art.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
TLDR
Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
Gradient descent optimizes over-parameterized deep ReLU networks
TLDR
The key idea of the proof is that Gaussian random initialization followed by gradient descent produces a sequence of iterates that stay inside a small perturbation region centered at the initial weights, in which the training loss function of the deep ReLU networks enjoys nice local curvature properties that ensure the global convergence of gradient descent.
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
TLDR
An approach is presented for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.
Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification
TLDR
A convenient toolkit for interpretability, diagnostics, and simplification of a pre-trained deep ReLU network is developed; it utilizes the activation pattern to disentangle the complex network into an equivalent set of local linear models (LLMs), on which the toolkit operates.
On the Number of Linear Regions of Deep Neural Networks
We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. …
Bounding and Counting Linear Regions of Deep Neural Networks
TLDR
The results indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with the same number of neurons if that number exceeds the dimension of the input.
A randomized gradient-free attack on ReLU networks
TLDR
A new attack scheme for the class of ReLU networks, based on direct optimization over the resulting linear regions, is proposed and is less susceptible to defences targeting their functional properties (a toy sketch of this linear-region view appears after this reference list).
Understanding Deep Neural Networks with Rectified Linear Units
TLDR
The gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature, and a new lower bound on the number of affine pieces is shown, larger than previous constructions in certain regimes of the network architecture.
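As a companion to the linear-region references above, and in particular to the randomized gradient-free attack, the following toy sketch shows how the affine map of the current linear region can drive an attack using only forward computations. It is a hedged illustration under assumed settings, not the method from the cited paper; the two-layer network, the step sizes, and the names local_row and gradient_free_attack are all hypothetical.

import numpy as np

rng = np.random.default_rng(1)
# A tiny one-hidden-layer ReLU "classifier" with a scalar score output.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def score(x):
    return float(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)

def local_row(x):
    """Row vector A of the affine map valid on x's linear region."""
    active = (W1 @ x + b1) > 0
    return (W2 * active) @ W1          # shape (1, 2)

def gradient_free_attack(x0, step=0.05, n_steps=40, n_restarts=5):
    """Greedy score descent using only forward passes and local affine maps."""
    best = x0.copy()
    for _ in range(n_restarts):
        x = x0 + 0.01 * rng.normal(size=x0.shape)   # random restart near x0
        for _ in range(n_steps):
            A = local_row(x)[0]
            if np.linalg.norm(A) < 1e-12:
                break
            # Within the current region the score is affine in x, so this is
            # the exact steepest-descent direction for that region.
            x = x - step * A / np.linalg.norm(A)
        if score(x) < score(best):
            best = x
    return best

x0 = np.array([0.5, 0.5])
x_adv = gradient_free_attack(x0)
print(score(x0), score(x_adv))        # the score should drop

The random restarts are a crude stand-in for exploring neighbouring linear regions; the cited paper's actual scheme organizes this search differently.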