Output Range Analysis for Deep Feedforward Neural Networks

@inproceedings{Dutta2018OutputRA,
  title={Output Range Analysis for Deep Feedforward Neural Networks},
  author={Souradeep Dutta and Susmit Jha and Sriram Sankaranarayanan and Ashish Tiwari},
  booktitle={NFM},
  year={2018}
}
Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case… 
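As a minimal illustration of what a safe over-approximation of the output set looks like, the sketch below propagates an axis-aligned input box through a small ReLU network using plain interval arithmetic. This is not the paper's algorithm; the network, function names, and values are illustrative assumptions.

import numpy as np

def relu_box_bounds(layers, lo, hi):
    """Propagate the input box [lo, hi] through a ReLU network given as
    (W, b) pairs (last layer affine, no ReLU). Returns sound, possibly
    loose, bounds on every output neuron."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Sound interval affine map: each bound pairs positive weights
        # with the matching endpoint and negative weights with the other.
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:  # hidden layers apply ReLU (monotone)
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy 2-2-1 network on the input box [-1, 1]^2 (illustrative values).
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.zeros(1))]
print(relu_box_bounds(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
# -> (array([0.]), array([4.5])): a guaranteed enclosure of the true range.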
Acceleration techniques for optimization over trained neural network ensembles
TLDR
The results suggest that the optimization algorithm outperforms the adaptation of a state-of-the-art approach in terms of computational time and optimality gaps.
Reluplex: a calculus for reasoning about deep neural networks
TLDR
A novel, scalable, and efficient technique based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks.
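Reluplex performs its ReLU case splits lazily inside a modified simplex procedure; the brute-force sketch below conveys only the case-split idea. Fixing each hidden ReLU's phase makes the network linear, so one LP per phase yields the exact maximum output. On the same toy network as the interval sketch above, this gives an exact maximum of 2.5 versus the looser interval bound of 4.5. All names and values are illustrative assumptions, not Reluplex's implementation.

import itertools
import numpy as np
from scipy.optimize import linprog

# Same toy 2-2-1 ReLU network as above, input box [-1, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
w2, b2 = np.array([1.0, 1.0]), 0.0

best = -np.inf
for phase in itertools.product([0, 1], repeat=2):
    # Active neurons pass z_i through (with z_i >= 0); inactive ones
    # output 0 (with z_i <= 0). Variables: [x1, x2, z1, z2].
    mask = np.array(phase, dtype=float)
    c = np.concatenate([np.zeros(2), -w2 * mask])   # maximize the output
    A_eq = np.hstack([W1, -np.eye(2)])              # enforce z = W1 x + b1
    z_bounds = [(0, None) if p else (None, 0) for p in phase]
    res = linprog(c, A_eq=A_eq, b_eq=-b1,
                  bounds=[(-1, 1), (-1, 1)] + z_bounds)
    if res.success:                                 # this phase is feasible
        best = max(best, -res.fun + b2)
print("exact max output:", best)                    # 2.5 (interval gave 4.5)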
Reachability Analysis for Feed-Forward Neural Networks using Face Lattices
TLDR
This work proposes a parallelizable technique to compute the exact reachable set of a neural network for a given input set; it can also construct the complete input set corresponding to a given output set, so that any input leading to a safety violation can be traced.
Reachability analysis for neural feedback systems using regressive polynomial rule inference
TLDR
This work presents an approach to construct reachable set over-approximations for continuous-time dynamical systems controlled by neural network feedback, integrating a Taylor model-based flowpipe construction scheme for continuous differential equations with an approach that replaces the neural network feedback law on a small subset of inputs by a polynomial mapping.
Optimizing over an ensemble of neural networks
TLDR
Experimental evaluations of the solution methods suggest that using ensembles of neural networks yields more stable and higher quality solutions than single neural networks, and that the optimization algorithm outperforms a state-of-the-art approach in terms of computational time and optimality gaps.
Abstraction based Output Range Analysis for Neural Networks
TLDR
A novel abstraction technique is presented that constructs a simpler neural network with fewer neurons, albeit with interval weights, called an interval neural network (INN), which over-approximates the output range of the given neural network.
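A hedged sketch of the interval-weight idea only: if each weight is known merely to lie in an interval, a forward pass can still return a box guaranteed to contain the output of every concrete network consistent with those intervals. The construction of the smaller INN itself (neuron merging) is not shown, and all names and values are my assumptions.

import numpy as np

def inn_forward(layers, lo, hi):
    """Forward pass of an interval neural network: each layer carries
    elementwise weight bounds (Wl, Wu) and a bias b. The returned box
    contains the output of every concrete network whose weights lie
    inside the intervals, for every input in [lo, hi]."""
    for i, (Wl, Wu, b) in enumerate(layers):
        # Interval product: for each weight-interval times input-interval,
        # take the min/max over the four endpoint products, then sum.
        cands = np.stack([Wl * lo, Wl * hi, Wu * lo, Wu * hi])
        lo = cands.min(axis=0).sum(axis=1) + b
        hi = cands.max(axis=0).sum(axis=1) + b
        if i < len(layers) - 1:          # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Demo: widen a concrete weight matrix by +/-0.1 into interval weights.
W, eps = np.array([[1.0, -1.0], [0.5, 2.0]]), 0.1
layers = [(W - eps, W + eps, np.zeros(2)),
          (np.array([[0.9, 0.9]]), np.array([[1.1, 1.1]]), np.zeros(1))]
print(inn_forward(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))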
Stability and feasibility of neural network-based controllers via output range analysis
  • B. Karg, S. Lucia · 2020 59th IEEE Conference on Decision and Control (CDC), 2020
TLDR
This paper introduces a parametric description of the neural network controller and uses a mixed-integer linear programming formulation to perform output range analysis of neural networks, and proposes a novel method to modify a neural network controller so that it performs optimally in the LQR sense in a region surrounding the equilibrium.
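One common way to make ReLU exact inside such a mixed-integer formulation is the big-M encoding with one binary phase variable per neuron (the details below are my assumptions, not necessarily this paper's encoding). The sketch encodes a single ReLU neuron and maximizes its output over a box with scipy's MILP interface (scipy >= 1.9).

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# One ReLU neuron y = max(0, w.x + b) over x in [-1, 1]^2, encoded
# exactly with a single binary phase variable delta (big-M style).
w, b = np.array([1.0, -2.0]), 0.5
L = -np.abs(w).sum() + b          # crude pre-activation lower bound
U = np.abs(w).sum() + b           # crude pre-activation upper bound

# Decision vector: [x1, x2, z, y, delta], maximizing y.
c = np.array([0.0, 0.0, 0.0, -1.0, 0.0])
A = np.array([
    [-w[0], -w[1], 1.0, 0.0, 0.0],   # z - w.x = b
    [0.0, 0.0, -1.0, 1.0, 0.0],      # y >= z
    [0.0, 0.0, 0.0, 1.0, -U],        # y <= U * delta
    [0.0, 0.0, -1.0, 1.0, -L],       # y <= z - L * (1 - delta)
])
cons = LinearConstraint(A, lb=[b, 0, -np.inf, -np.inf],
                           ub=[b, np.inf, 0, -L])
bnds = Bounds([-1, -1, L, 0, 0], [1, 1, U, U, 1])
res = milp(c, constraints=cons, integrality=[0, 0, 0, 0, 1], bounds=bnds)
print(res.x, -res.fun)               # max y = 3.5 at x = (1, -1)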
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
TLDR
A convex optimization framework computes guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently, and is experimentally demonstrated to be the most accurate among methods in the literature.
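For contrast with such optimization-based certificates, the classical coarse baseline multiplies the spectral norms of the layer weight matrices; since ReLU is 1-Lipschitz, the product is always a sound, though often very loose, upper bound. This baseline is not the paper's method, and the values below are illustrative.

import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: a valid global Lipschitz bound
    (in the 2-norm) for any network with 1-Lipschitz activations."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(1, 8))]
print(lipschitz_upper_bound(weights))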
Computing Linear Restrictions of Neural Networks
TLDR
This paper shows how to exactly determine decision boundaries of an ACAS Xu neural network, providing significantly improved confidence in the results compared to prior work that sampled finitely many points in the input space and empirically falsify the core assumption behind a well-known hypothesis about adversarial examples.
Fixed-Point Code Synthesis For Neural Networks
TLDR
A new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only, while ensuring that the new fixed-point neural network has the same behavior as the initial floating-point one.
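A minimal sketch of the underlying fixed-point idea only (not the paper's format-tuning technique; names and values are mine): weights and activations are scaled by 2^f and stored as integers, so a dense layer needs integer multiply-adds plus one shift.

import numpy as np

def to_fixed(x, f):
    """Quantize to signed fixed point with f fractional bits."""
    return np.round(x * (1 << f)).astype(np.int64)

def fixed_dense(Wq, bq, xq, f):
    # The product of two Qf values carries 2f fractional bits, so align
    # the bias and rescale the accumulator once; integer ops only.
    return (Wq @ xq + (bq << f)) >> f

W = np.array([[0.75, -1.25], [0.5, 2.0]])
bvec, x, f = np.array([0.1, -0.2]), np.array([1.0, -1.0]), 16
yq = fixed_dense(to_fixed(W, f), to_fixed(bvec, f), to_fixed(x, f), f)
print(yq / (1 << f), W @ x + bvec)   # fixed point vs. float reference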
...

References

Showing 1–10 of 36 references
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
TLDR
An approach is presented for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function; it infers additional node phases for the non-linear nodes from partial node phase assignments, similar to unit propagation in classical SAT solving.
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
TLDR
Results show that the novel, scalable, and efficient technique presented can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
Output Reachable Set Estimation and Verification for Multilayer Neural Networks
TLDR
An application to the safety verification of a robotic arm model with two joints is presented to show the effectiveness of the proposed approaches to output reachable set estimation and safety verification for multilayer perceptron (MLP) neural networks.
Piecewise Linear Neural Network verification: A comparative study
TLDR
Motivated by the need to accelerate progress in this very important area, a number of different approaches are investigated, based on Mixed Integer Programming and Satisfiability Modulo Theories, as well as a novel method based on the Branch-and-Bound framework.
PLATO: Policy learning using adaptive trajectory optimization
TLDR
PLATO is proposed: a continuous, reset-free reinforcement learning algorithm that trains complex control policies with supervised learning, using model-predictive control (MPC) to generate the supervision, so it never needs to run a partially trained and potentially unsafe policy.
Reachable Set Estimation and Verification for a Class of Piecewise Linear Systems with Neural Network Controllers
TLDR
A layer-by-layer approach is developed for computing the output reachable set of ReLU neural networks, formulated as a set of manipulations on a union of polytopes.
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Verifying Neural Networks with Mixed Integer Programming
TLDR
It is demonstrated that, for networks that are piecewise affine (for example, deep networks with ReLU and maxpool units), proving no adversarial example exists can be naturally formulated as solving a mixed integer program.
Reachable Set Estimation and Safety Verification for Piecewise Linear Systems with Neural Network Controllers
TLDR
The output reachable set can be estimated iteratively over a given finite-time interval, and safety verification for piecewise linear systems with neural network controllers can be performed by checking for intersections between unsafe regions and the output reachable set.
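A hedged sketch of such an iterative estimation loop, assuming a discrete-time linear plant x_{k+1} = A x_k + B u_k with a ReLU network controller, and plain interval arithmetic for the over-approximation (the paper's set representation may differ; all names and values are illustrative):

import numpy as np

def interval_affine(M, lo, hi):
    """Sound interval image of x -> M @ x over the box [lo, hi]."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def reach_boxes(A, B, layers, lo, hi, horizon):
    """Boxes enclosing x_k for x_{k+1} = A x_k + B * net(x_k), where
    net is a ReLU network given as (W, b) pairs (last layer affine)."""
    boxes = [(lo, hi)]
    for _ in range(horizon):
        u_lo, u_hi = lo, hi
        for i, (W, b) in enumerate(layers):
            al, ah = interval_affine(W, u_lo, u_hi)
            u_lo, u_hi = al + b, ah + b
            if i < len(layers) - 1:      # ReLU on hidden layers
                u_lo, u_hi = np.maximum(u_lo, 0.0), np.maximum(u_hi, 0.0)
        xl, xh = interval_affine(A, lo, hi)
        ul, uh = interval_affine(B, u_lo, u_hi)
        lo, hi = xl + ul, xh + uh
        boxes.append((lo, hi))
    return boxes

def hits_unsafe(lo, hi, u_lo, u_hi):
    """True if the state box [lo, hi] intersects the unsafe box."""
    return bool(np.all(hi >= u_lo) and np.all(u_hi >= lo))

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
net = [(np.array([[-1.0, -2.0]]), np.zeros(1))]   # linear state feedback
boxes = reach_boxes(A, B, net, np.array([-0.1, -0.1]), np.array([0.1, 0.1]), 5)
print(boxes[-1])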
An Abstraction-Refinement Approach to Verification of Artificial Neural Networks
TLDR
A solution to verify their safety using abstractions to Boolean combinations of linear arithmetic constraints, and it is shown that whenever the abstract MLP is declared to be safe, the same holds for the concrete one.
...