Corpus ID: 239049700

RoMA: a Method for Neural Network Robustness Measurement and Assessment

Natan Levy, Guy Katz
Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, protein folding, and others. However, their reliability is heavily plagued by adversarial inputs: small input perturbations that cause the model to produce erroneous outputs. Adversarial inputs can occur naturally when the system’s environment behaves randomly, even in the absence of a malicious adversary, and are a severe cause for concern when attempting to deploy… 

Figures and Tables from this paper


Robustness of Neural Networks: A Probabilistic and Practical Approach
  • Ravi Mangal, A. Nori, A. Orso
  • Computer Science, Mathematics
    2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER)
  • 2019
This work proposes a novel notion of robustness: probabilistic robustness, which requires the neural network to be robust with at least (1 - ε) probability with respect to the input distribution, and presents an algorithm, based on abstract interpretation and importance sampling, for checking whether a neural network is probabilistically robust.
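The (1 - ε) condition can be sketched in symbols (a hypothetical rendering, not the paper's exact notation; $D$ is the input distribution, $f$ the network, and $P$ the property being checked):

```latex
\Pr_{x \sim D}\left[\, f(x) \text{ satisfies } P \,\right] \;\ge\; 1 - \varepsilon
```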
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
This paper proposes a novel probabilistic framework, PROVEN, to PRObabilistically VErify Neural networks with statistical guarantees: PROVEN certifies the probability that the classifier's top-1 prediction cannot be altered under any constrained $\ell_p$-norm perturbation to a given input.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Explaining and Harnessing Adversarial Examples
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
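The linearity argument above motivates that paper's fast gradient sign method (FGSM), which perturbs an input one step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic classifier (the weights, input, and step size below are made up for illustration):

```python
import numpy as np

# FGSM sketch: for a logistic model p = sigmoid(w.x + b) with
# cross-entropy loss L, the input gradient is dL/dx = (p - y) * w,
# so the one-step perturbation is x_adv = x + eps * sign(dL/dx).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)  # step that increases the loss

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, 0.1])            # true class y = 1, correctly classified
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)

print(sigmoid(w @ x + b) > 0.5)      # True: original prediction is class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: small perturbation flips it
```

A step of size 0.3 per coordinate is enough to flip this toy decision, illustrating the linearity point: the perturbation's effect accumulates across input dimensions.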
Measuring Neural Net Robustness with Constraints
This work proposes metrics for measuring the robustness of a neural net and devises a novel algorithm for approximating these metrics based on an encoding of robustness as a linear program, generating more informative estimates of robustness metrics than those produced by existing algorithms.
Safety Verification of Deep Neural Networks
A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
A Statistical Approach to Assessing Neural Network Robustness
This work presents a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated under an input model, and demonstrates that this approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.
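Estimating the proportion of violating inputs is, at its simplest, Monte Carlo sampling under the input model. A toy sketch (the linear classifier and Gaussian noise model below are stand-ins, not the paper's method, which uses more sample-efficient estimators):

```python
import numpy as np

# Estimate the probability that random input perturbations change a
# classifier's decision, by sampling from an assumed input noise model.

rng = np.random.default_rng(0)

def classify(x, w=np.array([1.0, 1.0])):
    return int(w @ x > 0)           # toy linear decision rule

def violation_probability(x, sigma, n_samples=10_000):
    label = classify(x)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.size))
    flips = sum(classify(x + d) != label for d in noise)
    return flips / n_samples        # Monte Carlo estimate of P[violation]

x = np.array([0.5, 0.5])            # nominal input, decision = 1
p_hat = violation_probability(x, sigma=0.3)
print(f"estimated violation probability: {p_hat:.3f}")
```

Naive sampling like this needs very many samples when the violation probability is small, which is exactly why the paper's multi-level splitting approach matters for rare failures.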
Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
A data-driven, optimization-based method is developed that simultaneously certifies the safety of network outputs and localizes them; the method subsumes state-of-the-art reachability analysis and robustness certification.
Verification of deep probabilistic models
A novel formulation of verification is presented for deep probabilistic models, which take conditioning inputs and sample latent variables in the course of producing an output; it requires that the model's output satisfy a linear constraint with high probability over the sampling of latent variables, for every choice of conditioning input.
Formal Security Analysis of Neural Networks using Symbolic Intervals
This paper designs, implements, and evaluates a new direction for formally checking security properties of DNNs without using SMT solvers, leveraging interval arithmetic to compute rigorous, easily parallelizable bounds on the DNN outputs.
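The core interval-arithmetic idea can be sketched by pushing an input box through one affine layer and a ReLU (the weights below are illustrative, and this naive propagation is looser than the paper's symbolic intervals):

```python
import numpy as np

# Interval bound propagation sketch: given elementwise input bounds
# [lo, hi], compute rigorous output bounds for W @ x + b followed by ReLU.

def affine_bounds(lo, hi, W, b):
    # Split weights by sign so each output bound pairs with the
    # correct input endpoint (positive weights take lo for the lower
    # bound, negative weights take hi, and vice versa).
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -0.25])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input box

lo1, hi1 = relu_bounds(*affine_bounds(lo, hi, W, b))
print(lo1, hi1)  # [0. 0.] [0.3 0. ]
```

Because each step is evaluated per layer independently, the bounds are sound but can grow loose with depth; the paper's symbolic intervals track input dependencies to tighten them.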