DeepXplore: Automated Whitebox Testing of Deep Learning Systems
@article{Pei2019DeepXploreAW, title={DeepXplore: Automated Whitebox Testing of Deep Learning Systems}, author={Kexin Pei and Yinzhi Cao and Junfeng Yang and Suman Sekhar Jana}, journal={GetMobile Mob. Comput. Commun.}, year={2019}, volume={22}, pages={36-38} }
Over the past few years, Deep Learning (DL) has made tremendous progress, achieving or surpassing human-level performance for a diverse set of tasks, including image classification, speech recognition, and playing games like Go. These advances have led to widespread adoption and deployment of DL in security- and safety-critical systems, such as self-driving cars, malware detection, and aircraft collision avoidance systems.
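The core technique in the full paper is neuron coverage, a metric analogous to code coverage that counts the fraction of neurons activated (above a threshold, after per-layer scaling) by at least one test input; DeepXplore maximizes it jointly with differential behaviors across multiple DNNs. A minimal NumPy sketch of the metric follows; the toy two-layer network, the threshold, and the function names are illustrative, not the paper's experimental setup.

```python
# Minimal sketch of DeepXplore's neuron-coverage metric: the fraction of
# neurons whose (min-max scaled) activation exceeds a threshold on at least
# one test input. Toy fully connected network, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 8 -> 16 -> 10, ReLU activations.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 10)), np.zeros(10)

def layer_activations(x):
    """Return the activation vector of each layer for input x."""
    h1 = np.maximum(x @ W1 + b1, 0.0)
    h2 = np.maximum(h1 @ W2 + b2, 0.0)
    return [h1, h2]

def neuron_coverage(test_inputs, threshold=0.25):
    covered = {}  # (layer_index, neuron_index) -> activated at least once?
    for x in test_inputs:
        for li, act in enumerate(layer_activations(x)):
            # Scale activations to [0, 1] within each layer, as DeepXplore
            # does, so one threshold is meaningful across layers.
            span = act.max() - act.min()
            scaled = (act - act.min()) / span if span > 0 else np.zeros_like(act)
            for ni, a in enumerate(scaled):
                covered[(li, ni)] = covered.get((li, ni), False) or a > threshold
    return sum(covered.values()) / len(covered)

print(f"neuron coverage: {neuron_coverage(rng.normal(size=(20, 8))):.2%}")
```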
14 Citations
Enhancing ML Robustness Using Physical-World Constraints
- Computer ScienceArXiv
- 2019
Results on the KITTI and GTSRB datasets show improved robustness against physical attacks at minimal cost to accuracy, achieved with a hierarchical classification paradigm that enforces invariants limiting the attacker's action space.
Test Case Generation for Convolutional Neural Network
- Computer Science
- 2020
This paper presents a test-image generation approach for Convolutional Neural Networks, which are widely used for image recognition, and is the first attempt to improve the quality of the generated images using generative modeling.
MuNN: Mutation Analysis of Neural Networks
- Computer Science2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C)
- 2018
MuNN is proposed, a mutation analysis method for (deep) neural networks inspired by the success of mutation analysis in conventional software testing; the results show that mutation analysis of neural networks has strong domain characteristics and that mutation effects gradually weaken with increasing depth.
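A toy sketch of weight-level mutation analysis in the spirit of MuNN (the paper's actual mutation operators differ): perturb one weight at a time and count how many mutants are "killed", i.e., change some test input's predicted label.

```python
# Illustrative weight-level mutation operator, not MuNN's actual operator set:
# fuzz one weight, then check whether any test input's prediction changes.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))           # toy single-layer classifier

def predict(W, X):
    return np.argmax(X @ W, axis=1)   # predicted class per row

X_test = rng.normal(size=(50, 8))
baseline = predict(W, X_test)

killed = 0
n_mutants = 100
for _ in range(n_mutants):
    Wm = W.copy()
    i, j = rng.integers(W.shape[0]), rng.integers(W.shape[1])
    Wm[i, j] += rng.normal(scale=0.5)          # Gaussian fuzz of one weight
    if (predict(Wm, X_test) != baseline).any():
        killed += 1                            # mutant "killed" by the suite

print(f"mutation score: {killed / n_mutants:.2f}")
```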
Bayes-Probe: Distribution-Guided Sampling for Prediction Level Sets
- Computer ScienceArXiv
- 2020
BAYES-PROBE is introduced, a model inspection method for analyzing neural networks by generating distribution-conforming examples of known prediction confidence, which can be used to synthesize ambivalent predictions, uncover in-distribution adversarial examples, and understand novel-class extrapolation and domain adaptation behaviours.
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples
- Computer ScienceNeurIPS
- 2018
This work proposes a novel adversarial sample detection technique for face recognition models, based on interpretability, that features a novel bi-directional correspondence inference between attributes and internal neurons to identify neurons critical for individual attributes.
Testing Neural Network Classifiers Based on Metamorphic Relations
- Computer Science2019 6th International Conference on Dependable Systems and Their Applications (DSA)
- 2020
This paper proposes a testing method for neural network classifiers based on metamorphic relations, which designs metamorphic relations to transform the original data set into derivative data sets and checks whether the outputs conform to the metamorphic relations.
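A minimal example of such a metamorphic test, assuming a label-preserving transform (a small brightness shift): the relation classify(t(x)) == classify(x) can be checked without ground-truth labels. The classifier below is a random linear stand-in for a trained model.

```python
# Minimal metamorphic test: a small brightness shift is assumed to preserve
# the label, so the prediction on the follow-up input must match the
# prediction on the source input. No ground truth needed.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 10))             # stand-in for a trained classifier

def classify(x):
    return int(np.argmax(x @ W))

def brighten(x, delta=0.05):
    # Source test case -> follow-up test case.
    return np.clip(x + delta, 0.0, 1.0)

violations = 0
images = rng.uniform(size=(100, 64))      # stand-in for a real test set
for x in images:
    if classify(brighten(x)) != classify(x):   # relation violated
        violations += 1

print(f"{violations} metamorphic-relation violations out of {len(images)}")
```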
Scenic: Language-Based Scene Generation
- Computer ScienceArXiv
- 2018
This paper designs Scenic, a domain-specific probabilistic programming language for describing "scenarios", i.e., distributions over scenes; it allows assigning distributions to features of a scene as well as declaratively imposing hard and soft constraints over the scene.
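A conceptual sketch of Scenic's sampling semantics in plain Python rather than Scenic syntax: features of a scene get distributions, and hard constraints are enforced by rejection sampling. The feature names are made up for illustration.

```python
# Conceptual sketch only, not Scenic syntax or its actual sampler:
# distributions over scene features plus hard constraints, resolved by
# rejection sampling.
import random

def sample_scene():
    return {
        "ego_speed": random.uniform(0.0, 30.0),     # m/s
        "car_ahead_gap": random.gauss(20.0, 10.0),  # metres
        "weather": random.choice(["clear", "rain", "fog"]),
    }

def satisfies_constraints(scene):
    # Hard constraint: the lead car must be at least 5 m ahead.
    return scene["car_ahead_gap"] >= 5.0

def sample_valid_scene(max_tries=1000):
    for _ in range(max_tries):
        scene = sample_scene()
        if satisfies_constraints(scene):
            return scene
    raise RuntimeError("constraints too tight to satisfy by rejection")

print(sample_valid_scene())
```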
How to Learn a Model Checker
- Computer ScienceArXiv
- 2017
We show how machine-learning techniques, particularly neural networks, offer a very effective and highly efficient solution to the approximate model-checking problem for continuous and hybrid…
ShapeFlow: Dynamic Shape Interpreter for TensorFlow
- Computer ScienceArXiv
- 2020
ShapeFlow detects shape incompatibility errors highly accurately (no false positives and a single false negative) and highly efficiently (average speed-ups of 499X and 24X over the first and second baseline, respectively).
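A conceptual sketch of dynamic shape checking in that spirit (not ShapeFlow's actual API): propagating only tensor shapes through operators surfaces an incompatibility without executing the real computation.

```python
# Conceptual sketch, not ShapeFlow's API: run the program over abstract
# shapes instead of real tensors so shape bugs surface cheaply and early.
def matmul_shape(a, b):
    """Shape rule for matmul on 2-D shapes given as tuples."""
    if a[1] != b[0]:
        raise ValueError(f"shape mismatch: {a} @ {b} (inner dims {a[1]} vs {b[0]})")
    return (a[0], b[1])

x = (32, 784)            # batch of flattened 28x28 images
w1 = (784, 128)
w2 = (64, 10)            # bug: should be (128, 10)

try:
    h = matmul_shape(x, w1)      # (32, 128)
    out = matmul_shape(h, w2)    # inner dims 128 vs 64
except ValueError as e:
    print("shape bug caught before any real computation:", e)
```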
Correctness Verification of Neural Networks
- Computer ScienceArXiv
- 2019
We present the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which…
References
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
- Computer Science2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015
This work takes convolutional neural networks trained to perform well on the ImageNet or MNIST datasets and uses evolutionary algorithms or gradient ascent to produce "fooling images": images unrecognizable to humans that the DNNs nonetheless label with high confidence as belonging to a dataset class, raising questions about the generality of DNN computer vision.
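A minimal sketch of the gradient-ascent recipe: start from random noise and repeatedly nudge the input to maximize a model's confidence in one target class. A random linear softmax model stands in for a trained DNN, so the gradient is analytic.

```python
# Gradient ascent on the input to maximize confidence in one class,
# starting from noise. Toy linear softmax model, not a trained DNN.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 10))     # stand-in for a trained network
target = 7

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.uniform(size=64)          # unrecognizable noise "image" (flattened)
for _ in range(200):
    p = softmax(x @ W)
    # Gradient of log p[target] w.r.t. x for this linear model.
    grad = W[:, target] - W @ p
    x = np.clip(x + 0.1 * grad, 0.0, 1.0)   # ascend, stay in pixel range

print(f"confidence in class {target}: {softmax(x @ W)[target]:.3f}")
```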
Intriguing properties of neural networks
- Computer ScienceICLR
- 2014
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Explaining and Harnessing Adversarial Examples
- Computer ScienceICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and yields the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
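The paper's fast gradient sign method (FGSM) perturbs an input by eps times the sign of the loss gradient: x_adv = x + eps * sign(grad_x L(x, y)). The sketch below uses a toy linear softmax model with an analytic gradient; a real setup would backpropagate through a trained DNN.

```python
# FGSM on a toy linear softmax model: x_adv = x + eps * sign(grad).
# The random linear model stands in for a trained DNN.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(64, 10))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad(x, y):
    # Gradient of cross-entropy -log p[y] w.r.t. x for logits z = x @ W.
    p = softmax(x @ W)
    return W @ p - W[:, y]

x = rng.uniform(size=64)
y = int(np.argmax(softmax(x @ W)))       # model's clean prediction
eps = 0.25
x_adv = np.clip(x + eps * np.sign(loss_grad(x, y)), 0.0, 1.0)

# Linear models are especially susceptible, matching the paper's argument.
print("clean:", y, "adversarial:", int(np.argmax(softmax(x_adv @ W))))
```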
Differential Testing for Software
- Computer ScienceDigit. Tech. J.
- 1998
Quality is not a question of correctness, but rather of how many bugs are fixed and how few are introduced in the ongoing development process; if the bug count is increasing, the software is deteriorating.
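A minimal differential-testing loop in this style: feed the same random inputs to two implementations that should agree and flag any divergence, with cross-checking as the only oracle. The two mean() variants below are illustrative stand-ins; DeepXplore applies the same idea to multiple DNNs trained for the same task.

```python
# Differential testing: random inputs fed to two implementations that should
# agree; a disagreement is a bug in at least one of them. No other oracle.
import random

def mean_a(xs):
    return sum(xs) / len(xs)

def mean_b(xs):
    # Rewrite with a planted off-by-one bug for demonstration.
    acc = 0.0
    for i in range(1, len(xs)):     # bug: skips xs[0]
        acc += xs[i]
    return acc / len(xs)

random.seed(5)
for trial in range(1000):
    xs = [random.uniform(-1e6, 1e6) for _ in range(random.randint(1, 20))]
    a, b = mean_a(xs), mean_b(xs)
    if abs(a - b) > 1e-6 * max(1.0, abs(a)):
        print(f"divergence on trial {trial}: {a!r} vs {b!r}")
        break
else:
    print("no divergence found in 1000 trials")
```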
Inside Waymo's secret world for training self-driving cars
- 2017
A Google self-driving car caused a crash for the first time
- 2016
Report on autonomous mode disengagements for Waymo self-driving vehicles in California
- 2016
Understanding the fatal Tesla accident on Autopilot and the NHTSA probe
- 2016
ImageNet: Crowdsourcing, benchmarking & other cool things
- CMU VASC Seminar
- 2010