Interactive Analysis of CNN Robustness

@article{Sietzen2021InteractiveAO,
  title={Interactive Analysis of CNN Robustness},
  author={Stefan Sietzen and Mathias Lechner and Judy Borowski and Ramin M. Hasani and Manuela Waldner},
  journal={Computer Graphics Forum},
  year={2021},
  volume={40}
}
While convolutional neural networks (CNNs) have found wide adoption as state-of-the-art models for image-related tasks, their predictions are often highly sensitive to small input perturbations to which human vision is robust. This paper presents Perturber, a web-based application that allows users to instantaneously explore how CNN activations and predictions evolve when a 3D input scene is interactively perturbed. Perturber offers a large variety of scene modifications, such as…
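As a rough offline analogue of what Perturber exposes interactively, the sketch below perturbs an input and measures how a pretrained CNN's intermediate activations and prediction shift in response. The model, probed layer, and noise perturbation are placeholder assumptions; the actual tool perturbs a rendered 3D scene in the browser.

# Hypothetical offline analogue of Perturber-style analysis (not the paper's code):
# perturb an input and measure how a pretrained CNN's intermediate activations
# and prediction respond.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}
def hook(name):
    def fn(module, inp, out):
        activations[name] = out.detach()
    return fn

# Record activations of one intermediate layer (the layer choice is arbitrary here).
model.layer3.register_forward_hook(hook("layer3"))

x = torch.rand(1, 3, 224, 224)                # stand-in for a rendered 3D scene
x_perturbed = x + 0.05 * torch.randn_like(x)  # stand-in for a scene modification

with torch.no_grad():
    p_clean = model(x).softmax(dim=1)
    a_clean = activations["layer3"]
    p_pert = model(x_perturbed).softmax(dim=1)
    a_pert = activations["layer3"]

# How much did the internal representation and the prediction move?
print("activation shift (L2):", (a_pert - a_clean).norm().item())
print("top-1 changed:", p_clean.argmax().item() != p_pert.argmax().item())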
How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
TLDR
An objective psychophysical task is proposed to quantify the benefit of unit-level interpretability methods for humans, and no evidence is found that a widely-used feature visualization method provides humans with better “causal understanding” of unit activations than simple alternative visualizations.

References

Showing 1–10 of 62 references
Interpreting Adversarially Trained Convolutional Neural Networks
TLDR
Surprisingly, adversarial training is found to alleviate the texture bias of standard CNNs trained on object recognition tasks, helping CNNs learn a more shape-biased representation.
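For context, adversarial training swaps each clean training batch for an adversarially perturbed one, typically generated with PGD. A minimal sketch of one such training step follows; the model, epsilon, step size, and iteration count are illustrative assumptions rather than the paper's configuration.

# Sketch of one PGD adversarial-training step (the training regime the paper
# analyzes); epsilon/step sizes below are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(num_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=7):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = pgd(model, x, y)
opt.zero_grad()
F.cross_entropy(model(x_adv), y).backward()  # train on the adversarial batch
opt.step()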
Adversarial-Playground: A visualization suite showing how adversarial examples fool deep learning
TLDR
A web-based visualization tool, Adversarial-Playground, demonstrates the efficacy of common adversarial methods against a convolutional neural network (CNN) system; a faster variant of the JSMA evasion algorithm is also presented, which empirically runs twice as fast as JSMA while maintaining a comparable evasion rate.
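The JSMA attack ranks input features by a saliency map and perturbs the most influential pixels; neither it nor the paper's faster variant is reproduced here. As the simplest illustration of gradient-based evasion in general, here is a standard FGSM sketch (model, label, and epsilon are placeholder assumptions):

# Minimal FGSM sketch, illustrative of gradient-based evasion generally;
# JSMA and the paper's faster variant are different, saliency-based attacks.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input
y = torch.tensor([207])                              # placeholder true label

loss = F.cross_entropy(model(x), y)
loss.backward()

eps = 8 / 255
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean pred:", model(x).argmax(dim=1).item())
    print("adv pred:  ", model(x_adv).argmax(dim=1).item())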
Analyzing the Noise Robustness of Deep Neural Networks
TLDR
A visual analytics approach explains the primary cause of the wrong predictions introduced by adversarial examples; datapath extraction is formulated as a subset-selection problem and solved approximately via back-propagation.
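The paper's subset-selection formulation is not reproduced here, but the underlying primitive, ranking units by back-propagated influence on a prediction, can be sketched loosely as gradient-times-activation scoring (model and layer are arbitrary choices):

# Loose illustration of ranking channels by back-propagated influence
# (gradient x activation); the paper's actual subset-selection formulation
# is more involved and is not reproduced here.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

acts = {}
def save(module, inp, out):
    out.retain_grad()          # keep the gradient of this intermediate tensor
    acts["layer3"] = out
model.layer3.register_forward_hook(save)

x = torch.rand(1, 3, 224, 224)
logits = model(x)
logits[0, logits.argmax()].backward()  # back-propagate from the predicted class

a = acts["layer3"]
importance = (a * a.grad).sum(dim=(2, 3)).squeeze(0)  # per-channel influence
print("top-5 channels:", importance.topk(5).indices.tolist())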
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints.
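The core idea behind RP2-style attacks is to optimize a single perturbation over a distribution of conditions so that it survives in the physical world. The loose sketch below uses random rotations as a stand-in for varying physical conditions; masks, printability terms, and the actual RP2 objective are omitted.

# Loose sketch of the idea behind robust physical perturbations: optimize one
# perturbation over a distribution of transformations. Not the RP2
# implementation; the penalty weight and rotation range are assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)   # stand-in for a photo of a road sign
target = torch.tensor([620])     # arbitrary target class
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(30):
    opt.zero_grad()
    loss = 0.0
    for _ in range(4):  # sample a few "physical conditions" per step
        angle = float(torch.empty(1).uniform_(-25, 25))
        x_t = TF.rotate((x + delta).clamp(0, 1), angle)
        loss = loss + F.cross_entropy(model(x_t), target)
    loss = loss / 4 + 0.01 * delta.abs().mean()  # keep the perturbation small
    loss.backward()
    opt.step()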
Understanding Deep Features with Computer-Generated Imagery
TLDR
This work introduces an approach for analyzing how features generated by convolutional neural networks trained on large image datasets vary with respect to scene factors that occur in natural images; it quantifies the relative importance of these factors in the CNN responses and visualizes them using principal component analysis.
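The general recipe can be illustrated in a few lines: collect images that vary along a single scene factor (e.g., object rotation), extract CNN features, and apply PCA to the responses. The model, layer, and data below are placeholder assumptions, not the authors' pipeline.

# Sketch: extract CNN features for images varying along one scene factor,
# then use PCA to inspect the dominant directions of variation.
# (Illustrative recipe; not the authors' implementation.)
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])  # drop classifier

# Placeholder batch: e.g. renderings of one object under varying rotation.
images = torch.rand(32, 3, 224, 224)

with torch.no_grad():
    feats = feature_extractor(images).flatten(1).numpy()  # shape (32, 512)

pca = PCA(n_components=5)
proj = pca.fit_transform(feats)
print("variance explained by top components:", pca.explained_variance_ratio_)
# Plotting proj[:, 0] against the rotation angle would reveal how the
# feature space encodes that scene factor.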
VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection
TLDR
This work proposes a visual analytics system, VATLD, equipped with disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
Exploring the Landscape of Spatial Robustness
TLDR
This work thoroughly investigates the vulnerability of neural network-based classifiers to rotations and translations and finds that, in contrast to the p-norm case, first-order methods cannot reliably find worst-case perturbations.
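Because first-order methods are unreliable in this setting, spatial robustness is typically probed by direct search over the transformation parameters. A minimal worst-of-k random-search sketch follows; the rotation and translation ranges are assumptions.

# Worst-of-k random search over rotations/translations, the kind of
# zeroth-order evaluation that is more reliable than first-order methods
# for spatial perturbations. (Sketch; ranges are assumptions.)
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)   # placeholder image
y = torch.tensor([207])          # placeholder label

def worst_of_k(x, y, k=10, max_deg=30, max_shift=8):
    worst, worst_conf = x, float("inf")
    for _ in range(k):
        deg = float(torch.empty(1).uniform_(-max_deg, max_deg))
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        x_t = TF.affine(x, angle=deg, translate=[dx, dy], scale=1.0, shear=0.0)
        with torch.no_grad():
            conf = model(x_t).softmax(dim=1)[0, y].item()
        if conf < worst_conf:        # keep the transform that hurts most
            worst, worst_conf = x_t, conf
    return worst, worst_conf

x_adv, conf = worst_of_k(x, y)
print(f"lowest true-class confidence found: {conf:.3f}")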
Understanding Neural Networks Through Deep Visualization
TLDR
This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.
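Such visualizations are produced by activation maximization, i.e., optimizing an input image to excite a chosen unit while regularizers suppress high-frequency noise. A minimal sketch with two simple regularizers (L2 decay and jitter) follows; the model, target class, and hyperparameters are assumptions.

# Minimal activation-maximization sketch with simple regularizers
# (L2 decay and random jitter) of the kind such methods combine.
# Hyperparameters here are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

target_class = 207                  # arbitrary ImageNet class
x = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    opt.zero_grad()
    # Jitter: shift the image a few pixels each step to suppress noise.
    shift = torch.randint(-4, 5, (2,))
    x_jit = torch.roll(x, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))
    score = model(x_jit)[0, target_class]
    # Maximize the class score while penalizing large pixel values (L2 decay).
    loss = -score + 1e-4 * x.pow(2).sum()
    loss.backward()
    opt.step()

print("final class score:", score.item())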
CNNComparator: Comparative Analytics of Convolutional Neural Networks
TLDR
A visual analytics approach compares two snapshots of a trained CNN model taken after different numbers of training epochs, providing insight into the design and training of better CNN models.
CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization
TLDR
CNN Explainer is an interactive visualization tool designed for non-experts to learn and examine convolutional neural networks, a foundational deep learning model architecture, and is engaging and enjoyable to use.