RoBIC: A Benchmark Suite For Assessing Classifiers Robustness
@article{Maho2021RoBICAB,
  title   = {RoBIC: A Benchmark Suite For Assessing Classifiers Robustness},
  author  = {Thibault Maho and Beno{\^i}t Bonnet and Teddy Furon and Erwan Le Merrer},
  journal = {2021 IEEE International Conference on Image Processing (ICIP)},
  year    = {2021},
  pages   = {3612-3616}
}
Many defenses have emerged with the development of adversarial attacks, and models must be objectively evaluated accordingly. This paper systematically tackles this concern by proposing a new parameter-free benchmark we coin RoBIC. RoBIC fairly evaluates the robustness of image classifiers using a new half-distortion measure. It gauges the robustness of a network against white-box and black-box attacks, independently of its accuracy. RoBIC is faster than the other available benchmarks. We present the…
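The half-distortion measure is not fully specified in this excerpt. Assuming it is the perturbation magnitude at which the attack fools half of the test images, i.e., the median of the per-image minimal adversarial distortions, a minimal sketch could read:

```python
import numpy as np

def half_distortion(min_distortions):
    """Hypothetical half-distortion: the distortion level D such that an
    attack succeeds on 50% of images, taken here as the median of the
    per-image minimal adversarial perturbation norms."""
    return float(np.median(np.asarray(min_distortions, dtype=float)))
```

With this reading, a more robust classifier yields larger per-image minimal distortions and hence a larger half-distortion, independently of its clean accuracy.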
One Citation
RobustBench: a standardized adversarial robustness benchmark
- Computer Science · NeurIPS Datasets and Benchmarks
- 2021
This work evaluates robustness of models for their benchmark with AutoAttack, an ensemble of white- and black-box attacks which was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications.
References
Showing 1-10 of 37 references
Benchmarking Adversarial Robustness on Image Classification
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
A comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks is established and several important findings are drawn that can provide insights for future research.
RobustBench: a standardized adversarial robustness benchmark
- Computer Science · NeurIPS Datasets and Benchmarks
- 2021
This work evaluates robustness of models for their benchmark with AutoAttack, an ensemble of white- and black-box attacks which was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications.
SurFree: a fast surrogate-free black-box attack
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
SurFree is presented, a geometrical approach that achieves a drastic reduction in the number of queries in the hardest setup, black-box decision-based attacks (only the top-1 label is available), and exhibits a faster distortion decay under low query amounts while remaining competitive at higher query budgets.
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work proposes an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓp norms and theoretically shows that the algorithm converges to the minimal perturbation when the curvature of the decision boundary is bounded.
Towards Evaluating the Robustness of Neural Networks
- Computer Science · 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Walking on the Edge: Fast, Low-Distortion Adversarial Examples
- Computer Science · IEEE Transactions on Information Forensics and Security
- 2021
This work argues that speed is important as well, especially when considering that fast attacks are required by adversarial training, and introduces a new attack called boundary projection (BP) that improves upon existing methods by a large margin.
Towards Deep Learning Models Resistant to Adversarial Attacks
- Computer Science · ICLR
- 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
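The robust-optimization view in this entry is usually operationalized with projected gradient descent (PGD), the first-order adversary referred to above. A minimal, framework-free sketch of an L∞ PGD loop, assuming a hypothetical `grad_fn` callback that returns the loss gradient with respect to the input:

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=0.03, alpha=0.01, steps=10):
    """L_inf PGD sketch: iterated signed-gradient ascent, projected back
    onto the eps-ball around x and clipped to the valid pixel range.
    `grad_fn` (an assumed callback) returns d(loss)/d(input)."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # signed-gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel values
    return x_adv
```

Adversarial training then minimizes the loss on `pgd_linf(x, ...)` instead of on the clean input `x`.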
QEBA: Query-Efficient Boundary-Based Blackbox Attack
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper proposes a Query-Efficient Boundary-based blackbox Attack (QEBA) based only on the model's final prediction labels, theoretically shows why previous boundary-based attacks with gradient estimation over the whole gradient space are query-inefficient, and provides an optimality analysis for dimension-reduction-based gradient estimation.
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers; it outperforms recent methods in computing adversarial perturbations and making classifiers more robust.
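For a linear binary classifier, the minimal perturbation DeepFool computes has a closed form: the projection of the input onto the decision hyperplane, slightly overshot to cross it. A toy sketch of that base case (the full method handles multi-class deep networks by iterating a linearized version of this step):

```python
import numpy as np

def deepfool_linear_binary(x, w, b, overshoot=0.02):
    """Closed-form minimal perturbation for f(x) = w.x + b:
    project x onto the hyperplane w.x + b = 0, then overshoot
    slightly so the predicted label actually flips."""
    f = float(np.dot(w, x) + b)
    r = -f * w / np.dot(w, w)      # minimal L2 step to the boundary
    return x + (1.0 + overshoot) * r
```

The per-image norm of such minimal perturbations is exactly the kind of robustness statistic a distortion-based benchmark aggregates.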
What if Adversarial Samples were Digital Images?
- Computer Science · IH&MMSec
- 2020
A new quantization mechanism is presented which preserves the adversariality of the perturbation and its application outcomes to a new look at the lessons learnt in adversarial sampling.