• Corpus ID: 232092897

# A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness

@article{Abernethy2021AMB,
title={A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness},
author={Jacob D. Abernethy and Pranjal Awasthi and Satyen Kale},
journal={ArXiv},
year={2021},
volume={abs/2103.01276}
}
• Published 1 March 2021
• Computer Science
• ArXiv
Alongside the well-publicized accomplishments of deep neural networks there has emerged an apparent bug in their success on tasks such as object recognition: with deep models trained using vanilla methods, input images can be slightly corrupted in order to modify output predictions, even when these corruptions are practically invisible. This apparent lack of robustness has led researchers to propose methods that can help to prevent an adversary from having such capabilities. The state-of-the…
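The "practically invisible" corruption the abstract describes can be illustrated with a minimal sketch (this is not the paper's boosting method): a fast-gradient-sign perturbation against a toy linear classifier, with all names and values hypothetical.

```python
import numpy as np

# Toy illustration of an adversarial perturbation: for a linear model
# f(x) = sign(w . x), moving x a small step against the sign of the
# loss gradient can flip the predicted label.

def fgsm_perturb(w, x, eps):
    """Perturb x by eps (in l-infinity norm) against its current prediction."""
    y = np.sign(w @ x)        # current prediction
    grad = -y * w             # gradient of a margin loss with respect to x
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])          # w @ x = 0.2, so predicted +1
x_adv = fgsm_perturb(w, x, eps=0.2)    # small l-infinity perturbation

print(np.sign(w @ x), np.sign(w @ x_adv))  # prediction flips: 1.0 -1.0
```

Every coordinate of `x_adv` differs from `x` by at most 0.2, yet the prediction changes sign, which is the failure mode robust training aims to prevent.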
2 Citations

## Figures and Tables from this paper

Online Agnostic Multiclass Boosting
• Computer Science
ArXiv
• 2022
This work gives the first boosting algorithm for online agnostic multiclass classification and enables the construction of algorithms for statistical agnostic, online realizable, and statistical realizable multiclass boosting.
Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
• Computer Science
ArXiv
• 2022
An oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners, which reveals a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning.

## References

Showing 1–10 of 43 references
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
• Computer Science
NeurIPS
• 2019
It is demonstrated through extensive experimentation that this method consistently outperforms all existing provably $\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state-of-the-art for provable $\ell_2$-defenses.
Certified Adversarial Robustness via Randomized Smoothing
• Computer Science
ICML
• 2019
Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
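The core idea of randomized smoothing can be sketched in a few lines (an assumed simplification, not the certified procedure from the paper): the smoothed classifier predicts whichever class the base classifier outputs most often under Gaussian input noise.

```python
import numpy as np

# Sketch of a smoothed classifier g(x) = argmax_c P(f(x + noise) = c),
# estimated by Monte Carlo sampling. Names here are hypothetical.

def smoothed_predict(base_fn, x, sigma, n, rng):
    """Majority vote of the base classifier over n Gaussian-noised copies of x."""
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = np.array([base_fn(x + z) for z in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Toy 1-D base classifier: class 0 if the input is negative, else class 1.
base = lambda x: int(x[0] >= 0.0)

rng = np.random.default_rng(0)
pred = smoothed_predict(base, np.array([1.5]), sigma=0.5, n=200, rng=rng)
print(pred)  # 1: nearly all noisy samples stay on the positive side
```

The certification step in the actual method then converts the vote margin into a provable $\ell_2$ radius around `x`; that bound is omitted here.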
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
• Computer Science
NeurIPS
• 2019
This paper shows how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5% for $\epsilon_\infty=0.3$), FMNIST, and CIFAR-10 (74.7%).
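For decision stumps the robust loss is easy to reason about exactly, which hints at why boosted stumps admit efficient bounds. A sketch under an assumed setup (hypothetical names, not the paper's algorithm): a stump thresholding feature $j$ at $t$ is robustly correct only when the adversary's $\ell_\infty$ interval around $x_j$ stays on one side of $t$.

```python
# For a stump predicting +1 iff x_j >= t, an l-infinity adversary with
# budget eps controls x_j anywhere in [x_j - eps, x_j + eps]. The robust
# prediction is therefore decided by whether that interval crosses t.

def robust_stump_predict(xj, t, eps):
    if xj - eps >= t:
        return +1   # adversary cannot push the feature below the threshold
    if xj + eps < t:
        return -1   # adversary cannot push the feature above the threshold
    return 0        # both labels reachable: robust loss is incurred

print(robust_stump_predict(0.9, 0.5, 0.2))  # +1 (interval [0.7, 1.1] stays above t)
print(robust_stump_predict(0.6, 0.5, 0.2))  # 0  (interval [0.4, 0.8] crosses t)
```

Summing this worst-case outcome over an ensemble of stumps gives a tractable handle on the robust loss that the paper's bound optimizes.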
Towards Deep Learning Models Resistant to Adversarial Attacks
• Computer Science
ICLR
• 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
Game theory, on-line prediction and boosting
• Computer Science
COLT '96
• 1996
An algorithm for learning to play repeated games based on the on-line prediction methods of Littlestone and Warmuth is described, which yields a simple proof of von Neumann’s famous minmax theorem, as well as a provable method of approximately solving a game.
Averaging Weights Leads to Wider Optima and Better Generalization
• Computer Science
UAI
• 2018
It is shown that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training, and Stochastic Weight Averaging (SWA) is extremely easy to implement, improves generalization, and has almost no computational overhead.
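The SWA bookkeeping described above amounts to a running mean of parameter snapshots taken along the SGD trajectory; a minimal sketch (hypothetical names, divorced from any training loop):

```python
import numpy as np

# Incremental average of parameter snapshots, as in stochastic weight
# averaging (SWA): after n updates, avg is the mean of the n snapshots.

class SWAAverager:
    def __init__(self):
        self.avg = None
        self.n = 0

    def update(self, params):
        self.n += 1
        if self.avg is None:
            self.avg = params.copy()
        else:
            self.avg += (params - self.avg) / self.n  # incremental mean

# Pretend these are weights saved at three points along training.
snapshots = [np.array([1.0, 4.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
swa = SWAAverager()
for p in snapshots:
    swa.update(p)
print(swa.avg)  # [2. 2.], the coordinate-wise mean of the snapshots
```

In practice the averaged weights are used as the final model, which is why the overhead is negligible: only one extra copy of the parameters is stored.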
Snapshot Ensembles: Train 1, Get M for Free
• Computer Science
ICLR
• 2017
This paper proposes a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost by training a single neural network, converging to several local minima along its optimization path and saving the model parameters.
Evasion Attacks against Machine Learning at Test Time
• Computer Science
ECML/PKDD
• 2013
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks.
Gradient-based learning applied to document recognition
• Computer Science
Proc. IEEE
• 1998
This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task, and Convolutional neural networks are shown to outperform all other techniques.
Reducing Adversarially Robust Learning to Non-Robust PAC Learning
• Computer Science
NeurIPS
• 2020
A reduction is given that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust learner $\mathcal{A}$ for $\mathcal{C}$; its sample complexity depends logarithmically on the number of allowed adversarial perturbations per example.