• Corpus ID: 244708999

The Geometry of Adversarial Training in Binary Classification

@article{Bungert2021TheGO,
title={The Geometry of Adversarial Training in Binary Classification},
author={Leon Bungert and Nicol{\'a}s Garc{\'i}a Trillos and Ryan W. Murray},
journal={ArXiv},
year={2021},
volume={abs/2111.13613}
}
• Published 26 November 2021
• Mathematics, Computer Science
• ArXiv
We establish an equivalence between a family of adversarial training problems for non-parametric binary classification and a family of regularized risk minimization problems in which the regularizer is a nonlocal perimeter functional. The resulting regularized risk minimization problems admit exact convex relaxations of the type $L^1$ + (nonlocal) TV, a form frequently studied in image analysis and graph-based learning. This reformulation reveals a rich geometric structure, which in turn allows…
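The equivalence described in the abstract can be sketched informally as follows; the notation here is an illustrative assumption, not taken verbatim from the paper.

```latex
% Adversarial training for a binary classifier given by a decision set A,
% where the adversary may perturb each input within distance eps:
\min_{A} \; \mathbb{E}_{(x,y)}
  \Big[ \sup_{\|x' - x\| \le \varepsilon} \ell\big(\mathbf{1}_A(x'), y\big) \Big].
% The claimed equivalence rewrites this as a regularized risk minimization
%   \min_{A} \; R(A) + \varepsilon \, \mathrm{Per}_\varepsilon(A),
% where R is the (unperturbed) classification risk and Per_eps is a
% nonlocal perimeter functional penalizing the decision boundary of A.
```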
3 Citations


Adversarial Classification: Necessary conditions and geometric flows
• Computer Science, Mathematics
ArXiv
• 2020
A version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance $\varepsilon$ is studied, using tools from variational analysis to derive a geometric evolution equation which can be used to track the change in classification boundaries as $\varepsilon$ varies.
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
• Computer Science
ArXiv
• 2022
A framework called probabilistic robustness is proposed that bridges the gap between the accurate, yet brittle average case and the robust, yet conservative worst case by enforcing robustness to most rather than to all perturbations.
Eikonal depth: an optimal control approach to statistical depths
• Mathematics
ArXiv
• 2022
A new type of globally defined statistical depth is proposed, based upon control theory and eikonal equations, which measures the smallest amount of probability density that has to be passed through along a path to points outside the support of the distribution (for example, spatial infinity).

References

SHOWING 1-10 OF 65 REFERENCES
Robustness via Curvature Regularization, and Vice Versa
• Computer Science
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
It is shown in particular that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs, yielding a drastically more "linear" behaviour of the network.
Adversarial Classification: Necessary conditions and geometric flows
• Computer Science, Mathematics
ArXiv
• 2020
A version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance $\varepsilon$ is studied, using tools from variational analysis to derive a geometric evolution equation which can be used to track the change in classification boundaries as $\varepsilon$ varies.
The Many Faces of Adversarial Risk
The technical tools derive from optimal transport, robust statistics, functional analysis, and game theory; the contributions include generalizing Strassen's theorem to the unbalanced optimal transport setting, with applications to adversarial classification with unequal priors, and proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm.
Lower Bounds on Adversarial Robustness from Optimal Transport
• Computer Science
NeurIPS
• 2019
While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use…
Improved robustness to adversarial examples using Lipschitz regularization of the loss
• Computer Science, Mathematics
ArXiv
• 2018
This work augments adversarial training (AT) with worst-case adversarial training (WCAT), which improves adversarial robustness by 11% over the current state-of-the-art result in the $\ell_2$ norm on CIFAR-10 and obtains verifiable average-case and worst-case robustness guarantees.
• Computer Science
2015 IEEE International Conference on Data Mining
• 2015
A family of gradient regularization methods that effectively penalize the gradient of loss function w.r.t. inputs are developed and achieved the best accuracy on MNIST data (without data augmentation) and competitive performance on CIFAR-10 data.
Towards Deep Learning Models Resistant to Adversarial Attacks
• Computer Science
ICLR
• 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
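The robust-optimization view described above (a min over model parameters of a max over bounded input perturbations) can be sketched with a projected gradient ascent inner loop. This is a minimal illustration using logistic regression with an $\ell_\infty$ constraint; the function names and hyperparameters are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def loss(w, x, y):
    # logistic loss for a single example, with label y in {-1, +1}
    return np.log1p(np.exp(-y * np.dot(w, x)))

def grad_x(w, x, y):
    # gradient of the logistic loss with respect to the input x
    s = -y / (1.0 + np.exp(y * np.dot(w, x)))
    return s * w

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=10):
    # inner maximization: ascend the loss by signed gradient steps,
    # projecting back into the l-infinity ball of radius eps around x
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# usage: the adversarial loss upper-bounds the clean loss
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1
x_adv = pgd_attack(w, x, y)
assert loss(w, x_adv, y) >= loss(w, x, y)
```

In full adversarial training, the outer minimization then updates the model parameters on the loss evaluated at these worst-case inputs.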
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
• Computer Science
AAAI
• 2018
It is demonstrated that regularizing input gradients makes them more naturally interpretable as rationales for model predictions, and also exhibits robustness to transferred adversarial examples generated to fool all of the other models.
Improving Gradient Regularization using Complex-Valued Neural Networks
• Computer Science
ICML
• 2021
Experimental results show that gradient-regularized CVNNs surpass real-valued neural networks with comparable storage and computational complexity, and that the properties of the CVNN parameter derivatives resist the decrease in performance on the standard objective caused by competition with the gradient regularization objective.
CLIP: Cheap Lipschitz Training of Neural Networks
• Computer Science
SSVM
• 2021
A variational regularization method named CLIP is investigated for controlling the Lipschitz constant of a neural network; it can easily be integrated into the training procedure and is compared with a weight regularization approach.