Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing

@article{Puranik2021GeneralizedLR,
  title={Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing},
  author={Bhagyashree Puranik and Upamanyu Madhow and Ramtin Pedarsani},
  journal={IEEE Transactions on Signal Processing},
  year={2021},
  volume={70},
  pages={4124-4139}
}
Machine learning models are known to be susceptible to adversarial attacks, which can cause misclassification by introducing small but well-designed perturbations. In this paper, we consider a classical hypothesis testing problem in order to develop fundamental insight into defending against such adversarial perturbations. We interpret an adversarial perturbation as a nuisance parameter, and propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting…
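
A minimal mathematical sketch of this formulation, assuming (for illustration only) a binary Gaussian setting with an ℓ∞-bounded perturbation; the symbols ($\mu$, $\epsilon$, $\sigma^2$, $\tau$) are illustrative and not necessarily the paper's notation:

$$H_0:\; y = -\mu + e + n, \qquad H_1:\; y = +\mu + e + n, \qquad \|e\|_\infty \le \epsilon, \quad n \sim \mathcal{N}(0,\sigma^2 I),$$

$$T_{\mathrm{GLRT}}(y) \;=\; \frac{\max_{\|e\|_\infty \le \epsilon} p(y \mid H_1, e)}{\max_{\|e\|_\infty \le \epsilon} p(y \mid H_0, e)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau,$$

where the perturbation $e$ is the nuisance parameter maximized out under each hypothesis and $\tau$ is the decision threshold.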

References

Adversarially Robust Classification Based on GLRT

This paper evaluates the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under ℓ∞ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known.
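
For this Gaussian, ℓ∞-bounded setting, the maximization over the nuisance perturbation has a closed form: the most favorable perturbation cancels up to ε of each residual coordinate, so the GLRT reduces to comparing soft-thresholded residual energies. The sketch below is a minimal numerical illustration under these assumptions (variable names are illustrative, not the authors' code):

import numpy as np

def glrt_decide(y, mu, eps):
    """Minimal GLRT sketch for antipodal signaling (+/- mu) in white Gaussian
    noise with an l_inf-bounded nuisance perturbation (illustrative only)."""
    r1 = y - mu   # residual under H1 (mean +mu)
    r0 = y + mu   # residual under H0 (mean -mu)
    # Maximizing the Gaussian likelihood over ||e||_inf <= eps shrinks each
    # residual coordinate toward zero by at most eps (soft-thresholding).
    s1 = np.sign(r1) * np.maximum(np.abs(r1) - eps, 0.0)
    s0 = np.sign(r0) * np.maximum(np.abs(r0) - eps, 0.0)
    # Log-GLRT up to the positive factor 1/(2*sigma^2): decide H1 if positive.
    return 1 if np.sum(s0**2) - np.sum(s1**2) > 0 else 0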

Lower Bounds on Adversarial Robustness from Optimal Transport

While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use…

Provable defenses against adversarial examples via the convex outer adversarial polytope

A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations; it is shown that the dual of the underlying linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
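
The relaxation at the heart of this approach can be illustrated per neuron: an unstable ReLU with pre-activation bounds l < 0 < u is replaced by its convex hull, a triangle described by three linear constraints. The sketch below encodes those constraints (a hedged illustration with assumed names, not the paper's implementation):

def relu_outer_bounds(l, u):
    """Linear constraints on z_hat = relu(z) for l <= z <= u, written as
    (relation, a, b) meaning z_hat <relation> a*z + b (illustrative sketch)."""
    if u <= 0:                      # neuron always inactive: z_hat = 0
        return [("eq", 0.0, 0.0)]
    if l >= 0:                      # neuron always active: z_hat = z
        return [("eq", 1.0, 0.0)]
    slope = u / (u - l)             # upper face of the triangle relaxation
    return [
        ("ge", 0.0, 0.0),           # z_hat >= 0
        ("ge", 1.0, 0.0),           # z_hat >= z
        ("le", slope, -slope * l),  # z_hat <= u * (z - l) / (u - l)
    ]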

Provable tradeoffs in adversarially robust classification

The results reveal tradeoffs between standard and robust accuracy that grow when the data are imbalanced; the analysis develops and leverages new tools, including recent breakthroughs from probability theory on robust isoperimetry, which, to the authors' knowledge, have not previously been used in this area.

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
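
The robust-optimization view in this work leads to solving the inner maximization with projected gradient descent (PGD). Below is a minimal PyTorch-style sketch of an ℓ∞ PGD attack under that framing (function and parameter names are illustrative, not the authors' reference code):

import torch

def pgd_linf(model, loss_fn, x, y, eps, alpha, steps):
    """Minimal l_inf PGD sketch: random start, signed gradient ascent, projection."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascent step on the loss, then projection back onto the l_inf ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()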

Scaling provable adversarial defenses

This paper presents a technique for extending provably robust training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further reduce robust error through cascade models.

Sparsity-based Defense Against Adversarial Attacks on Linear Classifiers

This is the first work to show that sparsity provides a theoretically rigorous framework for defense against adversarial attacks; it demonstrates the efficacy of a sparsifying front end via an ensemble-averaged analysis and experimental results on the MNIST handwritten digit database.
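
A sparsifying front end of the kind analyzed in this line of work can be sketched as projecting the input onto an orthonormal basis, retaining only the k largest-magnitude coefficients, and reconstructing before the linear classifier is applied. The numpy sketch below makes that concrete under the assumed orthonormality; names are illustrative, not the authors' code:

import numpy as np

def sparsify_front_end(x, basis, k):
    """Keep the k largest-magnitude coefficients of x in an orthonormal basis
    (illustrative sketch of a sparsifying front end)."""
    coeffs = basis.T @ x                    # analysis: project onto the basis
    keep = np.argsort(np.abs(coeffs))[-k:]  # indices of the k largest coefficients
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return basis @ sparse                   # synthesis: reconstruct the input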

Adversarial examples from computational constraints

This work proves that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples; it also gives an exponential separation between classical learning and robust learning in the statistical query model.

Certified Defenses against Adversarial Examples

This work proposes a method based on a semidefinite relaxation that outputs a certificate that, for a given network and test input, no attack can force the error to exceed a certain value; the relaxation also provides an adaptive regularizer that encourages robustness against all attacks.

Semidefinite relaxations for certifying robustness to adversarial examples

A new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks is proposed; it is shown to be tighter than previous relaxations and to produce meaningful robustness guarantees on three different foreign networks whose training objectives are agnostic to the relaxation.