Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines

@article{Kehoe2021DefenceAA,
  title={Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines},
  author={Aidan Kehoe and Peter Wittek and Yanbo Xue and Alejandro Pozas-Kerstjens},
  journal={Mach. Learn. Sci. Technol.},
  year={2021},
  volume={2},
  pages={045006}
}
We provide a robust defence against adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations of the input data that lead to wrong predictions. In contrast, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines as attack-resistant classifiers for discrimination tasks, and compare them against standard state-of-the-art…
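
As a rough illustration of the pipeline the abstract describes (a generative model acting as the front end of a classifier), the following is a minimal classical sketch using scikit-learn's BernoulliRBM as a stand-in for the Boltzmann machines studied in the paper; the dataset, hyperparameters and two-stage architecture are illustrative assumptions rather than the authors' setup.

    # Minimal classical sketch (not the paper's model): an RBM learns a generative
    # representation of the data and a linear classifier is trained on top of it.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # scale pixel intensities to [0, 1] for the Bernoulli units
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05,
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    print("clean test accuracy:", model.score(X_test, y_test))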

References

Showing 1-10 of 58 references

Towards Deep Learning Models Resistant to Adversarial Attacks

TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
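
The robust-optimization view summarized above is usually written as a saddle-point problem; in the standard notation (assumed here, not quoted from the paper), with data distribution D, loss L and perturbation budget epsilon,

    \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} L(\theta, x + \delta, y) \Big]

The inner maximization is approximated in practice with projected gradient descent, which is what motivates "security against a first-order adversary" as the guarantee.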

Optimal provable robustness of quantum classification via quantum hypothesis testing

TLDR
Starting from the observation that the measurements involved in quantum classification algorithms are naturally probabilistic, a fundamental link is uncovered between binary quantum hypothesis testing and provably robust quantum classification. This link leads to a tight robustness condition that constrains the amount of noise a classifier can tolerate, independent of whether the noise source is natural or adversarial.

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

TLDR
The Boundary Attack is introduced, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while remaining adversarial; it is competitive with the best gradient-based attacks on standard computer vision tasks such as ImageNet.
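
A heavily simplified sketch of the decision-based idea, assuming a hypothetical black-box predict function that returns only a label (the actual Boundary Attack uses a more careful orthogonal-step/source-step schedule than this random-search variant):

    import numpy as np

    def boundary_attack_sketch(predict, x_orig, x_adv_start, true_label,
                               n_steps=1000, step=0.1, noise=0.05, rng=None):
        # start from a point that is already misclassified and walk it toward
        # the original input, keeping only steps that remain adversarial
        rng = rng or np.random.default_rng(0)
        x_adv = x_adv_start.copy()
        for _ in range(n_steps):
            # propose a small random perturbation plus a pull toward the original
            candidate = x_adv + noise * rng.standard_normal(x_adv.shape)
            candidate = candidate + step * (x_orig - candidate)
            candidate = np.clip(candidate, 0.0, 1.0)
            if predict(candidate) != true_label:  # still adversarial: accept
                x_adv = candidate
        return x_adv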

Towards Evaluating the Robustness of Neural Networks

TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
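
For reference, the L2 variant of these attacks is commonly written as the optimization below (notation assumed here: Z are the logits, t the target class, kappa a confidence margin, and c a constant chosen by binary search):

    \min_{\delta} \; \|\delta\|_2^2 + c \cdot f(x + \delta),
    \qquad f(x') = \max\!\Big( \max_{i \ne t} Z(x')_i - Z(x')_t,\, -\kappa \Big)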

Generating Natural Adversarial Examples

TLDR
This paper proposes a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in the semantic space of a dense and continuous data representation, utilizing recent advances in generative adversarial networks.
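
A loose sketch of the search-in-latent-space idea: instead of perturbing pixels directly, perturb a latent code and decode it with a generator so that candidates stay close to the data manifold. The callables generator, classifier and encode below are hypothetical placeholders; the cited work trains a GAN together with an inverter and uses a more structured search than this random sampling.

    import numpy as np

    def natural_adversarial_sketch(generator, classifier, encode, x, label,
                                   n_samples=500, radius=0.1, rng=None):
        rng = rng or np.random.default_rng(0)
        z0 = encode(x)                   # latent code approximating the input x
        best = None
        for _ in range(n_samples):
            z = z0 + radius * rng.standard_normal(z0.shape)
            x_cand = generator(z)        # candidate decoded from the latent space
            if classifier(x_cand) != label:
                dist = np.linalg.norm(z - z0)
                if best is None or dist < best[0]:
                    best = (dist, x_cand)
        return None if best is None else best[1]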

Ensemble Adversarial Training: Attacks and Defenses

TLDR
This work finds that adversarially trained models remain vulnerable to black-box attacks in which perturbations computed on undefended models are transferred, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
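
The single-step attack referred to here prepends a small random step to the fast gradient sign step; in the usual notation (assumed here, with alpha < epsilon),

    x' = x + \alpha \, \mathrm{sign}\big(\mathcal{N}(0, I)\big),
    \qquad x^{\mathrm{adv}} = x' + (\epsilon - \alpha) \, \mathrm{sign}\big(\nabla_{x'} L(x', y)\big)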

How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models

TLDR
Gaussian processes are used to investigate adversarial examples in the framework of Bayesian inference, and deviating levels of uncertainty are found to reflect the perturbations introduced into benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian processes.

Adversarial Examples: Attacks and Defenses for Deep Learning

TLDR
The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges of adversarial examples, together with potential solutions, are discussed.

Mitigating adversarial effects through randomization

TLDR
This paper proposes to utilize randomization at inference time to mitigate adversarial effects, and uses two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input image in a random manner.
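
A sketch of the two randomization operations described above, for a batch of NCHW image tensors; the size range below mirrors typical ImageNet-scale settings but is an assumption, not necessarily the cited paper's configuration.

    import random
    import torch.nn.functional as F

    def random_resize_and_pad(x, min_size=299, max_size=331):
        # random resizing: rescale the batch to a randomly chosen spatial size
        new_size = random.randint(min_size, max_size - 1)
        x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
        # random padding: pad with zeros up to max_size, split at random
        pad_total = max_size - new_size
        pad_left = random.randint(0, pad_total)
        pad_top = random.randint(0, pad_total)
        return F.pad(x, (pad_left, pad_total - pad_left,
                         pad_top, pad_total - pad_top), value=0.0)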

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

TLDR
New transferability attacks are introduced between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees.
...