ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints

@article{Guesmi2022ROOMAM,
  title={ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints},
  author={Amira Guesmi and Khaled N. Khasawneh and Nael Abu-Ghazaleh and Ihsen Alouani},
  journal={2022 International Joint Conference on Neural Networks (IJCNN)},
  year={2022},
  pages={1-10}
}
Advances in deep learning have enabled a wide range of promising applications. However, these systems are vulnerable to adversarial attacks: adversarially crafted perturbations to their inputs can cause them to misclassify. Most state-of-the-art adversarial attack generation algorithms focus primarily on controlling the noise magnitude to make it undetectable. Execution time is a secondary consideration for these attacks, and the underlying assumption is that there are no time constraints…
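To make the abstract's framing concrete, below is a minimal, hypothetical sketch (not the ROOM method) of the gradient-based, magnitude-constrained perturbation generation that most state-of-the-art attacks perform; the model, the epsilon budget, and the PyTorch usage are illustrative assumptions.

# Hypothetical illustration (not the ROOM attack): a one-step FGSM-style
# perturbation bounded by an L-infinity budget epsilon, i.e. the "noise
# magnitude" control the abstract refers to. `model` is any differentiable
# classifier taking inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Return x + epsilon * sign(grad_x loss), clipped to the valid input range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A single gradient-sign step keeps the perturbation inside the epsilon ball.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()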

References


Adversarial Examples: Attacks and Defenses for Deep Learning

TLDR
The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges in adversarial examples and their potential solutions are discussed.

Real-Time Adversarial Attacks

TLDR
A real-time adversarial attack scheme for machine learning models with streaming inputs is proposed, where an attacker is only able to observe past data points and add perturbations to the remaining (unobserved) data points of the input.

Towards Deep Learning Models Resistant to Adversarial Attacks

TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
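For context on the first-order adversary mentioned above, here is a minimal projected gradient descent (PGD) sketch under assumed hyperparameters (epsilon, step size alpha, step count); it illustrates the robust-optimization inner maximization, not the authors' code.

# Illustrative PGD inner maximization: repeated gradient-sign steps projected
# back onto the epsilon ball around the clean input x. Hyperparameters are
# assumed values, not the paper's.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then projection onto the L-infinity ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()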

Adversarial examples in the physical world

TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.

You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

TLDR
It is shown that adversarial training can be cast as a discrete-time differential game, and the proposed algorithm YOPO (You Only Propagate Once) achieves comparable defense accuracy with approximately 1/5 to 1/4 of the GPU time of the projected gradient descent (PGD) algorithm.

Enabling Fast and Universal Audio Adversarial Attack Using Generative Model

TLDR
This paper proposes the fast audio adversarial perturbation generator (FAPG), which uses a generative model to produce adversarial perturbations for an audio input in a single forward pass, drastically improving perturbation generation speed.
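As a rough illustration of the single-forward-pass idea described above (not the FAPG architecture itself), the following sketch shows a small generator network that maps a raw waveform to a bounded perturbation; the layer sizes and epsilon budget are assumptions.

# Hypothetical sketch: a generator that emits a bounded perturbation for a raw
# audio waveform in one forward pass, instead of running an iterative
# optimization per input.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    def __init__(self, hidden=256, epsilon=0.01):
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
            nn.Tanh(),  # bounds the raw output to [-1, 1]
        )

    def forward(self, waveform):                   # waveform: (batch, 1, samples)
        delta = self.epsilon * self.net(waveform)  # scale into the noise budget
        return waveform + delta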

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
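The temperature-softened training step behind defensive distillation can be sketched as below; the temperature value and loss formulation are illustrative assumptions, not the paper's exact setup.

# Rough sketch of distillation with a softmax temperature T (assumed value):
# the student is trained on the teacher's softened class probabilities rather
# than on hard labels, which smooths the model's output surface.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # Cross-entropy against soft targets; smoother outputs flatten the
    # gradients an attacker would follow.
    return -(soft_targets * log_probs).sum(dim=1).mean()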

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

TLDR
New transferability attacks are introduced between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees.

AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations

TLDR
AdvPulse, a systematic approach to generating subsecond audio adversarial perturbations, is proposed; it alters the recognition results of streaming audio inputs in a targeted and synchronization-free manner by exploiting a penalty-based universal adversarial perturbation generation algorithm and incorporating the varying time delay into the optimization process.
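A hypothetical sketch of the synchronization-free idea described above (not the AdvPulse implementation): a short universal perturbation is optimized while its insertion offset is randomized at each step, so the pulse works regardless of playback timing. The model, loader, and hyperparameters are assumptions, and the paper's magnitude penalty is omitted.

# Illustrative loop only: optimize a universal pulse over a dataset while
# randomizing where it is inserted in each clip. Assumes every clip is longer
# than the pulse.
import torch
import torch.nn.functional as F

def train_universal_pulse(model, loader, target_class, pulse_len=8000,
                          steps=500, lr=1e-3):
    delta = torch.zeros(pulse_len, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _, (audio, _) in zip(range(steps), loader):   # audio: (batch, samples)
        # Random insertion point makes the pulse effective at any time delay.
        offset = torch.randint(0, audio.shape[1] - pulse_len + 1, (1,)).item()
        padded = F.pad(delta, (offset, audio.shape[1] - pulse_len - offset))
        logits = model(audio + padded)
        targets = torch.full((audio.shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(logits, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()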

Towards Evaluating the Robustness of Neural Networks

TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.