BASAR:Black-box Attack on Skeletal Action Recognition

@article{Diao2021BASARBlackboxAO,
  title={BASAR:Black-box Attack on Skeletal Action Recognition},
  author={Yunfeng Diao and Tianjia Shao and Yong-Liang Yang and Kun Zhou and He Wang},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={7593-7603}
}
  • Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang
  • Published 9 March 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Skeletal motion plays a vital role in human activity recognition, either as an independent data source or as a complement [33]. The robustness of skeleton-based activity recognizers has recently been questioned [29], [50]: these works show that the recognizers are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer. However, this white-box requirement is overly restrictive in most scenarios, and the attack is not truly threatening. In this paper, we show that… 

Citations

Adversarial Visual Robustness by Causal Intervention
TLDR
This paper provides a causal viewpoint of adversarial vulnerability: the cause is the confounder ubiquitously existing in learning, where attackers are precisely exploiting the confounding effect, and proposes to use the instrumental variable that achieves intervention without the need for confounder observation.
Defending Black-box Skeleton-based Human Activity Classifiers
TLDR
This paper investigates defending skeleton-based Human Activity Recognition, an important type of time-series data that is under-explored in defense against attacks, and names the framework Bayesian Energy-based Adversarial Training, or BEAT, which demonstrates surprising and universal effectiveness across a wide range of action classifiers and datasets under various attacks.
Adversarial Bone Length Attack on Action Recognition
TLDR
In this paper, it is shown that adversarial attacks can be performed on skeleton-based action recognition models even in a significantly low-dimensional setting without any temporal manipulation, and an interesting phenomenon is discovered: in the low-dimensional setting, adversarial training with the bone length attack not only improves adversarial robustness but also improves the classification accuracy on the original data.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II
TLDR
A literature review of adversarial attacks on deep learning in computer vision that follows up the first survey, which covered the contributions made by the computer vision community until 2018, and focuses on the advances in this area since 2018.
Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
TLDR
This review article thoroughly discusses the first-generation attacks and comprehensively covers the modern attacks and their defenses appearing in the prestigious sources of computer vision and machine learning research.

References

SHOWING 1-10 OF 69 REFERENCES
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack
TLDR
This paper proposes a new method to attack action recognizers that rely on 3D skeletal motion, and shows that adversarial attack on 3D skeletal motions, one type of time-series data, is significantly different from traditional adversarial attack problems.
SMART: Skeletal Motion Action Recognition aTtack
TLDR
The proposed method, SMART, to attack action recognizers which rely on 3D skeletal motions involves an innovative perceptual loss which ensures the imperceptibility of the attack.
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers
TLDR
This work develops a powerful untargeted adversarial attack for action recognition systems in both white-box and black-box settings, which can significantly degrade a model's performance with sparsely and imperceptibly perturbed examples.
Adversarial Attack on Skeleton-Based Human Action Recognition
TLDR
This work presents the first adversarial attack on skeleton-based action recognition with GCNs, investigates the possibility of semantically imperceptible localized attacks with CIASA, and reveals the imminent threat to spatiotemporal deep learning tasks in general.
Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
TLDR
This paper first formulates the generation of adversarial skeleton actions as a constrained optimization problem, by representing or approximating the physiological and physical constraints with mathematical formulations, and proposes to solve it by optimizing its unconstrained dual problem using ADMM.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
TLDR
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attack, analyze the existence of such attacks and propose defenses against them.
Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition
  • Yinpeng Dong, Hang Su, Jun Zhu
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This paper evaluates the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
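
The decision-based setting described above (also the setting BASAR targets) gives the attacker nothing but hard-label answers to queries. As a rough illustration only, the following minimal NumPy sketch shows the generic random-walk loop such attacks follow; the names (`predict_label`, `decision_based_attack`) and step parameters are illustrative assumptions, not taken from either paper.

```python
import numpy as np

def decision_based_attack(x_src, x_adv_init, predict_label,
                          steps=1000, alpha=0.1, sigma=0.05):
    """Minimal hard-label (decision-based) attack loop.

    x_src:         original sample the attacker wants to stay close to
    x_adv_init:    any sample already classified differently from x_src
    predict_label: black-box query returning only the predicted class
    """
    y_src = predict_label(x_src)        # source label to escape from
    x_adv = x_adv_init.copy()           # current adversarial point
    for _ in range(steps):
        # 1) random exploration step around the current adversarial point
        candidate = x_adv + sigma * np.random.randn(*x_adv.shape)
        # 2) contraction step toward the source sample to reduce distortion
        candidate = candidate + alpha * (x_src - candidate)
        # 3) accept the candidate only if it is still misclassified
        if predict_label(candidate) != y_src:
            x_adv = candidate
    return x_adv
```
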
Heuristic Black-Box Adversarial Attacks on Video Recognition Models
TLDR
A heuristic black-box adversarial attack model is proposed that generates adversarial perturbations only on selected frames and regions, which can significantly reduce the computation cost and leads to a more than 28% reduction in query numbers for the untargeted attack on both datasets.
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
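
For contrast with the hard-label setting above, ZOO-style attacks query confidence scores and estimate gradients by finite differences. The sketch below is a hedged illustration of coordinate-wise zeroth-order gradient estimation, assuming a black-box `loss_fn` built from the model's confidence scores; it is not the paper's exact algorithm, which additionally uses coordinate-wise solvers and dimensionality-reduction tricks to cut the query count.

```python
import numpy as np

def zoo_gradient_estimate(x, loss_fn, n_coords=128, h=1e-4):
    """Estimate the gradient of a black-box loss by coordinate-wise
    finite differences (zeroth-order, ZOO-style).

    x:       flattened input as a NumPy array
    loss_fn: black-box attack loss computed from the model's confidence scores
    """
    grad = np.zeros_like(x)
    # Probe only a random subset of coordinates per step to limit queries.
    coords = np.random.choice(x.size, size=min(n_coords, x.size), replace=False)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        # Symmetric difference: two queries per probed coordinate.
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```
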
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
TLDR
Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples.
...