BASAR: Black-box Attack on Skeletal Action Recognition
@article{Diao2021BASARBlackboxAO,
  title={BASAR: Black-box Attack on Skeletal Action Recognition},
  author={Yunfeng Diao and Tianjia Shao and Yong-Liang Yang and Kun Zhou and He Wang},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={7593-7603}
}
Skeletal motion plays a vital role in human activity recognition, either as an independent data source or as a complement [33]. The robustness of skeleton-based activity recognizers has recently been questioned [29], [50]: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer. However, this white-box requirement is overly restrictive in most scenarios, so such attacks are not truly threatening. In this paper, we show that…
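The black-box threat model studied here can be made concrete with a minimal sketch: the attacker may query the recognizer only for its top-1 label and must refine an adversarial motion from that feedback alone. The sketch below is a generic decision-based boundary walk under those assumptions; all names (`query_label`, `num_steps`, `step`) are illustrative, and this is not BASAR's actual algorithm, which additionally keeps perturbations close to the natural-motion manifold.

```python
import numpy as np

def decision_based_attack(x_clean, x_adv_init, query_label, y_true,
                          num_steps=1000, step=0.01):
    """Minimal boundary-walk sketch of a hard-label black-box attack.

    query_label(x) returns only the recognizer's predicted class;
    no gradients or confidence scores are available to the attacker.
    x_clean, x_adv_init: motions of shape (frames, joints, 3), where
    x_adv_init is any motion already classified differently from y_true.
    """
    x_adv = x_adv_init.copy()
    for _ in range(num_steps):
        # Contract toward the clean motion to shrink the perturbation,
        # then add a small random exploration step.
        candidate = x_adv + step * (x_clean - x_adv)
        candidate += step * np.linalg.norm(x_clean - x_adv) * \
            np.random.randn(*x_adv.shape) / np.sqrt(x_adv.size)
        # Accept the candidate only if it remains misclassified.
        if query_label(candidate) != y_true:
            x_adv = candidate
    return x_adv
```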
6 Citations
Adversarial Visual Robustness by Causal Intervention
- Computer Science, ArXiv
- 2021
This paper provides a causal viewpoint of adversarial vulnerability: the cause is the confounder ubiquitously existing in learning, where attackers are precisely exploiting the confounding effect, and it proposes to use an instrumental variable that achieves intervention without the need for observing the confounder.
Defending Black-box Skeleton-based Human Activity Classifiers
- Computer Science, ArXiv
- 2022
This paper investigates defenses for skeleton-based Human Activity Recognition, an important type of time-series data that is under-explored with respect to defense against attacks. The proposed framework, named Bayesian Energy-based Adversarial Training (BEAT), demonstrates surprising and universal effectiveness across a wide range of action classifiers and datasets, under various attacks.
Deep learning and RGB-D based human action, human-human and human-object interaction recognition: A survey
- Computer Science, J. Vis. Commun. Image Represent.
- 2022
Adversarial Bone Length Attack on Action Recognition
- Computer Science, ArXiv
- 2021
This paper shows that adversarial attacks can be performed on skeleton-based action recognition models even in a significantly low-dimensional setting without any temporal manipulation, and discovers an interesting phenomenon: in the low-dimensional setting, adversarial training with the bone length attack not only improves adversarial robustness but also improves the classification accuracy on the original data.
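To illustrate why bone lengths form such a low-dimensional attack surface: a motion with T frames and J joints has 3TJ coordinates, but only J-1 bone lengths shared across all frames. Below is a hedged sketch of applying a bone-length perturbation through a kinematic tree; the `parents` array and function names are illustrative, not the paper's code.

```python
import numpy as np

def bone_depth(j, parents):
    """Depth of joint j in the kinematic tree (the root has parent -1)."""
    d = 0
    while parents[j] >= 0:
        j, d = parents[j], d + 1
    return d

def perturb_bone_lengths(joints, parents, delta):
    """Re-pose one frame after adding delta[j] to the length of the
    bone ending at joint j, keeping every bone's direction fixed.

    joints: (num_joints, 3) positions; parents[j]: parent index of j.
    Joints are processed parent-first so offsets propagate to children.
    """
    out = joints.copy()
    for j in sorted(range(len(parents)), key=lambda k: bone_depth(k, parents)):
        p = parents[j]
        if p < 0:
            continue
        bone = joints[j] - joints[p]
        length = np.linalg.norm(bone)
        if length > 0:
            out[j] = out[p] + bone * (length + delta[j]) / length
    return out
```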
Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II
- Computer Science, ArXiv
- 2021
Following an earlier literature review of the computer vision community's contributions to adversarial attacks on deep learning up to 2018, this survey focuses on the advances in the area since 2018.
Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
- Computer Science, IEEE Access
- 2021
This review article thoroughly discusses the first-generation attacks and comprehensively covers the modern attacks and their defenses appearing in the prestigious sources of computer vision and machine learning research.
References
Showing 1-10 of 69 references
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This paper proposes a new method to attack action recognizers that rely on 3D skeletal motion, and shows that adversarial attack on 3D skeletal motions, one type of time-series data, is significantly different from traditional adversarial attack problems.
SMART: Skeletal Motion Action Recognition aTtack
- Computer Science, ArXiv
- 2019
The proposed method, SMART, attacks action recognizers that rely on 3D skeletal motions and involves an innovative perceptual loss that ensures the imperceptibility of the attack.
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers
- Computer Science, ArXiv
- 2018
This work develops a powerful untargeted adversarial attack for action recognition systems in both white-box and black-box settings, which can significantly degrade a model's performance with sparsely and imperceptibly perturbed examples.
Adversarial Attack on Skeleton-Based Human Action Recognition
- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2022
This work presents the first adversarial attack on skeleton-based action recognition with GCNs, investigates the possibility of semantically imperceptible localized attacks with CIASA, and reveals the imminent threat to spatiotemporal deep learning tasks in general.
Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
- Computer Science, ArXiv
- 2020
This paper first formulates the generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations, and then proposes to solve it by optimizing its unconstrained dual problem using ADMM.
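In generic form (the paper's exact objective and constraint set may differ; this is a standard template for such formulations), the attack minimizes a perceptual distance subject to membership in a feasible set, and ADMM alternates between an unconstrained update and a projection onto that set:

```latex
% Generic constrained attack (a standard template, not necessarily the
% paper's exact formulation): C is the set of physiologically and
% physically plausible skeleton motions.
\min_{\tilde{x}} \;\; \|\tilde{x} - x\|^{2}
   + \lambda\,\mathcal{L}_{\mathrm{adv}}\!\big(f(\tilde{x}),\, y\big)
   \quad \text{s.t.} \quad \tilde{x} \in \mathcal{C}.
% ADMM splits the problem with an auxiliary variable z and scaled dual u:
\begin{aligned}
\tilde{x}^{k+1} &= \arg\min_{\tilde{x}} \;
   \|\tilde{x} - x\|^{2}
   + \lambda\,\mathcal{L}_{\mathrm{adv}}\!\big(f(\tilde{x}),\, y\big)
   + \tfrac{\rho}{2}\,\|\tilde{x} - z^{k} + u^{k}\|^{2}, \\
z^{k+1} &= \Pi_{\mathcal{C}}\!\big(\tilde{x}^{k+1} + u^{k}\big), \\
u^{k+1} &= u^{k} + \tilde{x}^{k+1} - z^{k+1},
\end{aligned}
```

where \(\Pi_{\mathcal{C}}\) denotes projection onto the feasible set.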
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- Computer Science, IEEE Access
- 2018
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This paper evaluates the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
Heuristic Black-Box Adversarial Attacks on Video Recognition Models
- Computer Science, AAAI
- 2020
A heuristic black-box adversarial attack model is proposed that generates adversarial perturbations only on selected frames and regions, which can significantly reduce the computation cost and leads to more than a 28% reduction in query numbers for the untargeted attack on both datasets.
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- Computer Science, AISec@CCS
- 2017
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
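The zeroth-order idea is easy to sketch: estimate coordinate-wise gradients of an attack loss purely from confidence-score queries using symmetric finite differences, then take stochastic coordinate descent steps. Function names and hyperparameters below are illustrative, not ZOO's exact implementation.

```python
import numpy as np

def zoo_coordinate_gradient(loss, x, idx, h=1e-4):
    """Symmetric-difference estimate of d loss / d x[idx].

    loss(x) is computed from the target model's confidence scores only,
    so no backpropagation through the model is needed.
    """
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (loss(x + e) - loss(x - e)) / (2.0 * h)

def zoo_step(loss, x, lr=0.01, coords_per_step=128):
    """One stochastic coordinate descent step of a ZOO-style attack."""
    idxs = np.random.choice(x.size, size=coords_per_step, replace=False)
    g = np.zeros_like(x)
    for i in idxs:
        g.flat[i] = zoo_coordinate_gradient(loss, x, i)
    return x - lr * g
```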
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
- Computer Science, AAAI
- 2019
Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples.
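AutoZOOM's query savings can be sketched with two hedged ingredients: a scaled random-direction gradient estimate that costs one extra query per direction instead of two per coordinate, applied in a low-dimensional latent space that a decoder maps back to input space. The `decode` function below is a stand-in for the paper's trained autoencoder decoder, and the scaling is only indicative of the paper's analysis.

```python
import numpy as np

def random_direction_gradient(loss, z, decode, beta=1e-2, num_dirs=4):
    """Averaged random-direction gradient estimate in latent space.

    loss(x) is computed from the black-box model's scores on input x;
    decode maps a latent code z back to the model's input space.
    Each direction u costs one query: (loss(z + beta*u) - loss(z)) / beta * u.
    """
    base = loss(decode(z))
    grad = np.zeros_like(z)
    for _ in range(num_dirs):
        u = np.random.randn(*z.shape)
        u /= np.linalg.norm(u)
        grad += (loss(decode(z + beta * u)) - base) / beta * u
    # Dimension-dependent scaling, averaged over the sampled directions.
    return grad * (z.size / num_dirs)
```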