Corpus ID: 237532614

Harnessing Perceptual Adversarial Patches for Crowd Counting

@article{Liu2021HarnessingPA,
  title={Harnessing Perceptual Adversarial Patches for Crowd Counting},
  author={Shunchang Liu and Jiakai Wang and Aishan Liu and Yingwei Li and Yijie Gao and Xianglong Liu and Dacheng Tao},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.07986}
}
Crowd counting, which estimates the number of people in safety-critical scenes, has been shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Though harmful, adversarial examples are also valuable for assessing and better understanding model robustness. However, existing adversarial example generation methods in crowd counting scenarios lack strong transferability among different black-box models. Motivated by the fact…
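
To make the attack setting concrete, below is a minimal, generic patch-optimization sketch against a density-map-based crowd counter. It is not the paper's Perceptual Adversarial Patch method; the model interface, patch size, fixed placement, and loss are illustrative assumptions.

```python
# Generic adversarial-patch loop against a crowd counter (illustrative only;
# not the paper's method). Assumes `model(images)` returns a (B, 1, H', W')
# density map whose spatial sum approximates the crowd count.
import torch

def optimize_patch(model, images, patch_size=64, steps=200, lr=0.01):
    """Optimize one shared patch that distorts the predicted crowd count."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    with torch.no_grad():
        clean_count = model(images).sum(dim=(1, 2, 3))          # per-image clean count
    for _ in range(steps):
        x = images.clone()
        x[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)   # paste patch at a fixed corner
        adv_count = model(x).sum(dim=(1, 2, 3))
        loss = -(adv_count - clean_count).abs().mean()          # maximize the counting error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```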


References

SHOWING 1-10 OF 38 REFERENCES
Perceptual-Sensitive GAN for Generating Adversarial Patches
TLDR: This paper proposes a perceptual-sensitive generative adversarial network (PS-GAN) that simultaneously enhances the visual fidelity and the attacking ability of the adversarial patch, treating patch generation as a patch-to-patch translation via an adversarial process.
Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting
TLDR: This paper investigates the effectiveness of existing attack strategies on crowd-counting networks and introduces a simple yet effective pixel-wise detection mechanism that significantly outperforms heuristic and uncertainty-based strategies.
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Adversarial examples generated with RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including changing viewpoints.
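
A hedged sketch of the robustness idea behind RP2: optimize a masked perturbation so its targeted effect survives a distribution of physical conditions, approximated here by random rotations and brightness changes. The transform set, mask, learning rate, and classifier `model` are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of optimizing a masked perturbation that stays effective under
# varying conditions (illustrative transforms, mask, and step size).
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def rp2_like_step(model, x, target, delta, mask, lr=0.01, n_transforms=8):
    """One targeted update of `delta`, averaging the loss over random transforms."""
    transform = T.Compose([
        T.RandomRotation(15),            # simulate viewpoint changes
        T.ColorJitter(brightness=0.3),   # simulate lighting changes
    ])
    delta = delta.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_transforms):
        x_adv = transform((x + mask * delta).clamp(0, 1))
        loss = loss + F.cross_entropy(model(x_adv), target)     # targeted: drive toward `target`
    (loss / n_transforms).backward()
    return (delta - lr * delta.grad).detach()                   # gradient-descent step on delta
```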
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
TLDR: A translation-invariant attack method that generates more transferable adversarial examples against defense models, fooling eight state-of-the-art defenses at an 82% average success rate based only on transferability and thereby demonstrating the insecurity of current defense techniques.
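
The mechanism behind the translation-invariant attack is to smooth the input gradient with a pre-defined kernel before the sign step, which approximates attacking an ensemble of translated copies of the image. A minimal sketch, assuming a PyTorch classifier `model`; the Gaussian kernel size and sigma are illustrative, not the paper's settings.

```python
# Sketch of a translation-invariant FGSM step: convolve the gradient with a
# Gaussian kernel (illustrative size/sigma) before taking the sign step.
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = (k2d / k2d.sum()).reshape(1, 1, size, size)
    return k2d.repeat(3, 1, 1, 1)                    # one copy per RGB channel (depthwise)

def ti_fgsm_step(model, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    kernel = gaussian_kernel().to(grad.device)
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)  # smooth the gradient
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()
```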
Adversarial Examples Improve Image Recognition
TLDR: This work proposes AdvProp, an enhanced adversarial training scheme that treats adversarial examples as additional training examples to prevent overfitting, and shows that AdvProp improves a wide range of models on various image recognition tasks, with larger gains for bigger models.
Robust and Accurate Object Detection via Adversarial Learning
TLDR: This work augments the fine-tuning stage of object detectors with adversarial examples, which can be viewed as a model-dependent data augmentation. It dynamically selects the stronger adversarial images sourced from a detector's classification and localization branches and evolves with the detector so that the augmentation policy stays current and relevant.
Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection
TLDR: The goal is to generate a patch that successfully hides a person from a person detector; this work is the first to attempt such an attack on targets with a high level of intra-class variety, such as persons.
Explaining and Harnessing Adversarial Examples
TLDR: Argues that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supports this view with new quantitative results, and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
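
The linearity view in this paper motivates its one-step fast gradient sign method (FGSM). A minimal sketch, assuming a PyTorch classifier `model`, inputs `x` in [0, 1], labels `y`, and an illustrative L-infinity budget.

```python
# One-step FGSM: perturb the input along the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + epsilon * grad.sign()        # single linear step in input space
    return x_adv.clamp(0.0, 1.0).detach()    # stay in the valid pixel range
```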
LaVAN: Localized and Visible Adversarial Noise
TLDR: Shows that it is possible to generate localized adversarial noise that covers only 2% of the pixels in an image, none of them over the main object, is transferable across images and locations, and fools a state-of-the-art Inception v3 model with very high success rates.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR: This work studies the adversarial robustness of neural networks through the lens of robust optimization and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
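
The first-order adversary in this robust-optimization view is typically instantiated as projected gradient descent (PGD). A minimal sketch, assuming a PyTorch classifier `model`; the budget, step size, and iteration count are illustrative, not the paper's exact settings.

```python
# PGD: iterated sign-gradient steps projected back onto the L-infinity ball.
import torch
import torch.nn.functional as F

def pgd(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project onto the epsilon-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```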