JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System
@article{Zhang2022JPEGCL,
  title   = {JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System},
  author  = {Jiaming Zhang and Qiaomin Yi and Jitao Sang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2206.09410}
}
It has been observed that the unauthorized use of face recognition systems raises privacy problems. Adversarial perturbations offer one possible way to address this issue. A critical obstacle to deploying adversarial perturbations against unauthorized face recognition systems is that images uploaded to the web are processed by JPEG compression, which weakens the effectiveness of the perturbation. Existing JPEG compression-resistant methods fail to achieve a balance…
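The failure mode described above, JPEG compression stripping out much of the perturbation's energy, is commonly countered by optimizing the perturbation through a differentiable approximation of the compression pipeline. The sketch below illustrates that general idea only; `diff_jpeg` is a hypothetical stand-in for any differentiable JPEG surrogate, and this is not the authors' low-mid-frequency method.

```python
import torch
import torch.nn.functional as F

def jpeg_resistant_attack(model, x, y, diff_jpeg, eps=8/255, alpha=1/255, steps=40):
    """Minimal sketch: craft an L_inf-bounded perturbation optimized through a
    differentiable JPEG surrogate so it keeps fooling the model after compression.
    `diff_jpeg` is an assumed callable, not a real library function."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = torch.clamp(x + delta, 0.0, 1.0)
        # Simulate the lossy upload pipeline inside the optimization loop.
        logits = model(diff_jpeg(x_adv, quality=75))
        loss = F.cross_entropy(logits, y)        # untargeted objective
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # signed gradient ascent step
            delta.clamp_(-eps, eps)              # project back onto the eps-ball
            delta.grad.zero_()
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```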
One Citation
Enhancing the robustness of vision transformer defense against adversarial attacks based on squeeze-and-excitation module
- Computer Science · PeerJ Comput. Sci.
- 2023
The robustness of the ViT model against adversarial attacks is investigated and enhanced by introducing a ResNet-SE module that acts on the attention module of the ViT model.
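For context, a squeeze-and-excitation (SE) block of the kind referenced here can be sketched as follows; this is a generic SE block, not the cited paper's exact ResNet-SE integration into ViT attention.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Sketch of a squeeze-and-excitation block: global-average-pool the
    feature maps ("squeeze"), then reweight channels with a small gated MLP
    ("excitation"). Reduction ratio is an illustrative default."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze, then excitation weights
        return x * w.view(x.size(0), -1, 1, 1)    # channel-wise reweighting
```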
References
SHOWING 1-10 OF 34 REFERENCES
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A translation-invariant attack method to generate more transferable adversarial examples against the defense models, which fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
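A common way to realize the translation-invariant idea is to convolve the input gradient with a pre-defined kernel before each update, which approximates averaging gradients over shifted copies of the image. A rough sketch, with illustrative kernel settings:

```python
import torch
import torch.nn.functional as F

def ti_smooth_gradient(grad, kernel_size=15, sigma=3.0):
    """Sketch of the translation-invariant trick: smooth the input gradient
    with a Gaussian kernel (depthwise convolution) before taking the step."""
    coords = torch.arange(kernel_size).float() - kernel_size // 2
    g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel2d = torch.outer(g1d, g1d)
    kernel2d = kernel2d / kernel2d.sum()
    c = grad.shape[1]
    kernel = kernel2d.repeat(c, 1, 1, 1).to(grad)      # (C, 1, k, k)
    return F.conv2d(grad, kernel, padding=kernel_size // 2, groups=c)
```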
Improving Transferability of Adversarial Examples With Input Diversity
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
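The input-diversity transform is typically a random resize followed by random padding back to a fixed size, applied with some probability before each gradient computation. A sketch under assumed sizes (for example Inception-style 299 to 330):

```python
import random
import torch
import torch.nn.functional as F

def diverse_input(x, low=299, high=330, prob=0.5):
    """Sketch of an input-diversity transform for a batch x of shape
    (B, C, H, W): randomly resize, then pad back to `high` x `high`."""
    if random.random() > prob:
        return x
    rnd = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = high - rnd
    left = random.randint(0, pad)
    top = random.randint(0, pad)
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```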
Towards Deep Learning Models Resistant to Adversarial Attacks
- Computer Science · ICLR
- 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
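The first-order adversary in this robust-optimization view is usually instantiated as projected gradient descent (PGD). A minimal L_inf PGD sketch (step sizes are illustrative):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Sketch of an L_inf PGD attack: ascend the loss with signed gradient
    steps from a random start and project back onto the eps-ball each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```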
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
- Computer Science · ICLR
- 2020
NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack that generates more transferable adversarial examples against defense models; the resulting attacks exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.
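Roughly, NI-FGSM takes the gradient at a look-ahead point along the accumulated momentum, and SIM averages gradients over down-scaled copies of the input. A sketch of that combined gradient estimate, with illustrative parameters:

```python
import torch
import torch.nn.functional as F

def si_ni_gradient(model, x_adv, y, momentum, alpha, m_scales=5):
    """Sketch: Nesterov look-ahead plus scale-invariant gradient averaging,
    i.e. gradients are averaged over copies x / 2**i of the look-ahead point."""
    x_nes = x_adv + alpha * momentum                   # look-ahead along momentum
    grad = torch.zeros_like(x_adv)
    for i in range(m_scales):
        x_scaled = (x_nes / (2 ** i)).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_scaled), y)
        grad += torch.autograd.grad(loss, x_scaled)[0]
    return grad / m_scales
```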
Admix: Enhancing the Transferability of Adversarial Attacks
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
A new input-transformation-based attack method called Admix, which mixes the input image with a set of images randomly sampled from other categories, achieving significantly better transferability than existing input transformation methods under both the single-model and the ensemble-model setting.
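The Admix transform can be sketched as adding a small fraction of images sampled from other categories to the input before the gradient step, while keeping the original label; the mixing strength and number of mixes below are illustrative:

```python
import torch

def admix(x, other_images, eta=0.2, num_mix=3):
    """Sketch of the Admix transform: x has shape (B, C, H, W) and
    other_images holds samples from other categories, shape (N, C, H, W).
    Returns num_mix admixed copies of the batch, shape (num_mix, B, C, H, W)."""
    idx = torch.randint(0, other_images.size(0), (num_mix,))
    return torch.stack([x + eta * other_images[i] for i in idx.tolist()])
```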
Towards compression-resistant privacy-preserving photo sharing on social networks
- Computer Science · MobiHoc
- 2020
This paper makes the first attempt to investigate a generic compression-resistant scheme for protecting photo privacy against DNNs in the social network scenario, and proposes the Compression-Resistant Adversarial framework (ComReAdv), which produces adversarial examples robust to an unknown compression method.
Boosting Adversarial Attacks with Momentum
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
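The momentum idea amounts to accumulating a decayed running mean of normalized gradients and stepping along its sign. A compact sketch of such a momentum iterative attack (hyperparameters are illustrative):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Sketch of a momentum iterative attack: normalize each gradient,
    accumulate it into a decayed momentum buffer, and step along its sign."""
    alpha = eps / steps
    momentum = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        norm = grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12
        momentum = mu * momentum + grad / norm
        x_adv = x_adv + alpha * momentum.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```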
Transferable Adversarial Perturbations
- Computer Science · ECCV
- 2018
It is shown that maximizing the distance between natural images and their adversarial examples in intermediate feature maps improves both white-box attacks (with knowledge of the model parameters) and black-box attacks, and that smooth regularization on adversarial perturbations enables transfer across models.
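The feature-space objective referenced here can be sketched as maximizing the distance between intermediate activations of the natural image and its adversarial counterpart; the loss below assumes those feature maps have already been extracted from the network:

```python
import torch.nn.functional as F

def feature_distance_loss(feat_clean, feat_adv):
    """Sketch: a loss to *minimize* whose decrease pushes the adversarial
    example's intermediate feature maps away from the natural image's."""
    return -F.mse_loss(feat_adv, feat_clean)
```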
Delving into Transferable Adversarial Examples and Black-box Attacks
- Computer Science · ICLR
- 2017
This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and it is also the first to study the transferability of targeted adversarial examples with their target labels.
Backpropagating Linearly Improves Transferability of Adversarial Examples
- Computer Science · NeurIPS
- 2020
LinBP is introduced, a method that performs backpropagation in a more linear fashion using off-the-shelf gradient-based attacks; it outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs.
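The "more linear backpropagation" idea can be sketched with a ReLU whose forward pass is unchanged but whose backward pass lets gradients through as identity; this is a simplified rendering of the concept, not the paper's full method:

```python
import torch

class LinearBackpropReLU(torch.autograd.Function):
    """Sketch of the LinBP idea: standard ReLU forward, identity backward,
    so gradients are not masked by the ReLU during attack crafting."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity instead of ReLU's 0/1 mask
```

It would be used by replacing selected ReLU calls with `LinearBackpropReLU.apply` while computing gradients for the attack.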