JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System

Jiaming Zhang, Qiaomin Yi, Jitao Sang
It has been observed that the unauthorized use of face recognition systems raises privacy concerns. Adversarial perturbations offer one possible solution to this issue. A critical obstacle to deploying adversarial perturbations against unauthorized face recognition systems is that images uploaded to the web are processed by JPEG compression, which weakens the effectiveness of the perturbation. Existing JPEG compression-resistant methods fail to achieve a balance…
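For intuition, here is a minimal sketch of the compression step the abstract refers to: an adversarial image tensor is encoded as JPEG and decoded again, the round trip that tends to erase high-frequency perturbations. The helper name jpeg_roundtrip and the quality setting are illustrative assumptions, not details from the paper.

```python
import io

from PIL import Image
from torchvision import transforms


def jpeg_roundtrip(x, quality=75):
    """Encode a [0,1] CHW image tensor as JPEG and decode it back.

    Perturbations that do not survive this round trip lose their
    protective effect against downstream recognition models.
    """
    to_pil, to_tensor = transforms.ToPILImage(), transforms.ToTensor()
    buf = io.BytesIO()
    to_pil(x.clamp(0, 1)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return to_tensor(Image.open(buf).convert("RGB"))
```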
1 Citation

Enhancing the robustness of vision transformer defense against adversarial attacks based on squeeze-and-excitation module

The robustness of the ViT model against adversarial attacks is investigated and enhanced by introducing a ResNet-SE module that acts on the attention module of the ViT model.
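The exact ResNet-SE placement inside the ViT attention module is not detailed in this summary; the sketch below shows a standard squeeze-and-excitation block (the channel-reweighting component the summary refers to) in PyTorch, as an assumption about the building block rather than the authors' exact module.

```python
import torch.nn as nn


class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block: global average pooling
    ("squeeze") followed by a two-layer gate that reweights channels
    ("excitation")."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.gate(x.mean(dim=(2, 3)))       # squeeze to (B, C), then gate
        return x * w.view(x.size(0), -1, 1, 1)  # channel-wise reweighting
```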



Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

A translation-invariant attack method generates more transferable adversarial examples against defense models, fooling eight state-of-the-art defenses at an 82% success rate on average based only on transferability and demonstrating the insecurity of current defense techniques.
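The translation-invariant idea is usually implemented by convolving the input gradient with a pre-defined kernel, which approximates attacking an ensemble of shifted images. A minimal sketch under that assumption follows; kernel size and sigma are typical defaults, not values from this page.

```python
import torch
import torch.nn.functional as F


def ti_smooth(grad, ksize=15, sigma=3.0):
    """Smooth the input gradient with a depthwise Gaussian convolution,
    approximating an ensemble of translated inputs."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = (kernel / kernel.sum()).to(grad)
    c = grad.size(1)
    kernel = kernel.expand(c, 1, ksize, ksize).contiguous()  # one kernel per channel
    return F.conv2d(grad, kernel, padding=ksize // 2, groups=c)
```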

Improving Transferability of Adversarial Examples With Input Diversity

This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method generates adversarial examples that transfer much better to different networks than existing baselines.
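A minimal sketch of the random transformation step the summary describes: with some probability, the batch is randomly resized and zero-padded back to a fixed size before the gradient is computed. The size range and probability here are conventional defaults, not quoted values.

```python
import torch
import torch.nn.functional as F


def input_diversity(x, low=224, high=254, p=0.5):
    """With probability p, randomly resize the batch and zero-pad it back to
    a fixed (high x high) size; otherwise return the input unchanged."""
    if torch.rand(1).item() >= p:
        return x
    size = int(torch.randint(low, high, (1,)))
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = high - size
    left = int(torch.randint(0, pad + 1, (1,)))
    top = int(torch.randint(0, pad + 1, (1,)))
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```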

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
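The robust-optimization view trains on worst-case perturbations found by projected gradient descent (PGD) inside an epsilon-ball. Below is a minimal PGD sketch under the usual L-infinity threat model; step sizes and iteration count are illustrative.

```python
import torch


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization of the robust-optimization objective: iterated
    signed-gradient steps projected back into the eps-ball, with a random
    start."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
        delta = (x + delta).clamp(0, 1) - x    # keep x + delta a valid image
    return (x + delta).detach()
```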

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack that generates more transferable adversarial examples against defense models, exhibiting higher transferability and achieving higher attack success rates than state-of-the-art gradient-based attacks.
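A minimal sketch combining the two ideas named above: the gradient is evaluated at a Nesterov-style lookahead point and averaged over down-scaled copies of the input (scale invariance). The function name and hyperparameters are assumptions following the common formulation.

```python
import torch


def si_ni_gradient(model, x_adv, y, momentum, alpha=2 / 255, mu=1.0, m=5):
    """Evaluate the loss at the Nesterov lookahead point and average the
    gradient over m scaled copies x / 2^i (scale-invariant method)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_nes = (x_adv + alpha * mu * momentum).detach().requires_grad_(True)
    loss = sum(loss_fn(model(x_nes / (2 ** i)), y) for i in range(m))
    (grad,) = torch.autograd.grad(loss, x_nes)
    return grad / m
```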

Admix: Enhancing the Transferability of Adversarial Attacks

A new input-transformation-based attack method called Admix mixes the input image with a set of images randomly sampled from other categories, achieving significantly better transferability than existing input transformation methods under both single-model and ensemble-model settings.
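A minimal sketch of the admix step, assuming a pool x_other of images from other categories: a small fraction of a sampled image is added to the input before the gradient is computed, while the original label is kept. The mixing ratio eta and number of mixes m are illustrative.

```python
import torch


def admix_gradient(model, x, y, x_other, eta=0.2, m=3):
    """Average the loss gradient over m inputs, each admixed with a randomly
    sampled image from another category; labels stay those of x."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x = x.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(m):
        idx = torch.randint(0, x_other.size(0), (x.size(0),))
        loss = loss + loss_fn(model(x + eta * x_other[idx]), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad / m
```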

Towards compression-resistant privacy-preserving photo sharing on social networks

This paper makes the first attempt to investigate a generic compression-resistant scheme to protect photo privacy against DNNs in the social-network scenario, proposing the Compression-Resistant Adversarial framework (ComReAdv), which achieves adversarial examples robust to an unknown compression method.

Boosting Adversarial Attacks with Momentum

A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
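A minimal sketch of the momentum mechanism described above: the current gradient is L1-normalized and accumulated into a decayed running direction whose sign drives each step. Hyperparameters are typical defaults, not values from this page.

```python
import torch


def mi_fgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate normalized gradients into a
    momentum term g, then step along sign(g) and clip to the eps-ball."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Decay the running momentum and add the L1-normalized gradient.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```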

Transferable Adversarial Perturbations

It is shown that maximizing the distance between natural images and their adversarial examples in intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks, and that smooth regularization on adversarial perturbations enables transfer across models.
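A sketch of the kind of objective the summary describes, assuming feature maps feat_clean and feat_adv are extracted from some intermediate layer: the feature distance is maximized while a total-variation-style penalty stands in for the smoothness regularizer (the paper's exact regularizer is not specified here).

```python
import torch


def transferable_feature_loss(feat_clean, feat_adv, delta, lam=1e-3):
    """Objective sketch: maximize the distance between clean and adversarial
    intermediate feature maps (by minimizing its negative) while penalizing
    non-smooth perturbations delta."""
    dist = (feat_adv - feat_clean).pow(2).mean()
    tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() \
       + (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    return -dist + lam * tv
```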

Delving into Transferable Adversarial Examples and Black-box Attacks

This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and also the first to study the transferability of targeted adversarial examples with their target labels.

Backpropagating Linearly Improves Transferability of Adversarial Examples

LinBP is introduced, a method that performs backpropagation in a more linear fashion using off-the-shelf gradient-based attacks; it outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs.
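The core idea can be illustrated with a ReLU whose backward pass ignores the nonlinearity, so gradients flow as if the layer were linear. This is a minimal PyTorch sketch of that trick, not the authors' exact layer-selection scheme.

```python
import torch


class LinearBackpropReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass: gradients
    are propagated as if the activation were linear."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Skip the ReLU derivative entirely.
        return grad_output
```

Swapping selected activations for LinearBackpropReLU.apply when computing attack gradients reproduces the "more linear" backpropagation in spirit.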