Cross-resolution face recognition adversarial attacks

@article{Massoli2020CrossresolutionFR,
  title={Cross-resolution face recognition adversarial attacks},
  author={Fabio Valerio Massoli and Fabio Falchi and Giuseppe Amato},
  journal={Pattern Recognition Letters},
  year={2020},
  volume={140},
  pages={222--229}
}
Abstract
Face Recognition is among the best examples of computer vision problems where the supremacy of deep learning techniques compared to standard ones is undeniable. Unfortunately, it has been shown that they are vulnerable to adversarial examples - input images to which a human-imperceptible perturbation is added to lead a learning model to output a wrong prediction. Moreover, in applications such as biometric systems and forensics, cross-resolution scenarios are easily met with a non…
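The notion of an adversarial example described in the abstract can be made concrete with a minimal sketch. The snippet below implements FGSM (the fast gradient sign method), one of the simplest gradient-based attacks, in PyTorch; it is purely illustrative and not the attack studied in the paper. The names `model`, `image`, `label`, and `epsilon` are hypothetical placeholders: a batched differentiable classifier returning logits, a pixel tensor in [0, 1], integer class labels, and the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    # Clone the input and track gradients with respect to the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded in L-infinity by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    # Clamp so the result remains a valid image.
    return adversarial.clamp(0.0, 1.0).detach()
```

A small epsilon keeps the perturbation visually imperceptible while still being enough, in practice, to flip the prediction of an undefended model.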
Citations

Biometrics: Trust, but Verify
TLDR
Insights are provided into how the biometric community can address core biometric recognition system design issues to better instill trust, fairness, and security for all.
Super-resolving blurry face images with identity preservation
TLDR
An identity-preservation-based deep learning method is proposed for super-resolving blurry face images; results indicate that facial identity can serve as an effective prior for face image restoration.
Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
TLDR
This paper analyzes key challenges to the successful deployment of AI and ML in human-centric applications, including security, robustness, interpretability, and ethics, with particular emphasis on the convergence of these challenges.

References

Showing 1-10 of 34 references
Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition
  • Yinpeng Dong, Hang Su, +4 authors Jun Zhu
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
TLDR
This paper evaluates the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
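To make the hard-label setting concrete: the attacker observes only predicted labels, so gradient steps like FGSM are unavailable. Below is a toy random-search sketch of that query loop, assuming a hypothetical `query_fn` that returns the model's top-1 label; the paper's actual evolutionary attack is far more query-efficient, and every name here is illustrative.

```python
import numpy as np

def hard_label_attack(query_fn, x, true_label, steps=1000, sigma=0.05, rng=None):
    # Random search for a nearby input whose hard label differs from true_label,
    # using only label queries (no gradients, no confidence scores).
    rng = np.random.default_rng() if rng is None else rng
    best = None
    for _ in range(steps):
        candidate = np.clip(x + sigma * rng.standard_normal(x.shape), 0.0, 1.0)
        if query_fn(candidate) != true_label:
            # Keep the adversarial candidate closest to the original image.
            if best is None or np.linalg.norm(candidate - x) < np.linalg.norm(best - x):
                best = candidate
    return best
```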
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
TLDR
This paper attempts to unravel three aspects of the robustness of DNNs for face recognition, in terms of vulnerability to attacks inspired by commonly observed distortions in the real world, and presents several effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
TLDR
A novel class of attacks is defined: attacks that are physically realizable and inconspicuous, allowing an attacker to evade recognition or impersonate another individual; a systematic method to automatically generate such attacks by printing a pair of eyeglass frames is developed.
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
TLDR
Elastic-net attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case, suggesting novel insights on leveraging $L_1$ distortion in adversarial machine learning and the security implications of DNNs.
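For context, the elastic-net objective this summary refers to can be sketched as below; the notation is assumed to follow the EAD paper, with $x_0$ the original input, $f(x, t)$ a targeted attack loss, and $c$, $\beta$ regularization weights. Setting $\beta = 0$ drops the $L_1$ term and recovers the $L_2$ attack mentioned above as a special case.

```latex
% Elastic-net attack objective: attack loss plus L1 and squared-L2 regularizers.
\min_{x} \; c \cdot f(x, t) + \beta \lVert x - x_0 \rVert_1 + \lVert x - x_0 \rVert_2^2
\quad \text{subject to} \quad x \in [0, 1]^p
```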
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
TLDR
Fawkes is a system that allows individuals to inoculate themselves against unauthorized facial recognition models by helping users add imperceptible pixel-level changes to their own photos before publishing them online.
Face Verification and Recognition for Digital Forensics and Information Security
TLDR
An extensive evaluation of face recognition and verification approaches is performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTi-FORESEE), verifying the effectiveness of deep learning approaches in a specific scenario.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
TLDR
An algorithm is proposed that leverages disentangled semantic factors to generate adversarial perturbations by altering controlled semantic attributes, fooling the learner towards various "adversarial" targets.
Improving Multi-scale Face Recognition Using VGGFace2
TLDR
This paper describes the training campaign used to fine-tune a ResNet-50 architecture with Squeeze-and-Excitation blocks on the task of very-low- and mixed-resolution face recognition; the performance of the final model is tested on the IJB-B dataset.
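A mixed-resolution fine-tuning regime like the one summarized here is typically driven by an augmentation that randomly degrades input resolution. Below is a hypothetical sketch of such a transform using torchvision; the resolution range, probability, and input size are assumptions for illustration, not the paper's exact training schedule.

```python
import random
import torchvision.transforms.functional as TF

def random_resolution(img, low=8, high=256, native=256, p=0.5):
    # With probability p, downsample the face crop to a random resolution
    # and upsample back, simulating a low-resolution probe image.
    if random.random() < p:
        side = random.randint(low, high)
        img = TF.resize(TF.resize(img, side), native)
    # Always return the network's native input size.
    return TF.resize(img, native)
```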