LOTS about attacking deep features

@article{Rozsa2017LOTSAA,
  title={LOTS about attacking deep features},
  author={Andras Rozsa and Manuel G{\"u}nther and Terrance E. Boult},
  journal={2017 IEEE International Joint Conference on Biometrics (IJCB)},
  year={2017},
  pages={168-176}
}
Deep neural networks provide state-of-the-art performance on various tasks and are, therefore, widely used in real-world applications. DNNs are increasingly utilized in biometrics for extracting deep features, which can be used in recognition systems for enrolling and recognizing new individuals. It has been revealed that deep neural networks suffer from a fundamental problem: they can unexpectedly misclassify examples formed by slightly perturbing correctly recognized inputs. Various…
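The paper's attack, layerwise origin-target synthesis (LOTS), perturbs an origin image so that its deep feature representation at a chosen layer approaches that of a target. Below is a minimal sketch of such a feature-space attack, assuming PyTorch and a hypothetical feature extractor features(x) (a network truncated at the chosen layer); it illustrates the general technique rather than reproducing the authors' exact implementation.

import torch

def feature_attack(x_origin, f_target, features, step=1.0, iters=100, eps=1e-3):
    """Iteratively perturb x_origin so that features(x_origin) approaches f_target.

    features : callable mapping an image tensor to a deep feature tensor
               (hypothetical; e.g., a network truncated at a chosen layer).
    f_target : feature vector of the target image at the same layer.
    """
    x = x_origin.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = 0.5 * (features(x) - f_target).pow(2).sum()  # Euclidean feature loss
        if loss.item() < eps:                               # close enough to the target
            break
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= step * grad / grad.abs().max()             # normalized gradient step
            x.clamp_(0.0, 1.0)                              # keep a valid image
    return x.detach()

The resulting image remains perceptually close to the origin while its deep feature, and hence any template built from it, mimics the target; this is the threat model that the citing papers below build on.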

Towards Transferable Adversarial Attack Against Deep Face Recognition
TLDR
This work proposes DFANet, a dropout-based method applied in convolutional layers that can increase the diversity of surrogate models and obtain ensemble-like effects in face recognition, and shows that the proposed method significantly enhances the transferability of existing attack methods.
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
TLDR
ReFace is proposed, a real-time, highly transferable attack on face recognition models based on Adversarial Transformation Networks (ATNs), together with a new ATN architecture that closes the gap to gradient-based attacks like PGD on large face recognition datasets.
Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition
TLDR
This paper attempts to unravel three aspects of the robustness of DNNs for face recognition: vulnerability to attacks; detecting singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and making corrections to the processing pipeline to alleviate the problem.
Adversarial Attack on Deep Learning-Based Splice Localization
TLDR
This work demonstrates on three non-end-to-end deep learning-based splice localization tools that hiding manipulations of images is feasible via adversarial attacks and finds that the formed adversarial perturbations can be transferable among them regarding the deterioration of their localization performance.
Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey
TLDR
This article presents a comprehensive survey on adversarial attacks against FR systems, elaborates on the competence of new countermeasures against them, and proposes a taxonomy of existing attack and defense strategies according to different criteria.
Improving Adversarial Attacks on Face Recognition Using a Modified Image Translation Model
TLDR
BiasGAN can be inserted as a preprocessor prior to conducting adversarial attacks on face recognition models to improve attack performance; experiments demonstrate that the method makes improvements at different perturbation levels and performs even better in the low-perturbation range.
Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes
TLDR
An in-depth investigation of the adversarial robustness achieved by the ECOC approach is carried out, proposing a new adversarial attack specifically designed for multi-label classification architectures, like the ECOC-based one, and applying two existing attacks.
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
TLDR
A new white-box backdoor attack that exploits a vulnerability of convolutional neural networks (CNNs) is presented, enabling attackers to significantly increase the chance that inputs they supply will be falsely accepted by a CNN while preserving the error rates for legitimately enrolled classes.
Adversarial Robustness for Face Recognition: How to Introduce Ensemble Diversity among Feature Extractors?
TLDR
This paper significantly enhances robustness against adversarial examples (AXs) under white-box and black-box settings while slightly increasing accuracy, and compares the method with adversarial training.
GlassMasq: Adversarial Examples Masquerading in Face Identification Systems with Feature Extractor
TLDR
To obtain adversarial examples with high confidence and small perturbation, a condition that adversarial examples against face identification systems should satisfy is introduced, and a new method called GlassMasq is then introduced to create adversarial examples based on this condition.
…

References

SHOWING 1-10 OF 35 REFERENCES
Adversarial Diversity and Hard Positive Generation
TLDR
A new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images and the notion of hard positive generation are introduced, and a novel hot/cold approach for adversarial example generation is presented, which provides multiple possible adversarial perturbations for every single image.
Adversarial Machine Learning at Scale
TLDR
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
Adversarial Manipulation of Deep Representations
TLDR
While the adversarial image is perceptually similar to one image, its internal representation appears remarkably similar to that of a different image, one from a different class, bearing little if any apparent similarity to the input; such adversaries appear generic and consistent with the space of natural images.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
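The fast gradient sign method (FGSM) proposed in this paper follows directly from the linearity argument: a single step of size epsilon in the direction of the sign of the loss gradient. A minimal sketch, assuming PyTorch and a differentiable classifier model (a hypothetical name, not from the paper):

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.007):
    """One-step FGSM: move each pixel by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss of the true labels y
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()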
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Deep Learning Face Representation from Predicting 10,000 Classes
TLDR
It is argued that DeepID features can be effectively learned through challenging multi-class face identification tasks, whilst they generalize to other tasks (such as verification) and to new identities unseen in the training set.
L2-constrained Softmax Loss for Discriminative Face Verification
TLDR
This paper adds an L2-constraint to the feature descriptors, restricting them to lie on a hypersphere of a fixed radius, and shows that integrating this simple step into the training pipeline significantly boosts the performance of face verification.
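The constraint itself is a single rescaling step before the classifier: features are projected onto a hypersphere of fixed radius alpha. A minimal sketch, assuming PyTorch (the module name and the default alpha are illustrative, not taken from the paper):

import torch
import torch.nn.functional as F

class L2ConstrainedFeatures(torch.nn.Module):
    """Rescale feature descriptors to lie on a hypersphere of radius alpha."""
    def __init__(self, alpha=16.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, x):
        return self.alpha * F.normalize(x, p=2, dim=1)  # alpha * x / ||x||_2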
FaceNet: A unified embedding for face recognition and clustering
TLDR
A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
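Because the embedding is trained so that Euclidean distance tracks face similarity, verification reduces to thresholding the distance between two embeddings. A minimal sketch, assuming 128-D embeddings already computed by some hypothetical embed(img); the threshold is illustrative and dataset-dependent:

import torch

def same_identity(emb_a, emb_b, threshold=1.1):
    """Verify two 128-D FaceNet-style embeddings by squared L2 distance.

    threshold is dataset-dependent; 1.1 is an illustrative value,
    not one taken from the paper.
    """
    return (emb_a - emb_b).pow(2).sum().item() < threshold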
Deep Learning Face Representation by Joint Identification-Verification
TLDR
This paper shows that the face identification-verification task can be well solved with deep learning using both face identification and verification signals as supervision, significantly reducing the error rate.
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition
TLDR
A series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13 suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
…