Corpus ID: 52183126

Open Set Adversarial Examples

@article{Zheng2018OpenSA,
  title={Open Set Adversarial Examples},
  author={Zhedong Zheng and Liang Zheng and Zhilan Hu and Yi Yang},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.02681}
}
Adversarial examples in recent works target closed set recognition systems, in which the training and testing classes are identical. In real-world scenarios, however, the testing classes may have limited, if any, overlap with the training classes, a problem known as open set recognition. To our knowledge, the community does not yet have a specific design of adversarial examples targeting this practical setting. Arguably, the new setting compromises traditional closed set attack methods in two…
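To make the distinction concrete, the sketch below contrasts a standard closed-set attack, which needs a class label shared between training and testing, with a label-free feature-space variant that stays applicable when the testing classes are unseen. This is only a minimal PyTorch illustration under assumed names (`model`, `feat_extractor`, `eps`), not the attack proposed in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_closed_set(model, x, y, eps=8/255):
    """Closed-set FGSM: requires the label y, so it presupposes that the
    testing classes were also present at training time."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def fgsm_feature_space(feat_extractor, x, eps=8/255):
    """Label-free variant: push the embedding of x away from its clean
    embedding, which still applies when the testing identities were never
    seen during training (the open set / retrieval setting)."""
    with torch.no_grad():
        clean_feat = feat_extractor(x)
    x_adv = x.clone().detach().requires_grad_(True)
    dist = F.mse_loss(feat_extractor(x_adv), clean_feat)
    dist.backward()
    # Ascend the distance so the perturbed embedding drifts away from the clean one.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```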

Citations

Recent Advances in Open Set Recognition: A Survey

This paper provides a comprehensive survey of existing open set recognition techniques, covering aspects ranging from related definitions, model representations, datasets, and evaluation criteria to algorithm comparisons, highlighting the limitations of existing approaches and pointing out promising directions for subsequent research.

Generating Adaptive Targeted Adversarial Examples for Content-Based Image Retrieval

The proposed Adaptive Targeted Attack Generative Adversarial Network (ATA-GAN) is a GAN-based model with a generator and discriminator that extends the attack adaptability by exploiting the target images as conditional input for the generative model.

Discriminator-free Generative Adversarial Attack

This work finds that a discriminator may not be necessary for generative adversarial attacks and proposes the Symmetric Saliency-based Auto-Encoder (SSAE) to generate the perturbations, composed of a saliency map module and an angle-norm disentanglement of the features module.

Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency

As the first adversarial attack detection approach for ReID, MEAAD effectively detects various adversarial attacks and achieves high ROC-AUC (over 97.5%).

Universal Adversarial Perturbations Against Person Re-Identification

This paper attacks re-ID models using universal adversarial perturbations (UAPs), which are especially dangerous to surveillance systems because they can fool most pedestrian images with little overhead, and proposes an effective method to train UAPs against person re-ID models from a global list-wise perspective.
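As a rough illustration of why a single shared perturbation is cheap to deploy, the sketch below trains one image-agnostic `delta` over an entire loader and projects it back into an L-infinity budget after each step. The feature-distance surrogate loss, the input resolution, and the names (`feat_extractor`, `loader`, `eps`) are assumptions standing in for the cited paper's list-wise ranking objective.

```python
import torch
import torch.nn.functional as F

def train_uap(feat_extractor, loader, eps=8/255, lr=1e-2, epochs=5, device="cpu"):
    # One shared perturbation, broadcast over every image in the batch
    # (assumed 256x128 re-ID input resolution).
    delta = torch.zeros(1, 3, 256, 128, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            with torch.no_grad():
                clean = feat_extractor(x)                  # reference embeddings
            adv = feat_extractor((x + delta).clamp(0, 1))  # embeddings under the UAP
            loss = -F.mse_loss(adv, clean)                 # maximize feature drift
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                    # stay inside the L_inf budget
    return delta.detach()
```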

QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval

  • Xiaodan Li, Jinfeng Li, Hui Xue
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
This paper makes the first attempt at a query-based attack against image retrieval (QAIR), aiming to completely subvert the top-k retrieval results; it measures the set similarity between the top-k results before and after the attack and uses it to guide the gradient optimization.
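A set-similarity score of this kind is simple to state; the snippet below is a hypothetical helper that measures how much of the original top-k result set survives an attack, which a query-based attacker could try to drive toward zero with only query access to the retrieval system. The function name and the use of retrieved IDs are assumptions, not QAIR's exact formulation.

```python
def topk_set_similarity(ids_before, ids_after, k=10):
    """Fraction of the original top-k retrieved IDs still present after
    perturbing the query; 1.0 means the ranking is intact, 0.0 means the
    top-k results were completely subverted."""
    before, after = set(ids_before[:k]), set(ids_after[:k])
    return len(before & after) / k
```

Treating this overlap as the attack loss, a gradient-free or gradient-estimation search over the query perturbation can minimize it without any white-box knowledge of the retrieval model.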

Targeted Mismatch Adversarial Attack: Query With a Flower to Retrieve the Tower

This work introduces the concept of a targeted mismatch attack for deep learning based retrieval systems, generating an adversarial image that conceals the query image, and shows successful attacks on partially unknown systems.

Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning

This study argues that learning powerful attackers with high universality that work well on unseen domains is an important step in promoting the robustness of re-ID systems, and introduces a novel universal attack algorithm called ``MetaAttack'' for person re-ID.

Part-Based Feature Squeezing To Detect Adversarial Examples in Person Re-Identification Networks

Experimental results show that the proposed method can effectively detect adversarial examples, and has the potential to avoid the significant decreases in person ReID performance that they cause.
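For reference, the classic feature-squeezing idea this detector builds on can be sketched in a few lines: compare a model's output on an input and on a "squeezed" copy (here, bit-depth reduction), and flag the input when the two disagree too much. The names (`feat_extractor`, `threshold`) and the squeezer choice are assumptions; the cited paper's part-based ReID variant is not reproduced here.

```python
import torch

def bit_depth_squeeze(x, bits=4):
    """Reduce color depth: clean images barely change, while finely tuned
    adversarial perturbations tend to be destroyed by the quantization."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def looks_adversarial(feat_extractor, x, threshold=0.5):
    """Flag inputs whose embedding moves too far once squeezed; the
    threshold would be calibrated on clean validation images."""
    with torch.no_grad():
        d = (feat_extractor(x) - feat_extractor(bit_depth_squeeze(x))).norm(dim=1)
    return d > threshold
```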

Wasserstein Metric Attack on Person Re-identification

This work is the first to propose the Wasserstein metric for generating adversarial samples for the ReID task, and outperforms the SOTA attack methods by 17.2% in white-box attacks and 14.4% in black-box attacks.

References

Showing 1-10 of 41 references

Adversarial examples in the physical world

It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical world scenarios, machine learning systems are vulnerable to adversarial examples.
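This reference also introduced the iterative ("basic iterative method") extension of FGSM; a minimal PyTorch sketch follows, with the step size, budget, and iteration count chosen arbitrarily for illustration.

```python
import torch
import torch.nn.functional as F

def basic_iterative_attack(model, x, y, eps=8/255, alpha=1/255, steps=10):
    """Repeated small sign-gradient steps, clipped so the result stays
    within an eps-ball of the original image and in the valid pixel range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```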

Adversarial Examples for Semantic Segmentation and Object Detection

This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.

Simple Black-Box Adversarial Perturbations for Deep Networks

This work focuses on deep convolutional neural networks and demonstrates that adversaries can easily craft adversarial examples even without any internal knowledge of the target network.

The Limitations of Deep Learning in Adversarial Settings

This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

Open Set Domain Adaptation

This work learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset.

Delving into Transferable Adversarial Examples and Black-box Attacks

This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and it is also the first to study the transferability of targeted adversarial examples with their target labels.

Robust Physical-World Attacks on Deep Learning Visual Classification

This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.

Open set face recognition using transduction

  • Fayin Li, H. Wechsler
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2005
Open set TCM-kNN (transduction confidence machine-k nearest neighbors), suitable for multiclass authentication operational scenarios that must include a rejection option for classes never enrolled in the gallery, is shown to be suitable for PSEI (pattern specific error inhomogeneities) error analysis in order to identify difficult-to-recognize faces.

Boosting Adversarial Attacks with Momentum

This work proposes a broad class of momentum-based iterative algorithms that boost adversarial attacks by integrating a momentum term into the iterative attack process, which stabilizes update directions and helps escape poor local maxima during the iterations, resulting in more transferable adversarial examples.
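The momentum update itself is compact; the sketch below follows the MI-FGSM recipe of accumulating an L1-normalized gradient into a velocity term and stepping along its sign, with hyperparameters chosen only for illustration.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """Momentum iterative FGSM: the velocity g keeps update directions
    stable across iterations, which improves transferability."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Accumulate the L1-normalized gradient into the momentum term.
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```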

Practical Black-Box Attacks against Machine Learning

This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.