Corpus ID: 204800728

Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking

@article{Guo2019SpatialawareOA,
  title={Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking},
  author={Qing Guo and Xiaofei Xie and L. Ma and Zhongguo Li and Wei Feng and Yang Liu},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.08681}
}
Adversarial attacks on deep neural networks have been intensively studied for image, audio, natural language, patch, and pixel classification tasks. Nevertheless, as a typical yet important real-world application, adversarial attacks on online video object tracking, which traces an object's moving trajectory instead of its category, are rarely explored. In this paper, we identify a new task for adversarial attacks on visual object tracking: online generating imperceptible perturbations…
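The imperceptible perturbations the abstract describes are typically produced by gradient-based optimization. The following is a generic, minimal sketch of an iterative sign-gradient attack under an L-infinity budget, not the paper's spatial-aware method; `toy_score`, `numeric_grad`, and `pgd_attack` are all hypothetical names for illustration.

```python
# Illustrative sketch: iteratively perturb an input to lower a toy
# differentiable "tracker response" score, keeping the perturbation
# within a small L-infinity budget so it stays imperceptible.
import numpy as np

def toy_score(x, w):
    """Toy stand-in for a tracker's response at the target location."""
    return float(np.dot(w, x))

def numeric_grad(f, x, eps=1e-5):
    """Central finite-difference gradient of f at x (illustration only)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def pgd_attack(x, f, step=0.01, budget=0.05, iters=20):
    """Descend the score with sign-gradient steps, projecting back
    into the L-infinity ball of radius `budget` around x."""
    x_adv = x.copy()
    for _ in range(iters):
        g = numeric_grad(f, x_adv)
        x_adv = x_adv - step * np.sign(g)               # move against the score
        x_adv = np.clip(x_adv, x - budget, x + budget)  # stay imperceptible
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
x_adv = pgd_attack(x, lambda z: toy_score(z, w))
print(toy_score(x_adv, w) < toy_score(x, w))  # True: the score drops
```

The key design point shared with tracking attacks is the projection step: the perturbation's strength is capped per pixel, so degradation of the tracker's response must come from the perturbation's direction, not its magnitude.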
Citations

Out-of-distribution detection and generalization to enhance fairness in age prediction
This work develops an out-of-distribution detection technique to select the data most relevant to the deep neural network's (DNN) task when balancing the data among age, ethnicity, and gender, and shows promising results.
Fairness Matters - A Data-Driven Framework Towards Fair and High Performing Facial Recognition Systems
A novel approach to mitigate unfairness and enhance the performance of state-of-the-art face recognition algorithms is presented, using distribution-aware dataset curation and augmentation, achieving a 4-fold increase in fairness towards ethnicity compared to related work.
Amora: Black-box Adversarial Morphing Attack
This work investigates and introduces a new type of adversarial attack that evades FR systems by manipulating facial content, called the adversarial morphing attack (a.k.a. Amora), and shows that a novel black-box adversarial attack based on local deformation is possible and is vastly different from additive-noise attacks.

References

Showing 1-10 of 49 references
VITAL: VIsual Tracking via Adversarial Learning
The VITAL algorithm uses a generative network to randomly generate masks, which are applied to adaptively drop out input features to capture a variety of appearance changes, and identifies the mask that maintains the most robust features of the target object over a long temporal span.
Transferable Adversarial Attacks for Image and Video Object Detection
The proposed method is based on the Generative Adversarial Network (GAN) framework, combining a high-level class loss and a low-level feature loss to jointly train the adversarial example generator; it can efficiently generate image and video adversarial examples with better transferability.
Physical Adversarial Textures That Fool Visual Object Tracking
  • R. Wiyatno, Anqi Xu
  • Computer Science, Engineering
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
While the Expectation Over Transformation (EOT) algorithm is used to generate physical adversaries that fool tracking models when imaged under diverse conditions, the impacts of different scene variables are compared to find practical attack setups with high adversarial strength and fast convergence.
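The EOT idea mentioned above can be sketched generically: average the objective's gradient over a distribution of random transformations so the adversary remains effective under varied imaging conditions. This is a toy illustration, not the paper's implementation; `transformed_loss` and `eot_gradient` are hypothetical names, and the transformation family (random brightness plus noise) is an assumption for demonstration.

```python
# Hedged sketch of Expectation Over Transformation (EOT): estimate
# E_t[ d loss(t(x)) / dx ] by Monte-Carlo sampling over random
# transformations, using finite differences on a toy linear loss.
import numpy as np

def transformed_loss(x, w, rng):
    """Toy loss of a randomly transformed input (brightness + noise)."""
    scale = rng.uniform(0.8, 1.2)              # random brightness factor
    noise = rng.normal(0, 0.01, size=x.shape)  # random sensor noise
    return float(np.dot(w, scale * x + noise))

def eot_gradient(x, w, rng, samples=32, eps=1e-4):
    """Monte-Carlo estimate of the expected gradient over transformations.
    Each sample fixes one transformation (via a shared seed) and takes a
    central finite difference through it."""
    g = np.zeros_like(x)
    for _ in range(samples):
        seed = rng.integers(1 << 30)  # same transformation for +/- probes
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            lp = transformed_loss(x + d, w, np.random.default_rng(seed))
            lm = transformed_loss(x - d, w, np.random.default_rng(seed))
            g[i] += (lp - lm) / (2 * eps)
    return g / samples

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
g = eot_gradient(np.zeros(3), w, rng)
print(np.sign(g) == np.sign(w))  # gradient direction survives the transformations
```

Averaging over transformations is what makes the resulting perturbation "physical": it is optimized for the whole distribution of viewing conditions rather than one fixed rendering.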
Sparse Adversarial Perturbations for Videos
An l2,1-norm-based optimization algorithm is proposed to compute sparse adversarial perturbations for videos; action recognition is chosen as the targeted task, with CNN+RNN networks as threat models to verify the method.
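For context on the l2,1 norm used there: under the common formulation (an assumption here, since the entry does not spell it out), it sums the per-frame l2 norms of the video perturbation, so minimizing it drives entire frames' perturbations to zero and yields frame-level sparsity. A minimal sketch:

```python
# Minimal sketch of the l2,1 norm on a (frames x pixels) perturbation:
# the l2 norm is taken within each frame, then summed across frames.
import numpy as np

def l21_norm(perturbation):
    """perturbation: array of shape (frames, pixels).
    Returns sum over frames f of ||perturbation[f]||_2."""
    per_frame = np.linalg.norm(perturbation, axis=1)  # l2 norm of each frame
    return float(per_frame.sum())

E = np.array([[3.0, 4.0],   # frame 0: l2 norm 5
              [0.0, 0.0]])  # frame 1: untouched (sparse)
print(l21_norm(E))  # 5.0
```

Unlike a plain l1 penalty, which sparsifies individual pixels, this group penalty zeroes out whole rows (frames), matching the goal of perturbing only a few frames of the video.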
SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation
The positive samples generation network (PSGN) is introduced to sample massive, diverse training data by traversing the constructed target-object manifold; the generated diverse target-object images enrich the training dataset and enhance the robustness of visual trackers.
Adversarial Examples for Semantic Segmentation and Object Detection
This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.
Robust Adversarial Perturbation on Deep Proposal-based Models
A robust adversarial perturbation (R-AP) method is described that attacks deep proposal-based object detectors and instance segmentation algorithms to universally degrade their performance in a black-box fashion.
Learning Dynamic Siamese Network for Visual Object Tracking
This paper proposes a dynamic Siamese network, via a fast transformation learning model, that enables effective online learning of target appearance variation and background suppression from previous frames, and presents elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features.
Fully-Convolutional Siamese Networks for Object Tracking
A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network, trained end-to-end on the ILSVRC15 dataset for object detection in video, and achieves state-of-the-art performance on multiple benchmarks.
Universal Adversarial Perturbations Against Semantic Image Segmentation
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output, and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.