Physical Adversarial Textures That Fool Visual Object Tracking

@inproceedings{Wiyatno2019PhysicalAT,
  title={Physical Adversarial Textures That Fool Visual Object Tracking},
  author={Rey Reza Wiyatno and Anqi Xu},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={4821-4830}
}
We present a method for creating inconspicuous-looking textures that, when displayed as posters in the physical world, cause visual object tracking systems to become confused. As a target being visually tracked moves in front of such a poster, the poster's adversarial texture causes the tracker to lock onto the poster instead of the target, thus allowing the target to evade. This work evaluates several optimization strategies for fooling seldom-targeted regression models: non-targeted, targeted, and a newly-coined family…
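To make the idea above concrete, here is a minimal, hypothetical PyTorch sketch of gradient-based texture optimization against a tracking regressor: the target is composited over the poster at random positions, and the texture is updated so that the tracker's box regression drifts toward an attacker-chosen location. The toy_tracker network, the compositing, and all hyperparameters are illustrative stand-ins, not the paper's actual models or losses.

# Hypothetical sketch of gradient-based adversarial texture optimization
# against a tracking regressor. The "tracker" below is a toy stand-in,
# NOT the model or training setup used in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy tracker: maps a 3x64x64 search image to a box (cx, cy, w, h) in [0, 1].
toy_tracker = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4), nn.Sigmoid(),
)
for p in toy_tracker.parameters():
    p.requires_grad_(False)

texture = torch.rand(1, 3, 64, 64, requires_grad=True)  # the poster texture
target_patch = torch.rand(1, 3, 16, 16)                  # stand-in for the tracked target
goal_box = torch.tensor([[0.9, 0.9, 0.1, 0.1]])          # targeted attack: drag the box here

opt = torch.optim.Adam([texture], lr=0.01)
for step in range(200):
    # Paste the target at a random location over the poster; randomizing the
    # placement is a crude stand-in for expectation over physical transformations.
    x0, y0 = torch.randint(0, 49, (2,)).tolist()
    mask = torch.zeros(1, 1, 64, 64)
    mask[:, :, y0:y0 + 16, x0:x0 + 16] = 1.0
    patch_canvas = torch.zeros(1, 3, 64, 64)
    patch_canvas[:, :, y0:y0 + 16, x0:x0 + 16] = target_patch
    scene = (1 - mask) * texture.clamp(0, 1) + mask * patch_canvas

    pred_box = toy_tracker(scene)
    # Targeted loss: pull the predicted box toward goal_box. A non-targeted
    # variant would instead maximize the error w.r.t. the true target box.
    loss = ((pred_box - goal_box) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    texture.data.clamp_(0, 1)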
Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers
TLDR
This paper proposes a framework to generate a single temporally-transferable adversarial perturbation from the object template image only; the perturbation can be added to every search image at virtually no cost and still successfully fools the tracker.
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking
TLDR
The online incremental attack (OIA) is proposed, which applies spatial-temporal sparse incremental perturbations online; this makes the adversarial attack less perceptible and much more efficient than basic attacks.
Towards Universal Physical Attacks on Single Object Tracking
TLDR
The maximum textural discrepancy (MTD) is designed: a resolution-invariant, target-location-independent feature de-matching loss that distills global textural information of the template and search images at hierarchical feature scales prior to performing feature attacks.
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking
TLDR
A decision-based black-box attack method for visual object tracking is proposed, which sequentially generates perturbations based on the predicted IoU scores from both current and historical frames, and is validated on state-of-the-art deep trackers.
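A minimal sketch of the decision-based idea, under the assumption that only the tracker's predicted boxes are observable: random candidate noise is kept whenever it lowers the IoU between the attacked and clean predictions. The black_box_track function, the noise schedule, and the budget below are hypothetical stand-ins, not the IoU Attack's actual procedure.

# Decision-based, IoU-driven black-box attack sketch: only predicted boxes
# are observed, and candidate noise is kept if it degrades the overlap with
# the clean prediction. The tracker here is a toy stand-in.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) tensors."""
    x1, y1 = torch.max(a[0], b[0]), torch.max(a[1], b[1])
    x2, y2 = torch.min(a[2], b[2]), torch.min(a[3], b[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def black_box_track(frame):
    """Stand-in for a real tracker: returns a box (x1, y1, x2, y2).
    It simply 'tracks' the brightest 32x32 region, for illustration only."""
    pooled = F.avg_pool2d(frame.mean(0, keepdim=True), 32, stride=8)
    idx = pooled.flatten().argmax().item()
    y, x = divmod(idx, pooled.shape[-1])
    return torch.tensor([x * 8.0, y * 8.0, x * 8.0 + 32, y * 8.0 + 32])

frame = torch.rand(3, 128, 128)          # current video frame
clean_box = black_box_track(frame)       # unattacked prediction
delta = torch.zeros_like(frame)          # accumulated perturbation
budget = 0.05                            # L_inf noise budget

best_iou = 1.0
for _ in range(100):
    candidate = (delta + 0.01 * torch.randn_like(frame)).clamp(-budget, budget)
    adv_box = black_box_track((frame + candidate).clamp(0, 1))
    score = iou(adv_box, clean_box).item()
    if score < best_iou:                 # keep noise that degrades the overlap
        best_iou, delta = score, candidate
print("final IoU with clean prediction:", best_iou)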
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention
TLDR
This paper proposes a novel one-shot adversarial attack method to generate adversarial examples for model-free single object tracking, where merely adding slight perturbations on the target patch in the initial frame causes state-of-the-art trackers to lose the target in subsequent frames.
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors
TLDR
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes and is shown to fool state-of-the-art deep object detectors robustly under varying views, potentially leading to an attacking scheme that is persistently strong in the physical world.
Cooling-Shrinking Attack: Blinding the Tracker With Imperceptible Noises
TLDR
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers using a carefully designed adversarial loss, which can simultaneously cool hot regions where the target exists on the heatmaps and force the predicted bounding box to shrink, making the tracked target invisible to trackers.
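A rough, hypothetical sketch of what a cooling-shrinking style loss could look like, assuming access to the tracker's classification heatmap and size regressions; the tensor shapes, threshold, and weighting below are illustrative, not the paper's implementation.

# Hypothetical "cooling-shrinking" style loss: suppress the heatmap where the
# tracker is confident, and push the regressed width/height toward zero.
import torch

def cooling_shrinking_loss(heatmap, sizes, hot_thresh=0.5, lam=1.0):
    """heatmap: (H, W) target-presence scores in [0, 1].
       sizes:   (H, W, 2) regressed box width/height per location."""
    hot = (heatmap > hot_thresh).float()                    # confident regions
    cooling = (heatmap * hot).sum() / (hot.sum() + 1e-9)    # mean score of hot regions
    shrinking = sizes.abs().mean()                          # overall predicted box size
    return cooling + lam * shrinking                        # minimize both via the perturbation

# In an actual attack the heatmap and sizes would come from a differentiable
# tracker forward pass on the perturbed input; here we only evaluate the loss
# on dummy outputs to show its shape.
heatmap = torch.rand(17, 17)
sizes = torch.rand(17, 17, 2)
print(cooling_shrinking_loss(heatmap, sizes))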
Universal Physical Camouflage Attacks on Object Detectors
TLDR
This paper proposes to learn an adversarial pattern to effectively attack all instances belonging to the same object category, referred to as Universal Physical Camouflage Attack (UPC), which crafts camouflage by jointly fooling the region proposal network, as well as misleading the classifier and the regressor to output errors.
Hijacking Tracker: A Powerful Adversarial Attack on Visual Tracking
TLDR
This paper proposes to add slight adversarial perturbations to the input image via an inconspicuous but powerful attack strategy, a hijacking algorithm that misleads trackers in two ways: shape hijacking, which changes the shape of the model output, and position hijacking, which gradually pushes the output to any position in the image frame.
AdvDrop: Adversarial Attack to DNNs by Dropping Information
TLDR
This work explores the adversarial robustness of DNN models from a novel perspective, crafting adversarial examples by dropping imperceptible details from the image, and demonstrates the effectiveness of this attack, named AdvDrop.

References

Showing 1-10 of 37 references
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
TLDR
This work proposes a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underlie image formation, and presents a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry.
Adversarial Attacks Beyond the Image Space
TLDR
Though image-space adversaries can be interpreted as per-pixel albedo change, it is verified that they cannot be well explained along these physically meaningful dimensions, which often have a non-local effect.
Synthesizing Robust Adversarial Examples
TLDR
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented, which synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
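The core of that paper is Expectation over Transformation (EOT). Below is a minimal, hypothetical PyTorch sketch of the idea: a single perturbation is optimized so the image is misclassified in expectation over random affine transformations. The toy_classifier, the transformation family, and all constants are stand-ins, not the paper's setup.

# Expectation-over-Transformation (EOT) sketch with a toy classifier and a
# simple differentiable rotation + translation as the transformation family.
import math
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
random.seed(0)

toy_classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
for p in toy_classifier.parameters():
    p.requires_grad_(False)

def random_affine(img, max_rot=0.3, max_shift=0.1):
    """Differentiable random rotation + translation via grid_sample."""
    ang = random.uniform(-max_rot, max_rot)
    tx, ty = random.uniform(-max_shift, max_shift), random.uniform(-max_shift, max_shift)
    theta = torch.tensor([[[math.cos(ang), -math.sin(ang), tx],
                           [math.sin(ang),  math.cos(ang), ty]]])
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

x = torch.rand(1, 3, 32, 32)               # clean image
delta = torch.zeros_like(x, requires_grad=True)
target_class = torch.tensor([3])           # targeted attack toward class 3
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(100):
    # Average the targeted loss over a few sampled transformations.
    loss = sum(F.cross_entropy(toy_classifier(random_affine((x + delta).clamp(0, 1))),
                               target_class) for _ in range(4)) / 4
    opt.zero_grad()
    loss.backward()
    opt.step()
    delta.data.clamp_(-0.1, 0.1)           # keep the perturbation bounded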
Adversarial Patch
TLDR
A method to create universal, robust, targeted adversarial image patches in the real world, which can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
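As a rough illustration of the universal-patch idea, the sketch below trains one patch that is pasted at random locations into a batch of images so that a toy, stand-in classifier reports a chosen class; the model, image sizes, and paste procedure are hypothetical, not the paper's actual pipeline.

# Universal targeted patch sketch: one patch, random paste locations, many
# images, optimized against a toy stand-in classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
toy_classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
for p in toy_classifier.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 8, 8, requires_grad=True)   # the universal patch
target_class = torch.tensor([7])                   # class the patch should force
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    imgs = torch.rand(4, 3, 32, 32)                # stand-in for natural images
    x0, y0 = torch.randint(0, 25, (2,)).tolist()   # random paste location
    # Place the 8x8 patch on a 32x32 canvas and build the matching mask.
    canvas = F.pad(patch.clamp(0, 1).unsqueeze(0), (x0, 24 - x0, y0, 24 - y0))
    mask = F.pad(torch.ones(1, 1, 8, 8), (x0, 24 - x0, y0, 24 - y0))
    patched = (1 - mask) * imgs + mask * canvas    # same patch on every image
    loss = F.cross_entropy(toy_classifier(patched),
                           target_class.expand(imgs.shape[0]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)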
A General Framework for Adversarial Examples with Objectives
TLDR
This article proposes adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives, and demonstrates the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains.
Fully-Convolutional Siamese Networks for Object Tracking
TLDR
A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video and achieves state-of-the-art performance in multiple benchmarks.
Robust Physical-World Attacks on Deep Learning Visual Classification
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
TLDR
This paper shows how to create eyeglasses that, when worn, can succeed in targeted or untargeted attacks while improving on previous work in one or more of three facets: inconspicuousness to onlooking observers, robustness of the attack against proposed defenses, and scalability in the sense of decoupling eyeglass creation from the subject who will wear them.
Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples
Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to…
Adversarial Diversity and Hard Positive Generation
TLDR
A new psychometric perceptual adversarial similarity score (PASS) for quantifying adversarial images and the notion of hard positive generation are introduced, and a novel hot/cold approach for adversarial example generation is presented, which provides multiple possible adversarial perturbations for every single image.