SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation

@inproceedings{Wang2018SINTRV,
  title={SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation},
  author={Xiao Wang and Chenglong Li and Bin Luo and Jin Tang},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={4864--4873}
}
Existing visual trackers are easily disturbed by occlusion, blur and large deformation. [...] Specifically, we assume the target objects all lie on a manifold; hence, we introduce the positive samples generation network (PSGN) to sample massive, diverse training data by traversing the constructed target object manifold. The generated diverse target object images enrich the training dataset and enhance the robustness of visual trackers.
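The PSGN idea summarized above, generating new positive samples by traversing a learned target-object manifold, can be illustrated with a toy sketch. Everything here is an illustrative assumption: the linear `decode` function stands in for the paper's learned deep generative model, and the latent codes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned decoder that maps a 2-D latent manifold
# to flattened 8x8 "target patches". In SINT++ this role is played by
# a deep generative model trained on real target appearances.
W = rng.standard_normal((64, 2))

def decode(z):
    """Map a latent code z (shape (2,)) to a flattened 8x8 patch."""
    return np.tanh(W @ z)

def traverse_manifold(z_a, z_b, steps=5):
    """Generate diverse positive samples by walking the latent
    manifold between the codes of two observed target appearances."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [decode((1 - a) * z_a + a * z_b) for a in alphas]

# Hypothetical latent codes of two observed target appearances.
z_a = np.array([1.0, 0.0])
z_b = np.array([0.0, 1.0])

# Each intermediate sample is a new synthetic positive instance.
samples = traverse_manifold(z_a, z_b, steps=5)
```

The point of the sketch is only the sampling pattern: rather than relying on the positives observed in video frames, the tracker's training set is enlarged with decoded points that lie between (and around) observed appearances on the manifold.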
Learning Target-aware Attention for Robust Tracking with Conditional Adversarial Network
The proposed approach is efficient and effective, needs only a small amount of training data, and significantly improves the tracking-by-detection framework, providing improved robustness to fast motion, scale variation, and heavy occlusion.
Adversarial Learning-based Data Augmentation for Rotation-robust Human Tracking
A novel adversarial learning-based hard-positive generation method is presented and embedded into the multi-domain network (MDNet)-based tracking framework; it is designed to generate hard positive samples with more diversity, some degree of motion blur, and pose-direction changes.
Adversarial Feature Sampling Learning for Efficient Visual Tracking
This article develops a fast and accurate tracking method via adversarial feature sampling learning (AFSL), which draws samples in the feature space rather than from raw images to reduce computation.
Localization-Aware Meta Tracker Guided With Adversarial Features
This paper designs a novel intersection-over-union-guided method to effectively balance classification and localization accuracy, and creatively uses adversarial features during the offline training phase to improve the robustness of the classifier.
Hallucinated Adversarial Learning for Robust Visual Tracking
The hallucinated adversarial tracker (HAT) is proposed, which jointly optimizes AH with an online classifier (e.g., MDNet) in an end-to-end manner and achieves state-of-the-art performance.
Learning to Adversarially Blur Visual Object Tracking
This work explores the robustness of visual object trackers against motion blur from a new angle, i.e., the adversarial blur attack (ABA), and designs and trains a joint adversarial motion and accumulation predictive network (JAMANet) under the guidance of OP-ABA, which efficiently estimates the adversarial motion and accumulation parameters in one step.
SAT: Single-Shot Adversarial Tracker
A lightweight convolutional neural network-based generator is presented that fuses multilayer feature maps to accurately generate the target probability map (TPM) for tracking, together with an adversarial learning framework to train the generator more effectively.
Foreground Information Guidance for Siamese Visual Tracking
This paper modifies the Siamese tracker by enriching the positive pairs and taking further advantage of the foreground information, and proposes an improved feature-information fusion to update the template so that the tracker can adapt to drastic appearance changes.
Spatial-aware Online Adversarial Perturbations Against Visual Object Tracking
The online incremental attack (OIA) is proposed, which performs spatially and temporally sparse incremental perturbations online, making the adversarial attack less perceptible and much more efficient than basic attacks.
Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking
A novel instance-aware tracker is proposed to explicitly excavate the discriminability of feature representations; it improves the classical visual tracking pipeline with an instance-level classifier and introduces a contrastive learning mechanism to formulate the classification task.

References

Showing 1-10 of 51 references
A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection
This paper proposes to learn an adversarial network that generates examples with occlusions and deformations; the goal of the adversary is to generate examples that are difficult for the object detector to classify, and the original detector and the adversary are learned jointly.
Fully-Convolutional Siamese Networks for Object Tracking
A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network, trained end-to-end on the ILSVRC15 dataset for object detection in video, and achieves state-of-the-art performance on multiple benchmarks.
Learning Spatially Regularized Correlation Filters for Visual Tracking
The proposed SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples without corrupting the positive samples, and an optimization strategy based on the iterative Gauss-Seidel method is proposed for efficient online learning.
Siamese Instance Search for Tracking
The learned matching function turns out to be so powerful that a simple tracker built upon it, coined the Siamese INstance search Tracker (SINT), suffices to reach state-of-the-art performance.
Hierarchical Convolutional Features for Visual Tracking
This paper adaptively learns correlation filters on each convolutional layer to encode the target appearance and hierarchically infers the maximum response of each layer to locate targets.
The Visual Object Tracking VOT2016 Challenge Results
The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground-truth bounding-box annotation methodology and extending the evaluation system with a no-reset experiment.
The Visual Object Tracking VOT2013 Challenge Results
The evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset are presented, offering a more systematic comparison of trackers.
Learning to Track at 100 FPS with Deep Regression Networks
This work proposes a method for offline training of neural networks that can track novel objects at test time at 100 fps, significantly faster than previous trackers that use neural networks, which are typically very slow to run and impractical for real-time applications.
The Visual Object Tracking VOT 2016 Challenge Results
The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented.
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, Lucas Theis, +6 authors W. Shi
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, together with a perceptual loss function that consists of an adversarial loss and a content loss.