DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking

@article{Li2016DeepTrackLD,
  title={DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking},
  author={Hanxi Li and Yi Li and Fatih Murat Porikli},
  journal={IEEE Transactions on Image Processing},
  year={2016},
  volume={25},
  pages={1834-1848}
}
Despite their great success at feature learning in various computer vision tasks, deep neural networks are usually considered impractical for online visual tracking because they require long training times and large numbers of training samples. In this paper, we present an efficient and very robust tracking algorithm using a single convolutional neural network (CNN) for learning effective feature representations of the target object in a purely online manner. Our contributions are…
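
To make the abstract concrete, the sketch below shows one generic way an online-updated CNN tracker of this kind could be written in PyTorch. It is not the authors' DeepTrack architecture or training scheme; the layer sizes, learning rate, update schedule, and the stand-in patch tensors are all illustrative assumptions. It only demonstrates the core idea stated above: a small CNN whose discriminative feature representation is adapted online from target and background patches sampled in each frame.

```python
# Minimal sketch of online CNN adaptation for tracking (assumptions only;
# NOT the DeepTrack architecture). A tiny CNN classifies fixed-size patches
# as target vs. background and is fine-tuned on patches from each new frame.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallTrackerCNN(nn.Module):
    """Tiny CNN scoring 32x32 patches as target (class 1) vs. background (class 0)."""

    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        feat_dim = 32 * (patch_size // 4) * (patch_size // 4)
        self.classifier = nn.Linear(feat_dim, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def online_update(model, optimizer, pos_patches, neg_patches, steps=3):
    """One online adaptation round on patches from the current frame.

    pos_patches / neg_patches: float tensors of shape (N, 3, H, W), cropped
    around (positives) or away from (negatives) the estimated target box.
    """
    x = torch.cat([pos_patches, neg_patches], dim=0)
    y = torch.cat([torch.ones(len(pos_patches), dtype=torch.long),
                   torch.zeros(len(neg_patches), dtype=torch.long)])
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Stand-in data: in a real tracker these would be crops from the video frame.
    model = SmallTrackerCNN()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    pos = torch.randn(8, 3, 32, 32)   # patches overlapping the target
    neg = torch.randn(24, 3, 32, 32)  # background patches
    print("online loss:", online_update(model, optimizer, pos, neg))
```

In an actual tracker this update would run on crops sampled around the previous target estimate, and localization in the next frame would score candidate patches with the adapted network; the paper's specific sampling and update strategy is described in the full text.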

Key Quantitative Results

  • The performance gap between our method and the best reported result in the literature is 6% for the TP measure: our method achieves 83% accuracy, while the best state of the art is 77% (the TGPR method). For the TSR measure, our method is 8% better than existing methods: it gives 63% accuracy, while the best state of the art is 55% (the SCM method).
