Kernalised Multi-resolution Convnet for Visual Tracking

Abstract

Visual tracking is intrinsically a temporal problem. Discriminative Correlation Filters (DCF) have demonstrated excellent performance for high-speed generic visual object tracking. Building upon this seminal work, a plethora of recent improvements rely on convolutional neural networks (CNNs) pretrained on ImageNet as feature extractors for visual tracking. However, most of these works rely on ad hoc analysis to design the weights for different layers, using either boosting or hedging techniques to form an ensemble tracker. In this paper, we go beyond the conventional DCF framework and propose a Kernalised Multi-resolution Convnet (KMC) formulation that utilises hierarchical response maps to directly output the target movement. When the learnt network is deployed directly on the challenging, previously unseen UAV tracking dataset without any weight adjustment, the proposed model consistently achieves excellent tracking performance. Moreover, the transferred multi-resolution CNN can be integrated into an RNN temporal learning framework, thereby opening the door to end-to-end temporal deep learning (TDL) for visual tracking.
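The kernelised response-map computation that underlies the DCF framework the paper builds on can be sketched as follows. This is a minimal, single-channel illustration of the standard kernelised correlation filter (Gaussian kernel, ridge regression in the Fourier domain), not the paper's multi-resolution network; the function names and parameter values (`sigma`, `lam`) are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation between two same-sized 2-D patches.

    Computed efficiently via the FFT: the circular cross-correlation of
    x and z gives the inner products for all cyclic shifts at once.
    """
    # Cross-correlation of x with z for every cyclic shift.
    c = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    # Squared distance per shift, normalised by patch size; clip guards
    # against tiny negative values from floating-point round-off.
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c) / x.size
    return np.exp(-np.clip(d, 0.0, None) / (sigma ** 2))

def kcf_response(x_train, y, z_test, sigma=0.5, lam=1e-4):
    """Train a kernelised ridge regression on patch x_train with label
    map y, then return the dense response map for test patch z_test.

    The peak of the returned map indicates the target's displacement.
    """
    k_xx = gaussian_correlation(x_train, x_train, sigma)
    # Dual coefficients of kernel ridge regression, solved per frequency.
    alpha_f = np.fft.fft2(y) / (np.fft.fft2(k_xx) + lam)
    k_zx = gaussian_correlation(x_train, z_test, sigma)
    return np.fft.ifft2(np.fft.fft2(k_zx) * alpha_f).real
```

Because training and detection both reduce to element-wise operations in the Fourier domain, the whole cycle runs in O(n log n) per frame, which is what makes DCF-style trackers fast; on a patch cyclically shifted by (dy, dx), the response map of this sketch peaks at exactly that offset.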

DOI: 10.1109/CVPRW.2017.278


Cite this paper

@article{Wu2017KernalisedMC,
  title={Kernalised Multi-resolution Convnet for Visual Tracking},
  author={Di Wu and Wenbin Zou and Xia Li and Yong Zhao},
  journal={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2017},
  pages={2241-2248}
}