
Durham Research Online

A ranking based attention approach for visual tracking.

Peng, S. and Kamata, S. and Breckon, T.P. (2019) 'A ranking based attention approach for visual tracking.', in Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP). Piscataway, NJ: IEEE, pp. 3073-3077.


Correlation filters (CF) combined with pre-trained convolutional neural network (CNN) feature extractors have shown admirable accuracy and speed in visual object tracking. However, existing CNN-CF based methods still suffer from background interference and boundary effects, even when a cosine window is introduced. This paper proposes a ranking-guided attention approach which can reduce background interference using only forward propagation. The ranking stores several convolution kernels and scores them. Subsequently, a convolutional Long Short-Term Memory network (ConvLSTM) is used to update this ranking, making it more robust to appearance variation and occlusion. Moreover, a part-based multi-channel convolutional tracker is proposed to obtain the final response map. Our extensive experiments on established benchmark datasets show comparable performance against contemporary tracking approaches.
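The core idea described in the abstract (a stored, scored ranking of convolution kernels whose weighted responses form an attention map, with the scores updated over time) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and method names are hypothetical, and the simple score-blending update stands in for the ConvLSTM described in the paper.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Naive 2-D cross-correlation in 'valid' mode (no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

class KernelRanking:
    """Stores several convolution kernels together with a score per kernel.

    The attention response is a softmax-weighted sum of each kernel's
    correlation response; higher-scored kernels contribute more.
    """
    def __init__(self, kernels):
        self.kernels = list(kernels)
        self.scores = np.ones(len(self.kernels))

    def attention(self, feature_map):
        # Softmax over scores turns the ranking into attention weights.
        w = np.exp(self.scores - self.scores.max())
        w /= w.sum()
        return sum(wi * correlate2d_valid(feature_map, k)
                   for wi, k in zip(w, self.kernels))

    def update_scores(self, new_scores):
        # Stand-in for the paper's ConvLSTM ranking update:
        # blend old and new scores so the ranking adapts gradually.
        self.scores = 0.5 * self.scores + 0.5 * np.asarray(new_scores, dtype=float)
```

In the paper the score update is learned by a ConvLSTM rather than a fixed blend, which is what gives the ranking its robustness to appearance variation and occlusion.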

Item Type: Book chapter
Full text: (AM) Accepted Manuscript
Publisher Web site:
Publisher statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: 30 April 2019
Date deposited: 05 June 2019
Date of first online publication: September 2019
Date first made open access: 12 November 2019

