Abstract
Visual tracking using multiple features has proved to be a robust approach because the features can complement one another. Since different types of variation, such as illumination changes, occlusion, and pose changes, may occur in a video sequence, especially in long sequences, how to properly select and fuse appropriate features has become a key problem in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the advantages of sparse representation to dynamically remove unreliable features from the fusion process during tracking. To capture non-linear similarities between features, we extend the proposed method into a general kernelized framework that can perform feature fusion in various kernel spaces, yielding robust tracking performance. Both qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion-based trackers.
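To make the fusion idea concrete, below is a minimal sketch (not the authors' code) of feature-level fusion via joint sparse representation: each feature modality is coded over its own dictionary, and an L2,1 mixed norm couples the coefficient matrix row-wise so that the same dictionary atoms are selected, or jointly suppressed, across all features. The solver (proximal gradient/ISTA), the dictionary sizes, and the regularization weight `lam` are illustrative assumptions, and the kernelized extension described in the abstract is omitted.

```python
# Sketch of joint sparse representation for multi-feature fusion,
# solved with proximal gradient (ISTA). All parameters are illustrative.
import numpy as np

def joint_sparse_fusion(Y, D, lam=0.1, n_iters=200):
    """Y: list of K feature vectors y_k, shape (d_k,).
    D: list of K dictionaries D_k, shape (d_k, n), one per feature.
    Returns C of shape (n, K): column k holds the coefficients for feature k.
    The L2,1 penalty encourages whole rows of C to be zero, so unreliable
    atoms are removed jointly across all feature modalities."""
    K = len(Y)
    n = D[0].shape[1]
    C = np.zeros((n, K))
    # Step size from the largest Lipschitz constant among the smooth terms.
    L = max(np.linalg.norm(Dk, 2) ** 2 for Dk in D)
    for _ in range(n_iters):
        # Gradient step on each feature's reconstruction residual.
        G = np.column_stack([D[k].T @ (D[k] @ C[:, k] - Y[k]) for k in range(K)])
        Z = C - G / L
        # Row-wise group soft-thresholding: proximal operator of the L2,1 norm.
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - (lam / L) / np.maximum(row_norms, 1e-12), 0.0)
        C = shrink * Z
    return C

# Toy usage: two synthetic feature modalities sharing the same active atoms.
rng = np.random.default_rng(0)
D = [rng.standard_normal((32, 10)), rng.standard_normal((48, 10))]
truth = np.zeros(10)
truth[[2, 5]] = 1.0  # atoms active in both modalities
Y = [Dk @ truth + 0.01 * rng.standard_normal(Dk.shape[0]) for Dk in D]
C = joint_sparse_fusion(Y, D, lam=0.5)
print(np.linalg.norm(C, axis=1).round(2))  # rows 2 and 5 dominate
```

In a tracker, the candidate with the smallest joint reconstruction error would typically be selected as the target; features whose rows are shrunk to zero effectively drop out of the fusion, which is one plausible reading of the dynamic removal of unreliable features described above.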
| Original language | English |
| --- | --- |
| Article number | 7274352 |
| Pages (from-to) | 5826-5841 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 24 |
| Issue number | 12 |
| Early online date | 23 Sept 2015 |
| DOIs | |
| Publication status | Published - Dec 2015 |
Scopus Subject Areas
- Software
- Computer Graphics and Computer-Aided Design
User-Defined Keywords
- feature fusion
- joint sparse representation
- visual tracking