TY - JOUR
T1 - Joint Sparse Representation and Robust Feature-Level Fusion for Multi-Cue Visual Tracking
AU - Lan, Xiangyuan
AU - Ma, Andy J.
AU - Yuen, Pong C.
AU - Chellappa, Rama
N1 - Funding Information:
This work was supported by the General Research Fund through the Research Grants Council, Hong Kong, under Grant HKBU 212313.
Publisher Copyright:
© 2015 IEEE.
PY - 2015/12
Y1 - 2015/12
N2 - Visual tracking using multiple features has proved to be a robust approach because different features can complement each other. Since different types of variation, such as illumination, occlusion, and pose changes, may occur in a video sequence, especially a long one, how to properly select and fuse the appropriate features has become one of the key problems for this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the advantages of sparse representation to dynamically remove unreliable features from the fusion used for tracking. In order to capture the non-linear similarity of features, we extend the proposed method to a general kernelized framework that can perform feature fusion in various kernel spaces. As a result, robust tracking performance is obtained. Both qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion-based trackers.
AB - Visual tracking using multiple features has proved to be a robust approach because different features can complement each other. Since different types of variation, such as illumination, occlusion, and pose changes, may occur in a video sequence, especially a long one, how to properly select and fuse the appropriate features has become one of the key problems for this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the advantages of sparse representation to dynamically remove unreliable features from the fusion used for tracking. In order to capture the non-linear similarity of features, we extend the proposed method to a general kernelized framework that can perform feature fusion in various kernel spaces. As a result, robust tracking performance is obtained. Both qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion-based trackers.
KW - feature fusion
KW - joint sparse representation
KW - Visual tracking
UR - http://www.scopus.com/inward/record.url?scp=84959491321&partnerID=8YFLogxK
U2 - 10.1109/TIP.2015.2481325
DO - 10.1109/TIP.2015.2481325
M3 - Journal article
AN - SCOPUS:84959491321
SN - 1057-7149
VL - 24
SP - 5826
EP - 5841
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 12
M1 - 7274352
ER -