Learning Common and Feature-Specific Patterns: A Novel Multiple-Sparse-Representation-Based Tracker

Xiangyuan Lan, Shengping Zhang, Pong Chi Yuen*, Rama Chellappa

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

160 Citations (Scopus)

Abstract

The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations, while each feature should also retain feature-specific representation patterns that reflect its complementary role in appearance modeling. Unlike existing multi-feature sparse trackers, which consider only the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking that jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning method to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.
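The decomposition described in the abstract — a shared sparse code plus a feature-specific sparse code per feature — can be sketched as a joint sparse-coding problem. The following is a minimal illustrative solver, not the paper's actual algorithm: it assumes per-feature dictionaries `D_k` and observations `y_k`, models each coefficient vector as `w + v_k` (common plus specific), penalizes both parts with an L1 norm, and optimizes with simple ISTA-style soft-thresholding steps. All names, the choice of L1 penalties, and the solver itself are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_sparse_decompose(Ds, ys, lam_common=0.1, lam_specific=0.1,
                           n_iters=500, step=None):
    """Illustrative sketch (NOT the paper's method):
        min_{w, {v_k}}  sum_k 0.5 * ||y_k - D_k (w + v_k)||^2
                        + lam_common * ||w||_1
                        + lam_specific * sum_k ||v_k||_1
    where w captures the common sparsity pattern shared across features
    and each v_k captures a feature-specific pattern.
    """
    K = len(Ds)              # number of features
    n = Ds[0].shape[1]       # dictionary size (shared across features)
    if step is None:
        # Conservative step size from the spectral norms of the dictionaries.
        step = 1.0 / sum(np.linalg.norm(D, 2) ** 2 for D in Ds)
    w = np.zeros(n)
    vs = [np.zeros(n) for _ in range(K)]
    for _ in range(n_iters):
        # Gradient step on the common code, summed over all features.
        grad_w = sum(Ds[k].T @ (Ds[k] @ (w + vs[k]) - ys[k]) for k in range(K))
        w = soft_threshold(w - step * grad_w, step * lam_common)
        # Block update of each feature-specific code with a fresh residual.
        for k in range(K):
            r = Ds[k] @ (w + vs[k]) - ys[k]
            vs[k] = soft_threshold(vs[k] - step * (Ds[k].T @ r),
                                   step * lam_specific)
    return w, vs
```

With small penalties, observations generated from a single shared sparse code are reconstructed almost entirely through `w`, while noise or feature-specific structure is absorbed by the `v_k`; the paper's actual formulation additionally couples the codes through online multiple metric learning, which this sketch omits.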

Original language: English
Article number: 8119524
Pages (from-to): 2022-2037
Number of pages: 16
Journal: IEEE Transactions on Image Processing
Volume: 27
Issue number: 4
DOIs
Publication status: Published - Apr 2018

Scopus Subject Areas

  • Software
  • Computer Graphics and Computer-Aided Design

User-Defined Keywords

  • feature fusion
  • metric learning
  • sparse representation
  • visual tracking
