Robust visual tracking using dynamic feature weighting based on multiple dictionary learning

Renfei Liu, Xiangyuan Lan, Pong Chi Yuen, G. C. Feng

Research output: Chapter in book/report/conference proceeding › Conference contribution › peer-review

12 Citations (Scopus)

Abstract

Using multiple features in appearance modeling has been shown to be effective for visual tracking. In this paper, we dynamically measure the importance of different features and propose a robust tracker based on the weighted features. In this way, the dictionaries are improved in both reconstructive and discriminative terms. We extract multiple features of the target and obtain multiple sparse representations, which play an essential role in classification. After learning an independent dictionary for each feature, we dynamically assign a weight to each feature and select the best candidate by a weighted joint decision measure. Experiments show that our method outperforms several recently proposed trackers.
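The abstract does not spell out the exact weighting scheme, so the following is only a minimal numpy sketch of the general idea: sparse-code each feature over its own learned dictionary, derive a per-feature weight from the reconstruction error (a softmax here, which is an assumption), and rank candidates by a weighted joint score. All function names, the ISTA sparse coder, and the `beta`/`lam` parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Sparse-code x over dictionary D with ISTA (proximal gradient on the lasso)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of 0.5*||x - D a||^2
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def weighted_joint_score(dicts, feats, beta=5.0):
    """Score one candidate: per-feature reconstruction errors -> softmax weights
    -> weighted combination of per-feature likelihoods (illustrative choice)."""
    errors = np.array([np.sum((x - D @ ista_sparse_code(D, x)) ** 2)
                       for D, x in zip(dicts, feats)])
    w = np.exp(-beta * errors)             # features that reconstruct well weigh more
    w /= w.sum()
    return float(np.sum(w * np.exp(-errors)))

def select_best(dicts, candidate_feats):
    """Pick the candidate with the highest weighted joint decision measure."""
    scores = [weighted_joint_score(dicts, feats) for feats in candidate_feats]
    return int(np.argmax(scores))
```

With identity dictionaries, a candidate whose features are sparse reconstructs with low error and is preferred over a dense one, which is the behavior the joint measure is meant to capture.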

Original language: English
Title of host publication: 2016 24th European Signal Processing Conference, EUSIPCO 2016
Publisher: European Signal Processing Conference, EUSIPCO
Pages: 2166-2170
Number of pages: 5
ISBN (Electronic): 9780992862657
DOIs
Publication status: Published - 28 Nov 2016
Event: 24th European Signal Processing Conference, EUSIPCO 2016 - Budapest, Hungary
Duration: 28 Aug 2016 - 2 Sep 2016

Publication series

Name: European Signal Processing Conference
Volume: 2016-November
ISSN (Print): 2219-5491

Conference

Conference: 24th European Signal Processing Conference, EUSIPCO 2016
Country/Territory: Hungary
City: Budapest
Period: 28/08/16 - 02/09/16

Scopus Subject Areas

  • Signal Processing
  • Electrical and Electronic Engineering

User-Defined Keywords

  • Dictionary learning
  • Feature weighting
  • Sparse coding
  • Visual tracking
