Learning modality-consistency feature templates: A robust RGB-infrared tracking system

Xiangyuan Lan, Mang Ye, Rui Shao, Bineng Zhong, Pong Chi Yuen*, Huiyu Zhou

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

103 Citations (Scopus)
61 Downloads (Pure)


With a large number of video surveillance systems installed to meet industrial security requirements, object tracking, which aims to locate objects of interest in videos, has become an important task. Although numerous tracking algorithms for RGB videos have been developed over the past decade, their performance and robustness may degrade dramatically when the information from the RGB video is unreliable (e.g., under poor illumination or at very low resolution). To address this issue, this paper presents a new tracking system that combines information from the RGB and infrared modalities for object tracking. The proposed tracking system is built on our proposed machine learning model. In particular, under the proposed modality-consistency constraint, the learning model alleviates the modality-discrepancy issue in both representation patterns and discriminability, and generates discriminative feature templates for collaborative representation and discrimination across heterogeneous modalities. Experiments on a variety of challenging RGB-infrared videos demonstrate the effectiveness of the proposed algorithm.
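To make the idea of a modality-consistency constraint concrete, the following is a minimal sketch (not the authors' actual method): each modality's feature is coded over its own template set, and a coupling penalty encourages the RGB and infrared representation coefficients to agree. All names, dimensions, and the specific ridge-style objective here are illustrative assumptions; the paper's model also enforces consistency in discriminability, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 10  # hypothetical feature dimension and number of templates
D_rgb = rng.standard_normal((d, k))  # RGB feature templates (illustrative)
D_ir = rng.standard_normal((d, k))   # infrared feature templates (illustrative)
x_rgb = rng.standard_normal(d)       # RGB feature of a candidate region
x_ir = rng.standard_normal(d)        # infrared feature of the same region

def consistent_codes(D_r, D_i, x_r, x_i, lam=10.0, mu=0.1):
    """Jointly solve two ridge regressions coupled by a consistency
    penalty lam * ||w_r - w_i||^2, via one closed-form linear system:

        min ||x_r - D_r w_r||^2 + ||x_i - D_i w_i||^2
            + lam * ||w_r - w_i||^2 + mu * (||w_r||^2 + ||w_i||^2)
    """
    k = D_r.shape[1]
    I = np.eye(k)
    # Stacked normal equations; the -lam*I off-diagonal blocks couple
    # the two modalities' coefficients.
    A = np.block([[D_r.T @ D_r + (lam + mu) * I, -lam * I],
                  [-lam * I, D_i.T @ D_i + (lam + mu) * I]])
    b = np.concatenate([D_r.T @ x_r, D_i.T @ x_i])
    w = np.linalg.solve(A, b)
    return w[:k], w[k:]

# Stronger coupling should pull the two modality codes closer together.
w_r, w_i = consistent_codes(D_rgb, D_ir, x_rgb, x_ir, lam=100.0)
w_r0, w_i0 = consistent_codes(D_rgb, D_ir, x_rgb, x_ir, lam=0.0)
gap_coupled = np.linalg.norm(w_r - w_i)
gap_uncoupled = np.linalg.norm(w_r0 - w_i0)
```

With `lam=0` the two modalities are coded independently; raising `lam` shrinks the gap between their coefficient vectors, which is the sense in which a consistency constraint can mitigate modality discrepancy.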

Original language: English
Pages (from-to): 9887-9897
Number of pages: 11
Journal: IEEE Transactions on Industrial Electronics
Issue number: 12
Publication status: Published - Dec 2019

Scopus Subject Areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

User-Defined Keywords

  • Multimodal sensor fusion
  • tracking system
  • video surveillance system

