TY - JOUR
T1 - Bi-Directional Center-Constrained Top-Ranking for Visible Thermal Person Re-Identification
AU - Ye, Mang
AU - Lan, Xiangyuan
AU - Wang, Zheng
AU - Yuen, Pong Chi
N1 - Funding Information:
Manuscript received December 20, 2018; revised May 29, 2019; accepted May 29, 2019. Date of publication June 6, 2019; date of current version September 16, 2019. This work was supported in part by the Hong Kong Research Grant Council General Research Fund (RGC/HKBU 12200518). The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Clinton Fookes. (Corresponding author: Pong C. Yuen.) M. Ye, X. Lan, and P. C. Yuen are with the Department of Computer Science, Hong Kong Baptist University, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]).
PY - 2020/1
Y1 - 2020/1
N2 - Visible thermal person re-identification (VT-REID) is the task of matching person images captured by thermal and visible cameras, an extremely important problem in night-time surveillance applications. Existing cross-modality recognition works mainly focus on learning sharable feature representations to handle the cross-modality discrepancies. However, apart from the cross-modality discrepancy caused by different camera spectra, VT-REID also suffers from large cross-modality and intra-modality variations caused by different camera environments, human poses, and so on. In this paper, we propose a dual-path network with a novel bi-directional dual-constrained top-ranking (BDTR) loss to learn discriminative feature representations. It features two aspects: 1) end-to-end learning without an extra metric-learning step and 2) a dual constraint that simultaneously handles the cross-modality and intra-modality variations to ensure feature discriminability. Meanwhile, a bi-directional center-constrained top-ranking loss (eBDTR) is proposed to incorporate the previous two constraints into a single formula, preserving the properties to handle both cross-modality and intra-modality variations. Extensive experiments on two cross-modality re-ID datasets demonstrate the superiority of the proposed method compared with the state of the art.
AB - Visible thermal person re-identification (VT-REID) is the task of matching person images captured by thermal and visible cameras, an extremely important problem in night-time surveillance applications. Existing cross-modality recognition works mainly focus on learning sharable feature representations to handle the cross-modality discrepancies. However, apart from the cross-modality discrepancy caused by different camera spectra, VT-REID also suffers from large cross-modality and intra-modality variations caused by different camera environments, human poses, and so on. In this paper, we propose a dual-path network with a novel bi-directional dual-constrained top-ranking (BDTR) loss to learn discriminative feature representations. It features two aspects: 1) end-to-end learning without an extra metric-learning step and 2) a dual constraint that simultaneously handles the cross-modality and intra-modality variations to ensure feature discriminability. Meanwhile, a bi-directional center-constrained top-ranking loss (eBDTR) is proposed to incorporate the previous two constraints into a single formula, preserving the properties to handle both cross-modality and intra-modality variations. Extensive experiments on two cross-modality re-ID datasets demonstrate the superiority of the proposed method compared with the state of the art.
KW - cross-modality
KW - person re-identification (Re-ID)
KW - top-ranking
KW - visible thermal (VT)
UR - http://www.scopus.com/inward/record.url?scp=85077154443&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2019.2921454
DO - 10.1109/TIFS.2019.2921454
M3 - Journal article
AN - SCOPUS:85077154443
SN - 1556-6013
VL - 15
SP - 407
EP - 419
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -