TY - CPAPER
T1 - Modality-aware collaborative learning for visible thermal person re-identification
AU - Ye, Mang
AU - Lan, Xiangyuan
AU - Leng, Qingming
N1 - Funding Information:
This work is partially supported by a Hong Kong Baptist University Tier 1 Grant and the National Natural Science Foundation of China (61562048).
PY - 2019/10/15
Y1 - 2019/10/15
AB - Visible thermal person re-identification (VT-ReID) is a cross-modality pedestrian retrieval problem that automatically matches persons between day-time visible images and night-time thermal images. Despite extensive progress in single-modality ReID, cross-modality pedestrian retrieval has received limited attention due to the challenges of modality discrepancy and large intra-class variations across cameras. Existing cross-modality ReID methods usually address this problem by learning cross-modality feature representations with a modality-sharable classifier. However, this learning strategy may discard discriminative information specific to each modality. In this paper, we propose a novel modality-aware collaborative (MAC) learning method built on a two-stream network for VT-ReID, which handles the modality discrepancy at both the feature level and the classifier level. At the feature level, the modality discrepancy is handled by a two-stream network with modality-specific parameters. At the classifier level, two separate modality-specific identity classifiers, which share the same network architecture but have different parameters, capture modality-specific information for the two modalities. In addition, we introduce a collaborative learning scheme that regularizes the modality-sharable and modality-specific identity classifiers by exploiting the relationships among them. Extensive experiments on two cross-modality person re-identification datasets demonstrate the superiority of the proposed method, which achieves much better performance than the state of the art.
KW - Collaborative Learning
KW - Cross-modality
KW - Pedestrian Retrieval
KW - Person Re-Identification
UR - http://www.scopus.com/inward/record.url?scp=85074831161&partnerID=8YFLogxK
U2 - 10.1145/3343031.3351043
DO - 10.1145/3343031.3351043
M3 - Conference contribution
AN - SCOPUS:85074831161
T3 - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
SP - 347
EP - 355
BT - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
PB - Association for Computing Machinery (ACM)
T2 - 27th ACM International Conference on Multimedia, MM 2019
Y2 - 21 October 2019 through 25 October 2019
ER -