TY - JOUR
T1 - Double-Domain Imaging and Adaption for Person Re-Identification
AU - Zhou, Shuren
AU - Luo, Peng
AU - Jain, Deepak Kumar
AU - Lan, Xiangyuan
AU - Zhang, Yudong
N1 - Funding Information:
This work was supported in part by the Scientific Research Fund of Hunan Provincial Education Department of China under Project 17A007, in part by the Teaching Reform and Research Project of Hunan Province of China under Project JG1615, in part by the Key Laboratory of Intelligent Air-Ground Cooperative Control for Universities in Chongqing and the Key Laboratory of Industrial IoT and Networked Control, Ministry of Education, College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China, and in part by Hong Kong Baptist University Tier 1 Grant.
PY - 2019/7/24
Y1 - 2019/7/24
N2 - Person re-identification (re-ID) performance has been significantly boosted in recent works, but a re-ID model trained on one dataset usually cannot work effectively on another. To address this problem, we propose a novel framework, the double-domain translation generative adversarial network (DTGAN), which translates images between two domains so that a model trained on one domain generalizes well to the other. Our approach consists of two steps. First, the source images are translated in an unsupervised manner; the translated images retain the style of the target images and the ID labels of the source domain. Second, the translated images are used as training data for supervised feature learning. In addition, to moderate the influence of label noise, we employ label smoothing regularization (LSR). In our experiments, we observe that the images generated by the DTGAN are of high quality and well suited for domain adaption. Moreover, the re-ID accuracy of the DTGAN is competitive with state-of-the-art methods on Market-1501 and DukeMTMC-reID.
AB - Person re-identification (re-ID) performance has been significantly boosted in recent works, but a re-ID model trained on one dataset usually cannot work effectively on another. To address this problem, we propose a novel framework, the double-domain translation generative adversarial network (DTGAN), which translates images between two domains so that a model trained on one domain generalizes well to the other. Our approach consists of two steps. First, the source images are translated in an unsupervised manner; the translated images retain the style of the target images and the ID labels of the source domain. Second, the translated images are used as training data for supervised feature learning. In addition, to moderate the influence of label noise, we employ label smoothing regularization (LSR). In our experiments, we observe that the images generated by the DTGAN are of high quality and well suited for domain adaption. Moreover, the re-ID accuracy of the DTGAN is competitive with state-of-the-art methods on Market-1501 and DukeMTMC-reID.
KW - deep learning
KW - domain adaption
KW - generative adversarial network
KW - person re-identification
UR - http://www.scopus.com/inward/record.url?scp=85079347876&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2019.2930865
DO - 10.1109/ACCESS.2019.2930865
M3 - Journal article
AN - SCOPUS:85079347876
SN - 2169-3536
VL - 7
SP - 103336
EP - 103345
JO - IEEE Access
JF - IEEE Access
M1 - 8771160
ER -