Double-Domain Imaging and Adaption for Person Re-Identification

Shuren Zhou*, Peng Luo, Deepak Kumar Jain, Xiangyuan LAN, Yudong Zhang

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Peer-review

26 Citations (Scopus)


Person re-identification (re-ID) performance has been significantly boosted in recent works, but a re-ID model trained on one dataset usually cannot work effectively on another. To address this problem, we propose a novel framework, the double-domain translation generative adversarial network (DTGAN), which translates images between two domains and allows a model trained on one domain to generalize well to another. Our approach consists of two steps. First, the source images are translated in an unsupervised manner, such that the translated images retain the style of the target images and the ID labels of the source domain. Second, the translated images are used as training data for supervised feature learning. Besides, to moderate the influence of noise, we employ the strategy of label smoothing regularization (LSR). In our experiments, we observe that the images generated by the DTGAN are of high quality and well suited for domain adaption. In addition, the re-ID accuracy of the DTGAN is competitive with state-of-the-art methods on Market-1501 and DukeMTMC-reID.
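The label smoothing regularization mentioned in the abstract replaces the hard one-hot identity label with a softened distribution, so that noisy labels on translated images are penalized less severely. A minimal sketch of an LSR cross-entropy loss is shown below; the function name, the smoothing parameter `epsilon = 0.1`, and the pure-Python formulation are illustrative assumptions, not the paper's exact implementation.

```python
import math

def lsr_cross_entropy(logits, target, epsilon=0.1):
    """Cross-entropy with label smoothing regularization (LSR).

    The one-hot target is softened: the true class receives
    (1 - epsilon) + epsilon / K probability mass and every other
    class receives epsilon / K, where K is the number of identities.
    NOTE: epsilon=0.1 is a common default, not a value from the paper.
    """
    k = len(logits)
    # Numerically stable log-softmax over the K class logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    log_probs = [math.log(e / s) for e in exps]
    # Smoothed target distribution and the resulting loss
    loss = 0.0
    for i, lp in enumerate(log_probs):
        q = (1.0 - epsilon) * (1.0 if i == target else 0.0) + epsilon / k
        loss += -q * lp
    return loss
```

With `epsilon = 0`, this reduces to the standard cross-entropy; increasing `epsilon` raises the loss on over-confident predictions, which is what moderates the effect of noisy labels on translated images.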

Original language: English
Article number: 8771160
Pages (from-to): 103336-103345
Number of pages: 10
Journal: IEEE Access
Publication status: Published - 24 Jul 2019

Scopus Subject Areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

User-Defined Keywords

  • deep learning
  • domain adaption
  • generative adversarial network
  • person re-identification

