Unsupervised Domain Adaptation in the Wild via Disentangling Representation Learning

Haoliang Li*, Renjie Wan, Shiqi Wang, Alex C. Kot

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

22 Citations (Scopus)

Abstract

Most recently proposed unsupervised domain adaptation algorithms attempt to learn domain-invariant features by confusing a domain classifier through adversarial training. In this paper, we argue that this may not be an optimal solution in the real-world setting (a.k.a. in the wild), as the difference in label information between domains has been largely ignored. Because labeled instances are not available in the target domain in unsupervised domain adaptation tasks, it is difficult to explicitly capture the label difference between domains. To address this issue, we propose to learn a disentangled latent representation based on implicit autoencoders. In particular, the latent representation is disentangled into a global code and a local code. The global code captures category information via an encoder with a prior, and the local code, which is transferable across domains, captures the "style"-related information via an implicit decoder. Experimental results on digit recognition, object recognition, and semantic segmentation demonstrate the effectiveness of our proposed method.
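To make the disentanglement idea in the abstract concrete, the sketch below shows a minimal autoencoder whose latent representation is split into a global (category-related) code and a local (style-related) code. This is a hypothetical illustration only, not the authors' implementation: the network sizes, the names (DisentanglingAutoencoder, to_global, to_local), and the plain reconstruction loss are assumptions, and the paper's prior on the global code, implicit decoder, and adversarial objectives are omitted.

```python
# Hypothetical sketch (not the authors' code): an autoencoder whose latent
# representation is split into a "global" code (intended for category
# information) and a "local" code (intended for style information).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingAutoencoder(nn.Module):
    def __init__(self, in_dim=784, global_dim=10, local_dim=16):
        super().__init__()
        # Shared encoder trunk
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Two heads: global (category-related) and local (style-related) codes
        self.to_global = nn.Linear(128, global_dim)
        self.to_local = nn.Linear(128, local_dim)
        # Decoder reconstructs the input from the concatenated codes
        self.decoder = nn.Sequential(
            nn.Linear(global_dim + local_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        z_global = self.to_global(h)   # category-related code
        z_local = self.to_local(h)     # style-related code
        x_hat = self.decoder(torch.cat([z_global, z_local], dim=1))
        return x_hat, z_global, z_local


# Toy usage with a reconstruction loss only; the full method additionally
# constrains the global code with a prior and trains adversarially.
model = DisentanglingAutoencoder()
x = torch.randn(8, 784)
x_hat, z_g, z_l = model(x)
loss = F.mse_loss(x_hat, x)
loss.backward()
```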

Original language: English
Pages (from-to): 267-283
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 129
Issue number: 2
Early online date: 11 Aug 2020
DOIs
Publication status: Published - Feb 2021

Scopus Subject Areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

User-Defined Keywords

  • Cross-domain
  • In the wild
  • Recognition
  • Segmentation
