Unsupervised visual domain adaptation aims to train a classifier that works well on a target domain given labelled source samples and unlabelled target samples. The key issue in unsupervised visual domain adaptation is how to align features between the source and target domains. Inspired by adversarial learning in generative adversarial networks, this study proposes a novel adversarial auto-encoder for unsupervised deep domain adaptation. The method combines an auto-encoder with adversarial learning, so that domain similarity and the reconstruction signal from the decoder can be exploited to facilitate adversarial domain adaptation in the encoder. Extensive experiments on various visual recognition tasks show that the proposed method performs favourably against competitive state-of-the-art methods.
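The abstract describes a training objective that mixes three signals: supervised classification on labelled source data, reconstruction from the decoder on both domains, and an adversarial domain term on the encoder's features. The following is a minimal NumPy sketch of how such a combined objective might be assembled; the function names, the trade-off weights `lam_rec` and `lam_adv`, and the stand-in network outputs are illustrative assumptions, not the paper's actual architecture or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_cross_entropy(logits, labels):
    # supervised classification loss on labelled source samples
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def reconstruction_loss(x, x_hat):
    # decoder's mean-squared reconstruction error, encouraging the
    # encoder to retain content information from both domains
    return np.mean((x - x_hat) ** 2)

def domain_adversarial_loss(domain_logits, domain_labels):
    # binary cross-entropy of a domain discriminator (source=0, target=1);
    # the encoder is trained against this loss to confuse the discriminator
    p = 1.0 / (1.0 + np.exp(-domain_logits))
    return -np.mean(domain_labels * np.log(p)
                    + (1 - domain_labels) * np.log(1 - p))

# toy batch: 4 labelled source samples and 4 unlabelled target samples
x_src = rng.normal(size=(4, 8))
y_src = rng.integers(0, 3, size=4)
x_tgt = rng.normal(size=(4, 8))

# stand-ins for network outputs (in practice produced by the encoder,
# decoder, classifier head, and domain discriminator)
cls_logits = rng.normal(size=(4, 3))
x_src_hat = x_src + 0.1 * rng.normal(size=x_src.shape)
x_tgt_hat = x_tgt + 0.1 * rng.normal(size=x_tgt.shape)
dom_logits = rng.normal(size=8)
dom_labels = np.concatenate([np.zeros(4), np.ones(4)])

lam_rec, lam_adv = 1.0, 0.1  # illustrative trade-off weights
total = (softmax_cross_entropy(cls_logits, y_src)
         + lam_rec * (reconstruction_loss(x_src, x_src_hat)
                      + reconstruction_loss(x_tgt, x_tgt_hat))
         + lam_adv * domain_adversarial_loss(dom_logits, dom_labels))
print(total)
```

In a full implementation the encoder would minimise the classification and reconstruction terms while maximising the discriminator's confusion (for example via a gradient reversal layer), which is the adversarial interplay the abstract refers to.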