TY - JOUR
T1 - Domain Adaptive Ensemble Learning
AU - Zhou, Kaiyang
AU - Yang, Yongxin
AU - Qiao, Yu
AU - Xiang, Tao
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/9/17
Y1 - 2021/9/17
N2 - The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem; otherwise, it is a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain but a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. To deal with unlabeled target data under the UDA setting, where no real expert exists, DAEL uses pseudo labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins.
AB - The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem; otherwise, it is a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain but a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. To deal with unlabeled target data under the UDA setting, where no real expert exists, DAEL uses pseudo labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins.
KW - collaborative ensemble learning
KW - domain adaptation
KW - domain generalization
UR - http://www.scopus.com/inward/record.url?scp=85115122600&partnerID=8YFLogxK
UR - https://ieeexplore.ieee.org/document/9540778/authors#authors
U2 - 10.1109/TIP.2021.3112012
DO - 10.1109/TIP.2021.3112012
M3 - Journal article
C2 - 34534081
AN - SCOPUS:85115122600
SN - 1057-7149
VL - 30
SP - 8008
EP - 8018
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -