TY - JOUR
T1 - Contrastive Learning Assisted-Alignment for Partial Domain Adaptation
AU - Yang, Cuie
AU - Cheung, Yiu Ming
AU - Ding, Jinliang
AU - Tan, Kay Chen
AU - Xue, Bing
AU - Zhang, Mengjie
N1 - Funding information:
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61672444, Grant 61988101, Grant 61876162, and Grant 62161160338; in part by the NSFC/Research Grants Council (RGC) Joint Research Scheme under Grant N_HKBU214/21; in part by the General Research Fund of RGC under Grant 12201321; in part by Hong Kong Baptist University (HKBU) under Grant RC-FNRA-IG/18-19/SCI/03 and Grant RC-IRCMs/18-19/SCI/01; in part by the Innovation and Technology Fund of the Innovation and Technology Commission of the Government, Hong Kong, under Project ITS/339/18; in part by the National Key Research and Development Program of China under Grant 2018YFB1701104; and in part by the Science and Technology Program of Liaoning Province under Grant 2020JH2/10500001 and Grant 2020JH1/10100008.
Publisher copyright:
© 2022 IEEE.
PY - 2023/10
Y1 - 2023/10
N2 - This work addresses unsupervised partial domain adaptation (PDA), in which the classes in the target domain are a subset of those in the source domain. The key challenges of PDA are how to leverage source samples in the shared classes to promote positive transfer and how to filter out irrelevant source samples to mitigate negative transfer. Existing PDA methods based on adversarial domain adaptation do not consider the loss of class-discriminative representation. To this end, this article proposes a contrastive learning-assisted alignment (CLA) approach for PDA that jointly aligns distributions across domains for better adaptation and reweights source instances to reduce the contribution of outliers. A contrastive learning-assisted conditional alignment (CLCA) strategy is presented for distribution alignment. CLCA first exploits contrastive losses to discover the class-discriminative information in both domains. It then employs a contrastive loss to match the clusters across the two domains based on adversarial domain learning. In this respect, CLCA attempts to reduce the domain discrepancy by matching both the class-conditional and marginal distributions. Moreover, a new reweighting scheme, which exploits information from both the source and the target domains, is developed to improve the quality of weight estimation. Empirical results on several benchmark datasets demonstrate that the proposed CLA outperforms existing state-of-the-art PDA methods.
AB - This work addresses unsupervised partial domain adaptation (PDA), in which the classes in the target domain are a subset of those in the source domain. The key challenges of PDA are how to leverage source samples in the shared classes to promote positive transfer and how to filter out irrelevant source samples to mitigate negative transfer. Existing PDA methods based on adversarial domain adaptation do not consider the loss of class-discriminative representation. To this end, this article proposes a contrastive learning-assisted alignment (CLA) approach for PDA that jointly aligns distributions across domains for better adaptation and reweights source instances to reduce the contribution of outliers. A contrastive learning-assisted conditional alignment (CLCA) strategy is presented for distribution alignment. CLCA first exploits contrastive losses to discover the class-discriminative information in both domains. It then employs a contrastive loss to match the clusters across the two domains based on adversarial domain learning. In this respect, CLCA attempts to reduce the domain discrepancy by matching both the class-conditional and marginal distributions. Moreover, a new reweighting scheme, which exploits information from both the source and the target domains, is developed to improve the quality of weight estimation. Empirical results on several benchmark datasets demonstrate that the proposed CLA outperforms existing state-of-the-art PDA methods.
KW - Class-conditional alignment
KW - contrastive learning
KW - discriminative learning
KW - partial domain adaptation (PDA)
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85124750025&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3145034
DO - 10.1109/TNNLS.2022.3145034
M3 - Journal article
SN - 2162-237X
VL - 34
SP - 7621
EP - 7634
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
ER -