TY - JOUR
T1 - Transferable Feature Selection for Unsupervised Domain Adaptation
AU - Yan, Yuguang
AU - Wu, Hanrui
AU - Ye, Yuzhong
AU - Bi, Chaoyang
AU - Lu, Min
AU - Liu, Dapeng
AU - Wu, Qingyao
AU - Ng, Michael K.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
N2 - Domain adaptation aims at extracting knowledge from auxiliary source domains to assist the learning task in a target domain. In classification problems, since the distributions of the source and target domains are different, directly using source data to build a classifier for the target domain may hamper the classification performance on the target data. Fortunately, in many tasks, there can be some features that are transferable, i.e., features on which the source and target domains share similar properties. On the other hand, it is common that the source data contain noisy features that may degrade the learning performance in the target domain. This issue, however, has barely been studied in existing works. In this paper, we propose to find a feature subset that is transferable across the source and target domains, so that the domain discrepancy measured on the selected features can be reduced. Moreover, we seek to find the most discriminative features for classification. To achieve these goals, we formulate a new sparse learning model that jointly reduces the domain discrepancy and selects informative features for classification. We develop two optimization algorithms to address the derived learning problem. Extensive experiments on real-world data sets demonstrate the effectiveness of the proposed method.
AB - Domain adaptation aims at extracting knowledge from auxiliary source domains to assist the learning task in a target domain. In classification problems, since the distributions of the source and target domains are different, directly using source data to build a classifier for the target domain may hamper the classification performance on the target data. Fortunately, in many tasks, there can be some features that are transferable, i.e., features on which the source and target domains share similar properties. On the other hand, it is common that the source data contain noisy features that may degrade the learning performance in the target domain. This issue, however, has barely been studied in existing works. In this paper, we propose to find a feature subset that is transferable across the source and target domains, so that the domain discrepancy measured on the selected features can be reduced. Moreover, we seek to find the most discriminative features for classification. To achieve these goals, we formulate a new sparse learning model that jointly reduces the domain discrepancy and selects informative features for classification. We develop two optimization algorithms to address the derived learning problem. Extensive experiments on real-world data sets demonstrate the effectiveness of the proposed method.
KW - Domain adaptation
KW - feature selection
KW - sparse learning model
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85101746624&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2021.3060037
DO - 10.1109/TKDE.2021.3060037
M3 - Journal article
AN - SCOPUS:85101746624
SN - 1041-4347
VL - 34
SP - 5536
EP - 5551
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 11
ER -