TY - JOUR
T1 - Class-Wise Contrastive Prototype Learning for Semi-Supervised Classification Under Intersectional Class Mismatch
AU - Li, Mingyu
AU - Zhou, Tao
AU - Han, Bo
AU - Liu, Tongliang
AU - Liang, Xinkai
AU - Zhao, Jiajia
AU - Gong, Chen
N1 - This work was supported in part by the NSF of China under Grant 62336003, Grant 12371510, Grant 62172228, and Grant 62376235, in part by the NSF of Jiangsu Province under Grant BZ2021013, in part by the NSF for Distinguished Young Scholar of Jiangsu Province under Grant BK20220080, and in part by the Fundamental Research Funds for the Central Universities under Grant 30920032202 and Grant 30921013114.
Publisher Copyright:
© 2024 IEEE
PY - 2024/3/18
Y1 - 2024/3/18
N2 - Traditional Semi-Supervised Learning (SSL) classification methods focus on leveraging unlabeled data to improve model performance under the setting where the labeled set and the unlabeled set share the same classes. Nevertheless, this setting is often inconsistent with many real-world circumstances. In practice, the labeled set and the unlabeled set each often hold some individual classes, leading to an intersectional class-mismatch setting for SSL. Under this setting, existing SSL methods often suffer performance degradation attributed to these individual classes. To solve this problem, we propose a Class-wise Contrastive Prototype Learning (CCPL) framework, which can properly utilize the unlabeled data to improve SSL classification performance. Specifically, we employ a supervised prototype learning strategy and a class-wise contrastive separation strategy to construct a prototype for each known class. To reduce the influence of the individual classes in the unlabeled set (i.e., out-of-distribution classes), each unlabeled example is weighted reasonably based on the prototypes during classifier training, which helps to weaken the negative influence caused by out-of-distribution classes. To reduce the influence of the individual classes in the labeled set (i.e., private classes), we present a private assignment suppression strategy that suppresses the improper assignments of unlabeled examples to the private classes with the help of the prototypes. Experimental results on four benchmarks and one real-world dataset show that our CCPL has a clear advantage over fourteen representative SSL methods as well as two supervised learning methods under the intersectional class-mismatch setting.
AB - Traditional Semi-Supervised Learning (SSL) classification methods focus on leveraging unlabeled data to improve model performance under the setting where the labeled set and the unlabeled set share the same classes. Nevertheless, this setting is often inconsistent with many real-world circumstances. In practice, the labeled set and the unlabeled set each often hold some individual classes, leading to an intersectional class-mismatch setting for SSL. Under this setting, existing SSL methods often suffer performance degradation attributed to these individual classes. To solve this problem, we propose a Class-wise Contrastive Prototype Learning (CCPL) framework, which can properly utilize the unlabeled data to improve SSL classification performance. Specifically, we employ a supervised prototype learning strategy and a class-wise contrastive separation strategy to construct a prototype for each known class. To reduce the influence of the individual classes in the unlabeled set (i.e., out-of-distribution classes), each unlabeled example is weighted reasonably based on the prototypes during classifier training, which helps to weaken the negative influence caused by out-of-distribution classes. To reduce the influence of the individual classes in the labeled set (i.e., private classes), we present a private assignment suppression strategy that suppresses the improper assignments of unlabeled examples to the private classes with the help of the prototypes. Experimental results on four benchmarks and one real-world dataset show that our CCPL has a clear advantage over fourteen representative SSL methods as well as two supervised learning methods under the intersectional class-mismatch setting.
KW - Contrastive learning
KW - intersectional class mismatch
KW - private assignment suppression
KW - prototype learning
KW - semi-supervised learning
UR - https://ieeexplore.ieee.org/document/10472075/
U2 - 10.1109/TMM.2024.3377123
DO - 10.1109/TMM.2024.3377123
M3 - Journal article
SN - 1941-0077
VL - 26
SP - 8145
EP - 8156
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -