TY - JOUR
T1 - Exploiting counter-examples for active learning with partial labels
AU - Zhang, Fei
AU - Ye, Yunjie
AU - Feng, Lei
AU - Rao, Zhongwen
AU - Zhu, Jieming
AU - Kalander, Marcus
AU - Gong, Chen
AU - Hao, Jianye
AU - Han, Bo
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature.
PY - 2024/6
Y1 - 2024/6
N2 - This paper studies a new problem, active learning with partial labels (ALPL). In this setting, an oracle annotates the query samples with partial labels, relieving the oracle of the demanding accurate-labeling process. To address ALPL, we first build an intuitive baseline that can be seamlessly incorporated into existing AL frameworks. Though effective, this baseline is still susceptible to overfitting and falls short of selecting representative partial-label-based samples during the query process. Drawing inspiration from human inference in cognitive science, where accurate inferences can be explicitly derived from counter-examples (CEs), our objective is to leverage this human-like learning pattern to tackle overfitting while enhancing the process of selecting representative samples in ALPL. Specifically, we construct CEs by reversing the partial labels for each instance, and then we propose a simple but effective WorseNet to directly learn from this complementary pattern. By leveraging the distribution gap between WorseNet and the predictor, this adversarial evaluation scheme can enhance both the performance of the predictor itself and the sample selection process, allowing the predictor to capture more accurate patterns in the data. Experimental results on five real-world datasets and four benchmark datasets show that our proposed method achieves comprehensive improvements over ten representative AL frameworks, highlighting the superiority of WorseNet.
AB - This paper studies a new problem, active learning with partial labels (ALPL). In this setting, an oracle annotates the query samples with partial labels, relieving the oracle of the demanding accurate-labeling process. To address ALPL, we first build an intuitive baseline that can be seamlessly incorporated into existing AL frameworks. Though effective, this baseline is still susceptible to overfitting and falls short of selecting representative partial-label-based samples during the query process. Drawing inspiration from human inference in cognitive science, where accurate inferences can be explicitly derived from counter-examples (CEs), our objective is to leverage this human-like learning pattern to tackle overfitting while enhancing the process of selecting representative samples in ALPL. Specifically, we construct CEs by reversing the partial labels for each instance, and then we propose a simple but effective WorseNet to directly learn from this complementary pattern. By leveraging the distribution gap between WorseNet and the predictor, this adversarial evaluation scheme can enhance both the performance of the predictor itself and the sample selection process, allowing the predictor to capture more accurate patterns in the data. Experimental results on five real-world datasets and four benchmark datasets show that our proposed method achieves comprehensive improvements over ten representative AL frameworks, highlighting the superiority of WorseNet.
KW - Active learning
KW - Adversarial learning
KW - Classification
KW - Counter-examples
KW - Partial-label learning
KW - Weakly-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85181742022&partnerID=8YFLogxK
U2 - 10.1007/s10994-023-06485-9
DO - 10.1007/s10994-023-06485-9
M3 - Journal article
AN - SCOPUS:85181742022
SN - 0885-6125
VL - 113
SP - 3849
EP - 3868
JO - Machine Learning
JF - Machine Learning
IS - 6
ER -