TY - JOUR
T1 - Improve Knowledge Distillation via Label Revision and Data Selection
AU - Lan, Weichao
AU - Cheung, Yiu Ming
AU - Xu, Qing
AU - Liu, Buhua
AU - Hu, Zhikai
AU - Li, Mengke
AU - Chen, Zhenghua
N1 - This work was supported in part by the NSFC/Research Grants Council (RGC) Joint Research Scheme under Grant N HKBU214/21, in part by the General Research Fund of RGC under Grants 12201323 and 12202924, in part by the RGC Senior Research Fellow Scheme under Grant SRFS2324-2S02, in part by the Seed Funding for Collaborative Research Grants of Hong Kong Baptist University under Grant RC-SFCRG/23-24/R2/SCI/10, and in part by the National Key Laboratory of Radar Signal Processing under Grant JKW202403.
Publisher Copyright:
© 2025 IEEE.
PY - 2025/4/11
Y1 - 2025/4/11
N2 - Knowledge distillation (KD), which transfers knowledge from a large teacher model to a lightweight student model, has received great attention in deep model compression. In addition to the supervision of the ground truth, the vanilla KD method regards the teacher's predictions as soft labels to supervise the training of the student model. Building on vanilla KD, various approaches have been developed to further improve the performance of the student model. However, few of these methods consider the reliability of the supervision from the teacher model, and supervision from erroneous predictions may mislead the training of the student model. This paper therefore tackles the problem from two aspects: Label Revision, which rectifies the incorrect supervision, and Data Selection, which selects appropriate samples for distillation so as to reduce the impact of erroneous supervision. In the former, we rectify the teacher's inaccurate predictions using the ground truth. In the latter, we introduce a data selection technique to choose suitable training samples to be supervised by the teacher, thereby reducing the impact of incorrect predictions to some extent. Experimental results demonstrate the effectiveness of the proposed method, which can be further combined with other distillation approaches to enhance their performance.
AB - Knowledge distillation (KD), which transfers knowledge from a large teacher model to a lightweight student model, has received great attention in deep model compression. In addition to the supervision of the ground truth, the vanilla KD method regards the teacher's predictions as soft labels to supervise the training of the student model. Building on vanilla KD, various approaches have been developed to further improve the performance of the student model. However, few of these methods consider the reliability of the supervision from the teacher model, and supervision from erroneous predictions may mislead the training of the student model. This paper therefore tackles the problem from two aspects: Label Revision, which rectifies the incorrect supervision, and Data Selection, which selects appropriate samples for distillation so as to reduce the impact of erroneous supervision. In the former, we rectify the teacher's inaccurate predictions using the ground truth. In the latter, we introduce a data selection technique to choose suitable training samples to be supervised by the teacher, thereby reducing the impact of incorrect predictions to some extent. Experimental results demonstrate the effectiveness of the proposed method, which can be further combined with other distillation approaches to enhance their performance.
KW - Image Classification
KW - Knowledge Distillation
KW - Lightweight Model
UR - http://www.scopus.com/inward/record.url?scp=105002686863&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2025.3559881
DO - 10.1109/TCDS.2025.3559881
M3 - Journal article
AN - SCOPUS:105002686863
SN - 2379-8920
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
ER -