Reliable Adversarial Distillation with Unreliable Teachers

Jianing Zhu, Jiangchao Yao, Bo Han*, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

10 Citations (Scopus)

Abstract

In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that they also perform well on all the adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD), in which students partially, instead of fully, trust their teachers. Specifically, given a query consisting of natural data (ND) and the corresponding adversarial data (AD), IAD distinguishes between three cases: (a) if the teacher is good at AD, its SL is fully trusted; (b) if the teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student relies only on its own SL. Experiments demonstrate the effectiveness of IAD in improving upon teachers in terms of adversarial robustness.
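The three-case scheme above can be viewed as a soft weighting of the teacher's supervision. Below is a minimal PyTorch sketch of that idea: the teacher's predicted probability on the true class of the adversarial query serves as a trust score, which interpolates between the teacher's SL (case a) and the student's own SL (cases b and c). The function name, the trust measure, and the blending rule are illustrative assumptions for exposition, not the paper's exact IAD loss; the official implementation should be consulted for the precise formulation.

```python
# Illustrative sketch only: the trust measure and blending rule here are
# assumptions, not the exact IAD objective from the paper.
import torch
import torch.nn.functional as F

def partial_trust_distillation_loss(student_logits_adv, teacher_probs_adv,
                                    student_probs_nat, labels,
                                    alpha=1.0, temperature=1.0):
    # Trust score: teacher's probability on the true class for the adversarial
    # query. High trust approximates case (a); as trust drops, weight shifts to
    # the student's own soft labels, covering cases (b) and (c) smoothly.
    trust = teacher_probs_adv.gather(1, labels.unsqueeze(1)).squeeze(1) ** alpha
    w = trust.unsqueeze(1)  # shape (batch, 1) for broadcasting over classes

    # Blend the teacher's SLs with the student's own (detached) SLs on ND.
    soft_target = w * teacher_probs_adv + (1.0 - w) * student_probs_nat.detach()

    # Standard distillation term: KL between student predictions on AD and the
    # blended soft target.
    log_p = F.log_softmax(student_logits_adv / temperature, dim=1)
    return F.kl_div(log_p, soft_target, reduction="batchmean")
```

A hard three-way split (fully trust / partially trust / ignore the teacher) could be recovered from this sketch by thresholding the trust score instead of interpolating; the smooth version is shown only because it is simpler to state in a few lines.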
Original language: English
Title of host publication: Proceedings of the Tenth International Conference on Learning Representations, ICLR 2022
Publisher: International Conference on Learning Representations
Number of pages: 15
DOIs
Publication status: Published - 25 Apr 2022
Event: The Tenth International Conference on Learning Representations, ICLR 2022 - Virtual
Duration: 25 Apr 2022 - 29 Apr 2022
https://iclr.cc/Conferences/2022
https://openreview.net/group?id=ICLR.cc/2022/Conference

Conference

Conference: The Tenth International Conference on Learning Representations, ICLR 2022
Period: 25/04/22 - 29/04/22

Scopus Subject Areas

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
