Attacks which do not kill training make adversarial learning stronger

Jingfeng Zhang*, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

141 Citations (Scopus)

Abstract

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, this formulation is conservative or even pessimistic, so that it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training should employ confident adversarial data for updating the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing the most adversarial data that maximize the loss, we search for the least adversarial data (i.e., friendly adversarial data) that minimize the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by stopping search algorithms for the most adversarial data, such as PGD (projected gradient descent), early; we call this early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question negatively: adversarial robustness can indeed be achieved without compromising natural generalization.
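The early-stopped PGD search described in the abstract can be sketched compactly. The PyTorch snippet below is a minimal illustration, not the authors' released implementation: the function name, the hyperparameter defaults (epsilon, alpha, the step count), and the extra-step budget tau are assumptions made for the example. It runs standard L∞ PGD but freezes each example once it has been misclassified (with tau = 0, immediately upon crossing the decision boundary), so the search returns friendly adversarial data rather than the loss-maximizing point.

```python
# Minimal sketch of early-stopped PGD (assumed interface; inputs are images
# in [0, 1], and the caller is expected to put the model in eval mode).
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, epsilon=8/255, alpha=2/255,
                      num_steps=10, tau=0):
    """Search for friendly adversarial data: take PGD steps, but stop
    updating an example once it is misclassified (plus `tau` extra steps),
    instead of running all `num_steps` steps to maximize the loss."""
    x_adv = x.clone().detach()
    # Remaining PGD steps each example may take after it first becomes
    # misclassified; `tau = 0` means stop immediately at the crossing.
    budget = torch.full((x.size(0),), tau, device=x.device)
    for _ in range(num_steps):  # random start omitted for brevity
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        misclassified = logits.argmax(dim=1) != y
        # Spend budget only on already-misclassified examples; once an
        # example's budget goes negative it is frozen.
        budget = torch.where(misclassified, budget - 1, budget)
        active = budget >= 0
        if not active.any():
            break
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            step = alpha * grad.sign()
            step[~active] = 0.0  # frozen examples take no further steps
            x_adv = x_adv + step
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```

Training on the returned `x_adv` instead of fully maximized PGD data is the essence of FAT: correctly classified examples still receive the full attack, while misclassified ones are perturbed only just past the decision boundary.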

Original language: English
Title of host publication: Proceedings of the 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daumé III, Aarti Singh
Publisher: ML Research Press
Pages: 11214-11224
Number of pages: 11
ISBN (Electronic): 9781713821120
Publication status: Published - Jul 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 - 18 Jul 2020
https://proceedings.mlr.press/v119/

Publication series

Name: Proceedings of Machine Learning Research
Volume: 119
ISSN (Print): 2640-3498

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
Period: 13/07/20 - 18/07/20
Internet address: https://proceedings.mlr.press/v119/

Scopus Subject Areas

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
