Abstract
In adversarial training (AT), research has focused mainly on the objective and the optimizer, while the model itself has received less attention; as a result, the models in use are still the classic ones from standard training (ST). Classic network architectures (NAs) are generally worse than searched NAs in ST, and the same should hold in AT. In this paper, we argue that NA and AT cannot be handled independently: given a dataset, the NA that is optimal in ST is no longer optimal in AT. However, AT is itself time-consuming; directly searching NAs in AT over large search spaces is computationally infeasible. We therefore propose the diverse-structured network (DS-Net), which significantly reduces the size of the search space: instead of low-level operations, we consider only predefined atomic blocks, where an atomic block is a time-tested building block such as the residual block. Since there are only a few atomic blocks, each searched block of DS-Net can weight all atomic blocks rather than select the single best one, which is an essential trade-off between exploring diverse structures and exploiting the best structures. Empirical results demonstrate the advantage of DS-Net, i.e., of weighting the atomic blocks.
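To make the core mechanism concrete, the following is a minimal PyTorch sketch of weighting atomic blocks instead of selecting one. The block names (`ResidualBlock`, `PlainBlock`, `WeightedBlock`), the softmax mixing, and all hyperparameters here are illustrative assumptions, not the paper's actual DS-Net implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One candidate atomic block: a standard residual block."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        return F.relu(self.bn2(self.conv2(out)) + x)  # skip connection

class PlainBlock(nn.Module):
    """Another candidate atomic block: plain conv-BN-ReLU, no skip."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

class WeightedBlock(nn.Module):
    """A searched block that mixes the outputs of ALL atomic blocks
    with learnable weights, rather than picking the single best one."""
    def __init__(self, atomic_blocks):
        super().__init__()
        self.blocks = nn.ModuleList(atomic_blocks)
        # One logit per atomic block; softmax turns them into mixing weights.
        self.logits = nn.Parameter(torch.zeros(len(atomic_blocks)))

    def forward(self, x):
        weights = F.softmax(self.logits, dim=0)
        return sum(w * block(x) for w, block in zip(weights, self.blocks))

# Usage: one searched block mixing two atomic blocks of the same width.
block = WeightedBlock([ResidualBlock(64), PlainBlock(64)])
x = torch.randn(2, 64, 8, 8)
y = block(x)          # weighted sum of all atomic-block outputs
print(y.shape)        # torch.Size([2, 64, 8, 8])
```

Because the mixing weights are ordinary parameters, they can be trained jointly with the block weights under the AT objective; this avoids a discrete search over architectures, which is the source of the computational savings the abstract describes.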
Original language | English |
---|---|
Title of host publication | Proceedings of the 38th International Conference on Machine Learning (ICML 2021) |
Editors | Marina Meila, Tong Zhang |
Publisher | ML Research Press |
Pages | 2880-2891 |
Number of pages | 12 |
Publication status | Published - 18 Jul 2021 |
Event | 38th International Conference on Machine Learning, ICML 2021 - Virtual, 18 Jul 2021 → 24 Jul 2021 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 139 |
ISSN (Print) | 2640-3498 |
Conference
Conference | 38th International Conference on Machine Learning, ICML 2021 |
---|---|
Period | 18/07/21 → 24/07/21 |
Internet address | https://icml.cc/virtual/2021/index.html, https://icml.cc/Conferences/2021, https://proceedings.mlr.press/v139/ |