TY - JOUR
T1 - Slack Federated Adversarial Training
AU - Zhu, Jianing
AU - Han, Bo
AU - Yao, Jiangchao
AU - Yao, Quanming
AU - Liu, Tongliang
AU - Xu, Jianliang
PY - 2025/12/22
Y1 - 2025/12/22
N2 - Security and privacy concerns in real-world applications have led to the development of adversarially robust federated models. Previous works mainly target overcoming adaptability constraints regarding communication and computation costs. However, the straightforward combination of adversarial training and federated learning can lead to undesired robust-accuracy degradation at later training stages. We reveal that the cause of this phenomenon is that the generated adversarial data exacerbate the data heterogeneity among local clients, making the wrapped federated learning perform poorly. To deal with this problem, we introduce an α-slack mechanism to relax the original learning objective of federated adversarial training, and propose a novel framework called Slack Federated Adversarial Training (SFAT) to combat the intensified heterogeneity. By assigning client-wise slack during aggregation, SFAT realizes a weighted aggregation that alleviates the optimization bias induced by local adversarial generation. We further extend SFAT to a more general setting that permits clients trained by either standard or adversarial training within a unified framework, and propose SFAT* with a hierarchical aggregation scheme for this scenario. Theoretically, we analyze the convergence of our method to properly relax the learning objective. Experimentally, we verify the rationality and effectiveness of our methods on various benchmark and real-world datasets with different adversarial training and federated optimization methods.
AB - Security and privacy concerns in real-world applications have led to the development of adversarially robust federated models. Previous works mainly target overcoming adaptability constraints regarding communication and computation costs. However, the straightforward combination of adversarial training and federated learning can lead to undesired robust-accuracy degradation at later training stages. We reveal that the cause of this phenomenon is that the generated adversarial data exacerbate the data heterogeneity among local clients, making the wrapped federated learning perform poorly. To deal with this problem, we introduce an α-slack mechanism to relax the original learning objective of federated adversarial training, and propose a novel framework called Slack Federated Adversarial Training (SFAT) to combat the intensified heterogeneity. By assigning client-wise slack during aggregation, SFAT realizes a weighted aggregation that alleviates the optimization bias induced by local adversarial generation. We further extend SFAT to a more general setting that permits clients trained by either standard or adversarial training within a unified framework, and propose SFAT* with a hierarchical aggregation scheme for this scenario. Theoretically, we analyze the convergence of our method to properly relax the learning objective. Experimentally, we verify the rationality and effectiveness of our methods on various benchmark and real-world datasets with different adversarial training and federated optimization methods.
KW - Adversarial Robustness
KW - Exacerbated Heterogeneity
KW - Federated Learning
UR - https://www.scopus.com/pages/publications/105025914582
U2 - 10.1109/TPAMI.2025.3646649
DO - 10.1109/TPAMI.2025.3646649
M3 - Journal article
AN - SCOPUS:105025914582
SN - 0162-8828
SP - 1
EP - 18
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -