Federated Learning with Extremely Noisy Clients via Negative Distillation

Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu Ming Cheung, Hanzi Wang*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Federated learning (FL) has shown remarkable success in cooperatively training deep models, but it typically struggles with noisy labels. Advanced works propose to tackle label noise with a re-weighting strategy that relies on a strong assumption, i.e., mild label noise. However, this assumption may be violated in many real-world FL scenarios because of highly contaminated clients, resulting in extreme noise ratios, e.g., >90%. To tackle extremely noisy clients, we study the robustness of the re-weighting strategy, showing a pessimistic conclusion: minimizing the weight of clients trained over noisy data outperforms re-weighting strategies. To leverage models trained on noisy clients, we propose a novel approach, called negative distillation (FedNed). FedNed first identifies noisy clients and, rather than discarding them, employs them in a knowledge distillation manner. In particular, each client identified as noisy trains two models: one using its noisy labels and one using pseudo-labels obtained from the global model. The model trained on noisy labels serves as a ‘bad teacher’ in knowledge distillation, aiming to decrease the risk of providing incorrect information. Meanwhile, the model trained on pseudo-labels participates in model aggregation if the client is no longer identified as noisy. Consequently, through pseudo-labeling, FedNed gradually increases the trustworthiness of models trained on noisy clients, while leveraging all clients for model aggregation through negative distillation. To verify the efficacy of FedNed, we conduct extensive experiments under various settings, demonstrating that FedNed consistently outperforms baselines and achieves state-of-the-art performance.
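
The negative-distillation idea in the abstract can be sketched in code. Below is a hypothetical PyTorch rendering: it assumes a local objective that combines cross-entropy on the global model's pseudo-labels with a subtracted KL term that pushes the local student away from the ‘bad teacher’ trained on noisy labels. The function name negative_distillation_loss, the hyper-parameters temperature and lambda_nd, and the exact form of the loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def negative_distillation_loss(student_logits: torch.Tensor,
                               bad_teacher_logits: torch.Tensor,
                               pseudo_labels: torch.Tensor,
                               temperature: float = 2.0,
                               lambda_nd: float = 0.1) -> torch.Tensor:
    # Assumed loss form, inferred from the abstract (not the paper's exact objective).
    # Supervised term on pseudo-labels produced by the global model.
    ce = F.cross_entropy(student_logits, pseudo_labels)
    # Standard distillation KL between student and teacher distributions...
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_bad_teacher = F.softmax(bad_teacher_logits.detach() / temperature, dim=1)
    kl = F.kl_div(log_p_student, p_bad_teacher, reduction="batchmean") * temperature ** 2
    # ...but subtracted rather than added, so the student is pushed AWAY from
    # the 'bad teacher' that was fitted to noisy labels.
    return ce - lambda_nd * kl

# Toy usage: a batch of 8 examples over 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
bad_teacher_logits = torch.randn(8, 10)
pseudo_labels = torch.randint(0, 10, (8,))
loss = negative_distillation_loss(student_logits, bad_teacher_logits, pseudo_labels)
loss.backward()

Keeping lambda_nd small bounds the repulsion term so the pseudo-label cross-entropy still anchors training; maximizing the KL alone would be unbounded and could diverge.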

Original language: English
Title of host publication: Proceedings of the 38th AAAI Conference on Artificial Intelligence
Editors: Michael Wooldridge, Jennifer Dy, Sriraam Natarajan
Publisher: AAAI Press
Pages: 14184-14192
Number of pages: 9
Volume: 38
Edition: 13
ISBN (Print): 1577358872, 9781577358879
DOIs
Publication status: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 → 27 Feb 2024
https://ojs.aaai.org/index.php/AAAI/issue/archive (Conference proceeding)

Publication series

Name: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Number: 13
Volume: 38
ISSN (Print): 2159-5399
ISSN (Electronic): 2374-3468

Conference

Conference: 38th AAAI Conference on Artificial Intelligence, AAAI 2024
Country/Territory: Canada
City: Vancouver
Period: 20/02/24 → 27/02/24
Internet address: https://ojs.aaai.org/index.php/AAAI/issue/archive

Scopus Subject Areas

  • Artificial Intelligence

User-Defined Keywords

  • ML: Distributed Machine Learning & Federated Learning
  • ML: Deep Learning Algorithms
  • PEAI: Safety, Robustness & Trustworthiness
