Learning to Incentivize: Convergence-Guaranteed Federated Learning Via Client Quality Discovery

  • Jiajun Wang
  • Jianxiong Guo
  • Juncheng Wang
  • Xingjian Ding*
  • Deying Li
  • Weili Wu

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Federated learning (FL) is a privacy-preserving distributed machine learning framework in which multiple devices collaborate with the assistance of an aggregator. However, because the aggregator's communication capacity is limited, only a portion of clients, ideally those with high data quality, can be selected to participate in FL, yet the quality of clients' data cannot be evaluated without access to the original data. Most existing client selection methods employ a data quality metric with empirically defined scores, which may select clients with high non-IID degrees, thereby reducing the accuracy and freshness of the model. Furthermore, because the quality of clients' data is unknown, current incentive mechanisms lack FL convergence guarantees, so client behavior cannot be steered toward improving the global model's accuracy. To address these issues, in this paper we propose using the gradient difference as a metric for the quality of clients' data, which quantifies each client's non-IID degree and contribution potential. We formulate a client selection problem using the Combinatorial Multi-Armed Bandit (CMAB) model and design an effective selection strategy, improving the worst-case regret proof to provide a theoretical guarantee for it. Based on these results, we develop an incentive mechanism via FL convergence analysis, quantifying the utility functions of the aggregator and clients and modeling their interaction as a two-stage Stackelberg game. For the non-convex utility function, our method establishes the existence and uniqueness of the Stackelberg equilibrium, thereby enabling the determination of the optimal strategy for maximizing the utility of all participants. Finally, extensive simulation experiments on real-world datasets demonstrate the effectiveness of our proposed method compared with state-of-the-art approaches.
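As an illustrative sketch only (the paper's actual CMAB algorithm, reward definition, and confidence radius are not given in this record), a UCB-style selection loop of the kind the abstract describes could look as follows; the random per-client quality and noise model here are placeholder assumptions standing in for the gradient-difference metric:

```python
import math
import random

def select_clients(counts, means, t, k):
    """Pick k clients by UCB index: empirical quality + exploration bonus.

    Unplayed clients get an infinite index so each is tried at least once.
    """
    ucb = [
        means[i] + math.sqrt(3 * math.log(t) / (2 * counts[i]))
        if counts[i] > 0 else float("inf")
        for i in range(len(means))
    ]
    return sorted(range(len(means)), key=lambda i: ucb[i], reverse=True)[:k]

def run(num_clients=10, k=3, rounds=200, seed=0):
    rng = random.Random(seed)
    # Hypothetical hidden per-client data quality in [0, 1].
    true_quality = [rng.random() for _ in range(num_clients)]
    counts = [0] * num_clients
    means = [0.0] * num_clients
    for t in range(1, rounds + 1):
        chosen = select_clients(counts, means, t, k)
        for i in chosen:
            # Noisy observed reward stands in for a normalized
            # gradient-difference score reported after local training.
            reward = min(1.0, max(0.0, true_quality[i] + rng.gauss(0, 0.1)))
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]  # running average
    return counts, true_quality
```

Over the rounds, the exploration bonus shrinks for frequently selected clients, so the selection concentrates on clients whose observed quality stays high, which is the qualitative behavior a CMAB-based strategy with a regret guarantee aims for.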

Original language: English
Number of pages: 17
Journal: IEEE Transactions on Mobile Computing
DOIs
Publication status: E-pub ahead of print - 4 Dec 2025

User-Defined Keywords

  • Client selection
  • convergence
  • data quality metric
  • federated learning
  • incentive mechanism
  • multi-armed bandit
  • Stackelberg game
