Bandwidth-Aware and Overlap-Weighted Compression for Communication-Efficient Federated Learning

Zichen Tang, Junlin Huang, Rudan Yan, Yuxin Wang, Zhenheng Tang*, Shaohuai Shi, Amelie Chi Zhou, Xiaowen Chu*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review


Abstract

Current data compression methods, such as sparsification in Federated Averaging (FedAvg), effectively enhance the communication efficiency of Federated Learning (FL). However, these methods encounter challenges such as the straggler problem and diminished model performance due to heterogeneous bandwidth and non-IID (non-independently and identically distributed) data. To address these issues, we introduce a bandwidth-aware compression framework for FL, aimed at improving communication efficiency while mitigating the problems associated with non-IID data. First, our strategy dynamically adjusts compression ratios according to bandwidth, enabling clients to upload their models at a comparable pace and thereby exploiting otherwise wasted waiting time to transmit more data. Second, we observe that the parameters retained by different clients after compression largely do not overlap, so uniformly averaged weights dilute each client's update signal. Based on this finding, we propose a parameter mask that adjusts the client-averaging coefficients at the parameter level, more closely approximating the original updates and improving training convergence under heterogeneous environments. Our evaluations reveal that our method significantly boosts model accuracy, with a maximum improvement of 13% over uncompressed FedAvg. Moreover, it achieves a 3.37× speedup in reaching the target accuracy compared to FedAvg with a Top-K compressor, demonstrating its effectiveness in accelerating convergence under compression. The integration of common compression techniques into our framework further establishes its potential as a versatile foundation for future cross-device, communication-efficient FL research, addressing critical challenges in FL and advancing distributed machine learning.
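The abstract describes two mechanisms: a per-client compression ratio sized to the measured uplink bandwidth, and per-parameter averaging coefficients derived from which clients actually retained each parameter. The NumPy sketch below illustrates how these two pieces could fit together; the function names, the time-budget sizing rule, and the 4-byte parameter size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bandwidth_aware_topk(update, bandwidth_bps, time_budget_s, bytes_per_param=4):
    """Top-K sparsification with k sized to the client's uplink bandwidth,
    so all clients finish uploading within a shared time budget
    (faster links keep more parameters instead of sitting idle)."""
    # Illustrative sizing rule: as many parameters as the link can
    # carry inside the shared time budget.
    k = int(bandwidth_bps * time_budget_s / (8 * bytes_per_param))
    k = max(1, min(k, update.size))
    mask = np.zeros(update.shape, dtype=bool)
    top_idx = np.argpartition(np.abs(update).ravel(), -k)[-k:]
    mask.ravel()[top_idx] = True
    return update * mask, mask

def overlap_weighted_average(sparse_updates, masks):
    """Average sparse updates per parameter: each coordinate is divided by
    the number of clients that retained it, not by the total client count,
    so largely non-overlapping updates are not diluted by zeros."""
    total = np.sum(sparse_updates, axis=0)
    overlap = np.sum(masks, axis=0)  # how many clients kept each parameter
    return np.where(overlap > 0, total / np.maximum(overlap, 1), 0.0)

# Toy round: three clients with different uplink speeds (hypothetical values).
rng = np.random.default_rng(0)
updates = [rng.standard_normal(1000) for _ in range(3)]
links_bps = [2e5, 8e5, 3.2e6]  # 0.2, 0.8, and 3.2 Mbit/s
compressed = [bandwidth_aware_topk(u, b, time_budget_s=0.01)
              for u, b in zip(updates, links_bps)]
aggregated = overlap_weighted_average([c[0] for c in compressed],
                                      [c[1] for c in compressed])
```

For comparison, uniform FedAvg would divide every coordinate by 3; with largely disjoint masks, that shrinks each surviving parameter toward zero, which is the dilution effect the overlap-weighted coefficients are meant to correct.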

Original language: English
Title of host publication: 53rd International Conference on Parallel Processing, ICPP 2024 - Main Conference Proceedings
Publisher: Association for Computing Machinery (ACM)
Pages: 866-875
Number of pages: 10
ISBN (Print): 9798400717932
DOIs
Publication status: Published - 12 Aug 2024
Event: 53rd International Conference on Parallel Processing, ICPP 2024 - Gotland, Sweden
Duration: 12 Aug 2024 – 15 Aug 2024
https://icpp2024.org/index.php?option=com_content&view=featured&Itemid=101
https://dl.acm.org/doi/proceedings/10.1145/3673038

Publication series

Name: ACM International Conference Proceeding Series
Name: Proceedings of International Conference on Parallel Processing, ICPP

Conference

Conference: 53rd International Conference on Parallel Processing, ICPP 2024
Country/Territory: Sweden
City: Gotland
Period: 12/08/24 – 15/08/24
Internet address

User-Defined Keywords

  • Communication Efficiency
  • Data Heterogeneity
  • Federated Learning
