MG-WFBP: Merging gradients wisely for efficient communication in distributed deep learning

Shaohuai Shi, Xiaowen Chu*, Bo Li

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

21 Citations (Scopus)

Abstract

Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks (DNNs) on computer clusters. With the increase of computational power, network communication generally limits system scalability. Wait-free backpropagation (WFBP) is a popular solution to overlap communications with computations during the training process. In this article, we observe that many DNNs have a large number of layers, each with only a small amount of data to be communicated in distributed training, which can make WFBP inefficient. Based on the fact that merging several short communication tasks into a single one can reduce the overall communication time, we formulate an optimization problem to minimize the training time when pipelining communications and computations. We derive an optimal solution that can be computed efficiently without affecting the training performance. We then apply the solution to propose a distributed training algorithm named merged-gradient WFBP (MG-WFBP) and implement it on two platforms, Caffe and PyTorch. Extensive experiments on three GPU clusters are conducted to verify the effectiveness of MG-WFBP. We further use trace-based simulations of 4 to 2048 GPUs to explore the potential scaling efficiency of MG-WFBP. Experimental results show that MG-WFBP achieves much better scaling performance than existing methods.
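The core observation, that several small gradient messages are cheaper to send as one merged message, can be illustrated with the standard startup-plus-bandwidth (alpha-beta) communication cost model. The sketch below is not the authors' implementation; the layer sizes and the ALPHA and BETA constants are illustrative assumptions, chosen only to show why merging saves one startup latency per eliminated message.

```python
# Illustrative sketch (not the paper's algorithm): under an alpha-beta cost
# model, sending k gradient messages separately costs k*alpha + beta*sum(sizes),
# while sending them as one merged buffer costs alpha + beta*sum(sizes),
# saving (k - 1)*alpha of startup overhead.

ALPHA = 50e-6   # assumed per-message startup latency (seconds)
BETA = 4e-10    # assumed per-byte transfer time (seconds per byte)

def separate_comm_time(message_sizes_bytes):
    """Total time when each layer's gradient is communicated on its own."""
    return sum(ALPHA + BETA * s for s in message_sizes_bytes)

def merged_comm_time(message_sizes_bytes):
    """Time when all gradients are merged into a single communication task."""
    return ALPHA + BETA * sum(message_sizes_bytes)

# Hypothetical per-layer gradient sizes (bytes) for a DNN with many small layers
# (float32 parameters, so 4 bytes per parameter).
layer_grads = [4 * n for n in (256, 512, 1024, 2048, 4096, 8192)]

print(f"separate: {separate_comm_time(layer_grads) * 1e6:.1f} us")
print(f"merged:   {merged_comm_time(layer_grads) * 1e6:.1f} us")
```

In practice the trade-off is subtler than this sketch suggests, since merging delays the start of communication for early-finishing layers and can reduce overlap with backpropagation, which is exactly the tension MG-WFBP's optimization formulation resolves.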

Original language: English
Pages (from-to): 1903-1917
Number of pages: 15
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 32
Issue number: 8
Early online date: 19 Jan 2021
DOIs
Publication status: Published - 1 Aug 2021

Scopus Subject Areas

  • Signal Processing
  • Hardware and Architecture
  • Computational Theory and Mathematics

User-Defined Keywords

  • Deep learning
  • distributed stochastic gradient descent
  • GPU
  • gradient communication
  • merged-gradient
