Exploiting simultaneous communications to accelerate data parallel distributed deep learning

Shaohuai Shi, Xiaowen Chu*, Bo Li

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

21 Citations (Scopus)

Abstract

Synchronous stochastic gradient descent (S-SGD) with data parallelism is widely used for training deep learning (DL) models in distributed systems. A pipelined schedule of the computing and communication tasks of a DL training job is an effective scheme for hiding part of the communication cost. In such pipelined S-SGD, tensor fusion (i.e., merging several consecutive layers' gradients into a single communication) is a key ingredient for improving communication efficiency. However, existing tensor fusion techniques schedule the communication tasks sequentially, which overlooks the fact that these tasks are independent of one another. In this paper, we expand the scheduling design space by exploiting simultaneous All-Reduce communications. Through theoretical analysis and experiments, we show that simultaneous All-Reduce communications can effectively improve the communication efficiency of small tensors. We formulate an optimization problem of minimizing the training iteration time, in which both tensor fusion and simultaneous communications are allowed. We develop an efficient optimal scheduling solution and implement the distributed training algorithm ASC-WFBP with Horovod and PyTorch. We conduct real-world experiments on an 8-node, 32-GPU cluster connected by 10Gbps Ethernet. Experimental results on four modern DNNs show that ASC-WFBP achieves about 1.09x-2.48x speedup over the baseline without tensor fusion, and 1.15x-1.35x speedup over the state-of-the-art tensor fusion solution.
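The central mechanism in the abstract is launching several All-Reduce operations concurrently instead of strictly one after another. The snippet below is a minimal, hypothetical sketch of that idea using PyTorch's non-blocking collectives (`torch.distributed.all_reduce` with `async_op=True`); it is not the authors' ASC-WFBP implementation, and the `simultaneous_allreduce` helper and tensor sizes are illustrative assumptions only.

```python
# Minimal sketch (not the paper's ASC-WFBP code): start an All-Reduce for
# several gradient tensors at once using non-blocking collectives, so small
# tensors are not forced to wait in a strictly sequential communication queue.
import torch
import torch.distributed as dist


def simultaneous_allreduce(tensors):
    """Launch All-Reduce for every tensor concurrently, then wait for all."""
    handles = [dist.all_reduce(t, op=dist.ReduceOp.SUM, async_op=True)
               for t in tensors]
    for h in handles:              # block until every communication completes
        h.wait()
    world_size = dist.get_world_size()
    for t in tensors:              # turn sums into averages, as in S-SGD
        t.div_(world_size)


if __name__ == "__main__":
    # Assumes the script is launched with torchrun, which sets RANK,
    # WORLD_SIZE, MASTER_ADDR and MASTER_PORT for init_process_group().
    dist.init_process_group(backend="gloo")
    # Stand-ins for per-layer gradient tensors of different sizes.
    grads = [torch.ones(n) for n in (1024, 4096, 256)]
    simultaneous_allreduce(grads)
    dist.destroy_process_group()
```

For example, this can be launched as `torchrun --nproc_per_node=4 simultaneous_allreduce.py`. Whether the transfers actually overlap on the network depends on the backend and the link, which is precisely the trade-off the paper analyzes and optimizes jointly with tensor fusion.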

Original language: English
Title of host publication: IEEE INFOCOM 2021 - IEEE Conference on Computer Communications
Publisher: IEEE
Pages: 1-10
Number of pages: 10
ISBN (Electronic): 9781665403252
ISBN (Print): 9781665431316
DOIs
Publication status: Published - 10 May 2021
Event: 40th IEEE International Conference on Computer Communications, IEEE INFOCOM 2021 - Vancouver, BC, Canada
Duration: 10 May 2021 - 13 May 2021
https://infocom2021.ieee-infocom.org/ (Conference website)
https://ieeexplore.ieee.org/xpl/conhome/9488422/proceeding (Conference proceedings)

Publication series

Name: Proceedings of IEEE Conference on Computer Communications
Volume: 2021-May
ISSN (Print): 0743-166X
ISSN (Electronic): 2641-9874

Conference

Conference: 40th IEEE International Conference on Computer Communications, IEEE INFOCOM 2021
Country/Territory: Canada
City: Vancouver, BC
Period: 10/05/21 - 13/05/21

Scopus Subject Areas

  • Computer Science (all)
  • Electrical and Electronic Engineering

User-Defined Keywords

  • Communication-Efficient
  • Distributed Deep Learning
  • Simultaneous Communications
