MG-WFBP: Efficient Data Communication for Distributed Synchronous SGD Algorithms

Shaohuai Shi, Xiaowen Chu, Bo Li

Research output: Chapter in book/report/conference proceeding › Conference contribution › peer-review

20 Citations (Scopus)

Abstract

Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks on computer clusters. As computational power increases, network communication has become a limiting factor in system scalability. In this paper, we observe that many deep neural networks have a large number of layers with only a small amount of data to be communicated per layer. Based on the fact that merging some short communication tasks into a single one can reduce the overall communication time, we formulate an optimization problem to minimize the training iteration time. We develop an optimal solution named merged-gradient wait-free backpropagation (MG-WFBP) and implement it in our open-source deep learning platform B-Caffe. Our experimental results on an 8-node GPU cluster with 10GbE interconnect and trace-based simulation results on a 64-node cluster both show that the MG-WFBP algorithm achieves much better scaling efficiency than the existing WFBP and SyncEASGD methods.
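The benefit of merging short communication tasks can be seen with the standard alpha-beta cost model, in which sending a message of m bytes costs a fixed startup latency alpha plus beta per byte. The sketch below is an illustration of that intuition only, not the authors' implementation; the ALPHA and BETA values are assumed for the example.

```python
# Hedged sketch of the merging intuition (not the MG-WFBP algorithm itself):
# sending m bytes is modeled as ALPHA + BETA * m, so k separate messages pay
# the startup cost ALPHA k times, while one merged message pays it once.
ALPHA = 50e-6   # assumed per-message startup latency, seconds
BETA = 1e-9     # assumed per-byte transfer time, seconds

def comm_time_separate(layer_sizes):
    """Each layer's gradient is sent as its own message."""
    return sum(ALPHA + BETA * m for m in layer_sizes)

def comm_time_merged(layer_sizes):
    """All layers' gradients are merged into a single message."""
    return ALPHA + BETA * sum(layer_sizes)

# A deep network with many small gradient tensors: 100 layers of ~4 KB each.
sizes = [4_000] * 100
# Merging saves (len(sizes) - 1) * ALPHA of startup overhead while moving
# the same total number of bytes.
savings = comm_time_separate(sizes) - comm_time_merged(sizes)
```

Under this model the saving grows with the number of layers, which is why networks with many small layers benefit most from merging; the actual MG-WFBP solution decides which gradients to merge so that communication still overlaps with backpropagation.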

Original language: English
Title of host publication: INFOCOM 2019 - IEEE Conference on Computer Communications
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 172-180
Number of pages: 9
ISBN (Electronic): 9781728105154
DOIs
Publication status: Published - Apr 2019
Event: 2019 IEEE Conference on Computer Communications, INFOCOM 2019 - Paris, France
Duration: 29 Apr 2019 - 2 May 2019

Publication series

Name: Proceedings - IEEE INFOCOM
Volume: 2019-April
ISSN (Print): 0743-166X

Conference

Conference: 2019 IEEE Conference on Computer Communications, INFOCOM 2019
Country/Territory: France
City: Paris
Period: 29/04/19 - 2/05/19

Scopus Subject Areas

  • Computer Science (all)
  • Electrical and Electronic Engineering

User-Defined Keywords

  • Deep Learning
  • Distributed Stochastic Gradient Descent
  • GPU
  • Gradient Communication
  • Merged-gradient
