A Quantitative Survey of Communication Optimizations in Distributed Deep Learning

Shaohuai Shi, Zhenheng Tang, Xiaowen Chu*, Chengjian Liu, Wei Wang, Bo Li

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

31 Citations (Scopus)

Abstract

Nowadays, large and complex deep learning (DL) models are increasingly trained in a distributed manner across multiple worker machines, in which extensive communication between workers poses serious scaling problems. In this article, we present a quantitative survey of communication optimization techniques for data-parallel distributed DL. We first identify the major communication challenges and classify the existing solutions into three levels, namely the learning algorithm, the system architecture, and the network infrastructure. We present the state-of-the-art communication optimization techniques and conduct a comparative study of seven common lossless distributed DL methods on a 32-GPU cluster with 100 Gb/s InfiniBand (IB). We show that DL models with low model intensity (such as BERT and BERT-Large) are difficult to scale out even with the best available lossless algorithm over 100 Gb/s IB, and that the system architecture and scheduling algorithms have a critical impact on the scaling property. We conclude the article with a discussion of open issues for further investigation.
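The communication the abstract refers to is the gradient synchronization performed on every iteration of synchronous data-parallel training. The following is a minimal sketch (not the paper's code) of that pattern, assuming a PyTorch process group has already been initialized; the `train_step` function and its arguments are illustrative only.

```python
# Minimal sketch of synchronous data-parallel SGD: each worker computes
# local gradients, an all-reduce averages them across workers, and only
# then does the optimizer step run. This per-iteration all-reduce is the
# communication cost that the surveyed optimizations try to hide or shrink.
import torch
import torch.distributed as dist

def train_step(model, loss_fn, batch, optimizer):
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                      # local gradient computation
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            # Gradient synchronization: sum gradients from all workers,
            # then divide to obtain the global average.
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
    optimizer.step()
    return loss.item()
```

In practice this naive per-tensor loop is replaced by fused, bucketed, or scheduled communication (as in the lossless methods compared in the paper), but the sketch shows where the communication volume comes from: one gradient-sized exchange per worker per iteration.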

Original language: English
Pages (from-to): 230-237
Number of pages: 8
Journal: IEEE Network
Volume: 35
Issue number: 3
Early online date: 2 Dec 2020
DOIs
Publication status: Published - May 2021

Scopus Subject Areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications

User-Defined Keywords

  • Computational modeling
  • Data models
  • Distributed databases
  • Parallel processing
  • Task analysis
  • Tensors
  • Training
