Layer-wise adaptive gradient sparsification for distributed deep learning with convergence guarantees

Shaohuai Shi, Zhenheng Tang, Qiang Wang, Kaiyong Zhao, Xiaowen Chu

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

7 Citations (Scopus)

Abstract

To reduce the long training time of large deep neural network (DNN) models, distributed synchronous stochastic gradient descent (S-SGD) is commonly used on a cluster of workers. However, the speedup brought by multiple workers is limited by the communication overhead. Two approaches, pipelining and gradient sparsification, have been separately proposed to alleviate the impact of communication overheads. Yet, existing gradient sparsification methods can only initiate communication after backpropagation has finished, and hence miss the pipelining opportunity. In this paper, we propose a new distributed optimization method named LAGS-SGD, which combines S-SGD with a novel layer-wise adaptive gradient sparsification (LAGS) scheme. In LAGS-SGD, every worker independently selects a small set of 'significant' gradients from each layer, and the size of this set can be adapted to the communication-to-computation ratio of that layer. The layer-wise nature of LAGS-SGD opens the opportunity of overlapping communications with computations, while its adaptive nature makes it flexible in controlling the communication time. We prove that LAGS-SGD has convergence guarantees and that it attains the same order of convergence rate as vanilla S-SGD under a weak analytical assumption. Extensive experiments are conducted to verify the analytical assumption and the convergence performance of LAGS-SGD. Experimental results on a 16-GPU cluster show that LAGS-SGD outperforms the original S-SGD and existing sparsified S-SGD without obvious loss of model accuracy.
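The core idea of the abstract, sparsifying each layer's gradient independently so that its communication can start while backpropagation of the remaining layers is still running, can be illustrated with a short sketch. The following is a minimal, hypothetical reconstruction in PyTorch, not the authors' released implementation: it assumes torch.distributed is already initialized, uses a fixed per-layer density rho where LAGS would adapt it to each layer's communication-to-computation ratio, and omits the local residual accumulation (error feedback) that sparsified S-SGD methods typically use. Function names such as attach_lags_hooks are invented for illustration.

import torch
import torch.distributed as dist


def topk_sparsify(grad, rho):
    """Keep only the rho-fraction of entries with the largest magnitude."""
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * rho))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)


def attach_lags_hooks(model, rho=0.01):
    """Register a per-parameter hook that sparsifies and launches an async
    all-reduce as soon as that layer's gradient is produced by backprop,
    so its communication overlaps with the computation of other layers.
    (Sketch only: rho is fixed here, not adapted per layer as in LAGS.)"""
    pending = []

    def make_hook(param):
        def hook(grad):
            sparse = topk_sparsify(grad, rho)
            work = dist.all_reduce(sparse, op=dist.ReduceOp.SUM, async_op=True)
            pending.append((param, work, sparse))
            return grad  # leave the local gradient untouched for now
        return hook

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(make_hook(p))
    return pending


def finish_communication(pending):
    """Wait for all outstanding all-reduces and install the averaged,
    sparsified gradients before the optimizer step."""
    world_size = dist.get_world_size()
    for param, work, sparse in pending:
        work.wait()
        param.grad = sparse / world_size
    pending.clear()

In a typical iteration one would call optimizer.zero_grad(), run loss.backward() (during which the hooks fire layer by layer and launch asynchronous all-reduces), and then call finish_communication(pending) before optimizer.step().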

Original language: English
Title of host publication: ECAI 2020 - 24th European Conference on Artificial Intelligence, including 10th Conference on Prestigious Applications of Artificial Intelligence, PAIS 2020 - Proceedings
Editors: Giuseppe De Giacomo, Alejandro Catala, Bistra Dilkina, Michela Milano, Senen Barro, Alberto Bugarin, Jerome Lang
Publisher: IOS Press BV
Pages: 1467-1474
Number of pages: 8
ISBN (Electronic): 9781643681009
DOIs
Publication status: Published - 24 Aug 2020
Event: 24th European Conference on Artificial Intelligence, ECAI 2020, including 10th Conference on Prestigious Applications of Artificial Intelligence, PAIS 2020 - Santiago de Compostela, Online, Spain
Duration: 29 Aug 2020 – 8 Sept 2020

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 325
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314

Conference

Conference: 24th European Conference on Artificial Intelligence, ECAI 2020, including 10th Conference on Prestigious Applications of Artificial Intelligence, PAIS 2020
Country/Territory: Spain
City: Santiago de Compostela, Online
Period: 29/08/20 – 8/09/20

Scopus Subject Areas

  • Artificial Intelligence
