TY - JOUR
T1 - MG-WFBP
T2 - Merging gradients wisely for efficient communication in distributed deep learning
AU - Shi, Shaohuai
AU - Chu, Xiaowen
AU - Li, Bo
N1 - Funding Information:
This work was supported in part by the Hong Kong RGC GRF grants under the contracts HKBU 12200418, HKUST 16206417, and 16207818, as well as an RGC CRF grant under the contract C7036-15G. The authors would also like to thank NVIDIA for providing the GPU clusters for experiments.
PY - 2021/8/1
Y1 - 2021/8/1
N2 - Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks (DNNs) on computer clusters. With the increase of computational power, network communications generally limit the system scalability. Wait-free backpropagation (WFBP) is a popular solution to overlap communications with computations during the training process. In this article, we observe that many DNNs have a large number of layers with only a small amount of data to be communicated at each layer in distributed training, which could make WFBP inefficient. Based on the fact that merging some short communication tasks into a single one can reduce the overall communication time, we formulate an optimization problem to minimize the training time when pipelining communications and computations. We derive an optimal solution that can be solved efficiently without affecting the training performance. We then apply the solution to propose a distributed training algorithm named merged-gradient WFBP (MG-WFBP) and implement it on two platforms, Caffe and PyTorch. Extensive experiments on three GPU clusters are conducted to verify the effectiveness of MG-WFBP. We further use trace-based simulations with 4 to 2048 GPUs to explore the potential scaling efficiency of MG-WFBP. Experimental results show that MG-WFBP achieves much better scaling performance than existing methods.
KW - Deep learning
KW - distributed stochastic gradient descent
KW - GPU
KW - gradient communication
KW - merged-gradient
UR - http://www.scopus.com/inward/record.url?scp=85099726349&partnerID=8YFLogxK
DO - 10.1109/TPDS.2021.3052862
M3 - Journal article
AN - SCOPUS:85099726349
SN - 1045-9219
VL - 32
SP - 1903
EP - 1917
JO - IEEE Transactions on Parallel and Distributed Systems
JF - IEEE Transactions on Parallel and Distributed Systems
IS - 8
ER -