Performance modeling and evaluation of distributed deep learning frameworks on GPUs

Shaohuai Shi, Qiang Wang, Xiaowen Chu

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

67 Citations (Scopus)

Abstract

Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. In training deep neural networks (DNNs), there are many standard processes or algorithms, such as convolution and stochastic gradient descent (SGD), but the running performance of different frameworks may differ even when running the same deep model on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) in single-GPU, multi-GPU, and multi-node environments. We first build performance models of the standard processes in training DNNs with SGD, then benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogLeNet, and ResNet-50), and finally analyze the factors that cause the performance gap among the four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads that could be further optimized. The main contribution is that the proposed performance models and the accompanying analysis provide directions for further optimization in both algorithmic design and system configuration.
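To make the idea of such a performance model concrete, the following is a minimal, hypothetical sketch of an analytical per-iteration time model for synchronous data-parallel SGD. The function name, parameters, and the alpha-beta-style ring all-reduce cost formula are illustrative assumptions, not the paper's exact equations.

```python
# Hypothetical per-iteration performance model for synchronous
# data-parallel SGD. All names and formulas are illustrative
# assumptions in the spirit of the paper's analytical modeling.

def iteration_time(t_io, t_forward, t_backward, t_update,
                   grad_bytes, bandwidth, latency, n_workers):
    """Estimate one SGD iteration's wall-clock time in seconds.

    t_io / t_forward / t_backward / t_update: per-iteration compute
    phases (seconds); grad_bytes: gradient message size per worker;
    bandwidth: link bandwidth (bytes/s); latency: per-step latency (s).
    """
    if n_workers > 1:
        # Alpha-beta cost of a ring all-reduce: 2*(n-1) steps, each
        # moving grad_bytes/n bytes plus a fixed latency term.
        t_comm = 2 * (n_workers - 1) * (
            latency + grad_bytes / (n_workers * bandwidth))
    else:
        t_comm = 0.0
    # Assumes no overlap between computation and communication.
    return t_io + t_forward + t_backward + t_comm + t_update

# Example: 8 workers exchanging 100 MB of gradients over a 10 GB/s link.
t = iteration_time(t_io=0.005, t_forward=0.030, t_backward=0.060,
                   t_update=0.002, grad_bytes=100e6,
                   bandwidth=10e9, latency=5e-6, n_workers=8)
```

Such a model makes the communication overhead explicit: with a single worker the iteration time is just the sum of the compute phases, and the gap between that and the multi-worker estimate is the all-reduce cost that gradient synchronization adds.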

Original language: English
Title of host publication: Proceedings - IEEE 16th International Conference on Dependable, Autonomic and Secure Computing, IEEE 16th International Conference on Pervasive Intelligence and Computing, IEEE 4th International Conference on Big Data Intelligence and Computing and IEEE 3rd Cyber Science and Technology Congress, DASC-PICom-DataCom-CyberSciTec 2018
Publisher: IEEE
Pages: 943-948
Number of pages: 6
ISBN (Electronic): 9781538675182
DOIs
Publication status: Published - 26 Oct 2018
Event: 16th IEEE International Conference on Dependable, Autonomic and Secure Computing, IEEE 16th International Conference on Pervasive Intelligence and Computing, IEEE 4th International Conference on Big Data Intelligence and Computing and IEEE 3rd Cyber Science and Technology Congress, DASC-PICom-DataCom-CyberSciTec 2018 - Athens, Greece
Duration: 12 Aug 2018 - 15 Aug 2018
https://ieeexplore.ieee.org/xpl/conhome/8511011/proceeding (Conference proceedings)
https://dblp.org/db/conf/dasc/dasc2018.html (Conference proceedings)

Publication series

Name: Proceedings - IEEE 16th International Conference on Dependable, Autonomic and Secure Computing, IEEE 16th International Conference on Pervasive Intelligence and Computing, IEEE 4th International Conference on Big Data Intelligence and Computing and IEEE 3rd Cyber Science and Technology Congress, DASC-PICom-DataCom-CyberSciTec 2018

Conference

Conference: 16th IEEE International Conference on Dependable, Autonomic and Secure Computing, IEEE 16th International Conference on Pervasive Intelligence and Computing, IEEE 4th International Conference on Big Data Intelligence and Computing and IEEE 3rd Cyber Science and Technology Congress, DASC-PICom-DataCom-CyberSciTec 2018
Country/Territory: Greece
City: Athens
Period: 12/08/18 - 15/08/18

Scopus Subject Areas

  • Computer Networks and Communications
  • Information Systems
  • Artificial Intelligence
  • Information Systems and Management
  • Safety, Risk, Reliability and Quality
  • Control and Optimization

User-Defined Keywords

  • Convolutional Neural Networks
  • Deep Learning
  • Deep Learning Frameworks
  • Distributed SGD
  • GPU
