Benchmarking state-of-the-art deep learning software tools

Shaohuai Shi, Qiang Wang, Pengfei Xu, Xiaowen Chu

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

173 Citations (Scopus)

Abstract

Deep learning has proven to be a successful machine learning method for a variety of tasks, and its popularity has led to numerous open-source deep learning software tools becoming publicly available. Training a deep network is usually a very time-consuming process. To address the huge computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten training and inference time. However, different tools exhibit different features and running performance when training different types of deep networks on different hardware platforms, making it difficult for end users to select an appropriate pair of software and hardware. In this paper, we present our attempt to benchmark several state-of-the-art GPU-accelerated deep learning software tools, including Caffe, CNTK, TensorFlow, and Torch. We focus on evaluating the running time performance (i.e., speed) of these tools with three popular types of neural networks on two representative CPU platforms and three representative GPU platforms. Our contribution is two-fold. First, for end users of deep learning software tools, our benchmarking results can serve as a reference for selecting appropriate hardware platforms and software tools. Second, for developers of deep learning software tools, our in-depth analysis points out possible directions for further optimizing running performance.
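
For context, the evaluation described in the abstract amounts to timing repeated mini-batch training steps of a given network on a given platform. The snippet below is a minimal, framework-agnostic sketch of such a measurement, not code from the paper; the function name benchmark_training, the train_step callable, and the warm-up and iteration counts are illustrative assumptions.

    import time
    from statistics import mean, stdev
    from typing import Callable, List

    def benchmark_training(train_step: Callable[[], None],
                           warmup_iters: int = 5,
                           timed_iters: int = 50) -> dict:
        """Time repeated mini-batch training steps (hypothetical harness)."""
        # Warm-up: exclude one-time costs such as memory allocation and
        # kernel compilation from the measurement.
        for _ in range(warmup_iters):
            train_step()

        # Timed runs: record the wall-clock time of each mini-batch step.
        samples: List[float] = []
        for _ in range(timed_iters):
            start = time.perf_counter()
            train_step()  # one forward + backward pass on one mini-batch
            samples.append(time.perf_counter() - start)

        return {
            "mean_sec_per_batch": mean(samples),
            "stdev_sec_per_batch": stdev(samples),
        }

    # Example usage with a placeholder training step:
    # results = benchmark_training(lambda: my_model_train_on_batch(x, y))
    # print(results["mean_sec_per_batch"])

Note that for GPU-accelerated tools, the device must be synchronized before the timer is read, since GPU kernel launches are asynchronous with respect to the host.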

Original language: English
Title of host publication: Proceedings - 2016 7th International Conference on Cloud Computing and Big Data, CCBD 2016
Publisher: IEEE
Pages: 99-104
Number of pages: 6
ISBN (Electronic): 9781509035557
DOIs
Publication status: Published - 13 Jul 2017
Event: 7th International Conference on Cloud Computing and Big Data, CCBD 2016 - Taipa, Macau, China
Duration: 16 Nov 2016 - 18 Nov 2016

Conference

Conference: 7th International Conference on Cloud Computing and Big Data, CCBD 2016
Country/Territory: China
City: Taipa, Macau
Period: 16/11/16 - 18/11/16

Scopus Subject Areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Information Systems
  • Computer Science Applications

User-Defined Keywords

  • Convolutional Neural Networks
  • Deep Learning
  • Feed-forward Neural Networks
  • GPU
  • Recurrent Neural Networks
