TY - GEN
T1 - Benchmarking the Performance and Energy Efficiency of AI Accelerators for AI Training
AU - Wang, Yuxin
AU - Wang, Qiang
AU - Shi, Shaohuai
AU - He, Xin
AU - Tang, Zhenheng
AU - Zhao, Kaiyong
AU - Chu, Xiaowen
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Deep learning has become widely used in complex AI applications. Yet, training a deep neural network (DNN) model requires a considerable amount of computation, long running time, and much energy. Nowadays, many-core AI accelerators (e.g., GPUs and TPUs) are designed to improve the performance of AI training. However, processors from different vendors perform dissimilarly in terms of performance and energy consumption. To investigate the differences among several popular off-the-shelf processors (i.e., Intel CPU, NVIDIA GPU, AMD GPU, and Google TPU) in training DNNs, we carry out a comprehensive empirical study on the performance and energy efficiency of these processors by benchmarking a representative set of deep learning workloads, including computation-intensive operations, classical convolutional neural networks (CNNs), recurrent neural networks (LSTM), Deep Speech 2, and Transformer. Unlike existing end-to-end benchmarks, which only report the training time, we investigate the impact of the hardware, the vendor's software library, and the deep learning framework on the performance and energy consumption of AI training. Our evaluation methods and results not only provide an informative guide for end users to select proper AI accelerators, but also expose opportunities for hardware vendors to improve their software libraries.
AB - Deep learning has become widely used in complex AI applications. Yet, training a deep neural network (DNN) model requires a considerable amount of computation, long running time, and much energy. Nowadays, many-core AI accelerators (e.g., GPUs and TPUs) are designed to improve the performance of AI training. However, processors from different vendors perform dissimilarly in terms of performance and energy consumption. To investigate the differences among several popular off-the-shelf processors (i.e., Intel CPU, NVIDIA GPU, AMD GPU, and Google TPU) in training DNNs, we carry out a comprehensive empirical study on the performance and energy efficiency of these processors by benchmarking a representative set of deep learning workloads, including computation-intensive operations, classical convolutional neural networks (CNNs), recurrent neural networks (LSTM), Deep Speech 2, and Transformer. Unlike existing end-to-end benchmarks, which only report the training time, we investigate the impact of the hardware, the vendor's software library, and the deep learning framework on the performance and energy consumption of AI training. Our evaluation methods and results not only provide an informative guide for end users to select proper AI accelerators, but also expose opportunities for hardware vendors to improve their software libraries.
KW - AI Accelerator
KW - Computation-intensive Operations
KW - Convolutional Neural Networks
KW - CPU
KW - Deep Learning
KW - Deep Speech 2
KW - GPU
KW - Recurrent Neural Networks
KW - TPU
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85089102592&partnerID=8YFLogxK
U2 - 10.1109/CCGrid49817.2020.00-15
DO - 10.1109/CCGrid49817.2020.00-15
M3 - Conference proceeding
AN - SCOPUS:85089102592
T3 - Proceedings - 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2020
SP - 744
EP - 751
BT - Proceedings - 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2020
A2 - Lefevre, Laurent
A2 - Varela, Carlos A.
A2 - Pallis, George
A2 - Toosi, Adel N.
A2 - Rana, Omer
A2 - Buyya, Rajkumar
PB - IEEE
T2 - 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2020
Y2 - 11 May 2020 through 14 May 2020
ER -