TY - GEN
T1 - The impact of GPU DVFS on the energy and performance of deep learning
T2 - 10th ACM International Conference on Future Energy Systems, e-Energy 2019
AU - Tang, Zhenheng
AU - Wang, Yuxin
AU - Wang, Qiang
AU - Chu, Xiaowen
N1 - Funding Information:
The authors would like to thank the reviewers for their thorough and insightful comments and suggestions. The research was supported by Hong Kong RGC GRF grant HKBU 12200418.
PY - 2019/6/15
Y1 - 2019/6/15
N2 - Over the past years, great progress has been made in improving the computing power of general-purpose graphics processing units (GPGPUs), which facilitates the prosperity of deep neural networks (DNNs) in multiple fields such as computer vision and natural language processing. A typical DNN training process repeatedly updates tens of millions of parameters, which not only requires huge computing resources but also consumes significant energy. To train DNNs in a more energy-efficient way, we empirically investigate the impact of GPU Dynamic Voltage and Frequency Scaling (DVFS) on the energy consumption and performance of deep learning. Our experiments cover a wide range of GPU architectures, DVFS settings, and DNN configurations. We observe that, compared to the default core frequency settings of the three tested GPUs, the optimal core frequency can save 8.7%~23.1% of energy consumption across different DNN training cases. For inference, the savings range from 19.6%~26.4%. Our findings suggest that GPU DVFS has great potential to help develop energy-efficient DNN training/inference schemes.
KW - Deep Convolutional Neural Network
KW - Dynamic Voltage and Frequency Scaling
KW - Graphics Processing Units
UR - http://www.scopus.com/inward/record.url?scp=85068642411&partnerID=8YFLogxK
U2 - 10.1145/3307772.3328315
DO - 10.1145/3307772.3328315
M3 - Conference proceeding
AN - SCOPUS:85068642411
T3 - e-Energy 2019 - Proceedings of the 10th ACM International Conference on Future Energy Systems
SP - 315
EP - 325
BT - e-Energy 2019 - Proceedings of the 10th ACM International Conference on Future Energy Systems
PB - Association for Computing Machinery (ACM)
Y2 - 25 June 2019 through 28 June 2019
ER -