TY - GEN
T1 - Supervised learning based algorithm selection for deep neural networks
AU - Shi, Shaohuai
AU - Xu, Pengfei
AU - Chu, Xiaowen
N1 - Funding Information:
We would like to thank all the reviewers for their insightful comments and valuable suggestions. This work is supported by Shenzhen Basic Research Grant SCI-2015-SZTIC-002. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
PY - 2017/7/2
Y1 - 2017/7/2
AB - Many recent deep learning platforms rely on third-party libraries (such as cuBLAS) to utilize the computing power of modern hardware accelerators (such as GPUs). However, we observe that they may achieve suboptimal performance because the library functions are not used appropriately. In this paper, we target the optimization of operations that multiply a matrix by the transpose of another matrix (referred to hereafter as NT operations), which account for half of the training time of fully connected deep neural networks. Rather than directly calling the library function, we propose a supervised learning based algorithm selection approach named MTNN, which uses a gradient boosted decision tree to intelligently select between two alternative NT implementations: (1) calling the cuBLAS library function; (2) calling our proposed algorithm TNN, which uses an efficient out-of-place matrix transpose. We evaluate the performance of MTNN on two modern GPUs: the NVIDIA GTX 1080 and the NVIDIA Titan X Pascal. MTNN achieves 96% prediction accuracy with very low computational overhead, which results in an average performance improvement of 54% on a range of NT operations. To further evaluate the impact of MTNN on the training process of deep neural networks, we have integrated MTNN into the popular deep learning platform Caffe. Our experimental results show that the revised Caffe outperforms the original by an average of 28%. Both MTNN and the revised Caffe are open-source.
KW - Deep Neural Networks
KW - GPU
KW - Linear Algebra
KW - Matrix Multiplication
KW - Transpose
UR - http://www.scopus.com/inward/record.url?scp=85048370637&partnerID=8YFLogxK
U2 - 10.1109/ICPADS.2017.00053
DO - 10.1109/ICPADS.2017.00053
M3 - Conference proceeding
AN - SCOPUS:85048370637
T3 - Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS
SP - 344
EP - 351
BT - Proceedings - 2017 IEEE 23rd International Conference on Parallel and Distributed Systems, ICPADS 2017
PB - IEEE Computer Society
T2 - 23rd IEEE International Conference on Parallel and Distributed Systems, ICPADS 2017
Y2 - 15 December 2017 through 17 December 2017
ER -