Supervised learning based algorithm selection for deep neural networks

Shaohuai Shi, Pengfei Xu, Xiaowen Chu

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Many recent deep learning platforms rely on third-party libraries (such as cuBLAS) to utilize the computing power of modern hardware accelerators (such as GPUs). However, we observe that they may achieve suboptimal performance because the library functions are not always used appropriately. In this paper, we target the operation of multiplying a matrix by the transpose of another matrix (referred to as the NT operation hereafter), which contributes half of the training time of fully connected deep neural networks. Rather than directly calling the library function, we propose a supervised learning based algorithm selection approach named MTNN, which uses a gradient boosted decision tree to intelligently select between two alternative NT implementations: (1) calling the cuBLAS library function; (2) calling our proposed algorithm TNN, which uses an efficient out-of-place matrix transpose. We evaluate the performance of MTNN on two modern GPUs: the NVIDIA GTX 1080 and the NVIDIA Titan X Pascal. MTNN achieves 96% prediction accuracy with very low computational overhead, resulting in an average performance improvement of 54% on a range of NT operations. To further evaluate the impact of MTNN on the training process of deep neural networks, we have integrated MTNN into the popular deep learning platform Caffe. Our experimental results show that the revised Caffe outperforms the original one by an average of 28%. Both MTNN and the revised Caffe are open source.
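The sketch below is a minimal illustration (not the authors' released code) of the two NT alternatives the abstract compares: computing C = A * B^T either by calling cuBLAS directly with a transposed operand, or by first transposing B out of place and then running a plain GEMM. The matrix sizes, the use of cublasSgeam as the transpose step (the paper's TNN uses its own transpose kernel), and the fixed selection rule standing in for the gradient boosted decision tree are all illustrative assumptions.

// Sketch of the two NT paths; column-major storage as used by cuBLAS.
// A: m x k, B: n x k, C: m x n, so C = A * B^T.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int m = 1024, n = 1024, k = 1024;   // illustrative sizes
    float *A, *B, *Bt, *C;
    cudaMalloc(&A,  sizeof(float) * m * k);
    cudaMalloc(&B,  sizeof(float) * n * k);
    cudaMalloc(&Bt, sizeof(float) * k * n);   // scratch buffer for the explicit transpose
    cudaMalloc(&C,  sizeof(float) * m * n);
    cudaMemset(A, 0, sizeof(float) * m * k);
    cudaMemset(B, 0, sizeof(float) * n * k);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float one = 1.0f, zero = 0.0f;

    // Placeholder for MTNN: the paper trains a gradient boosted decision tree on the
    // operation's shape features to pick the faster path; a fixed rule stands in here.
    bool use_transpose_path = (k >= 512);     // hypothetical rule, for illustration only

    if (!use_transpose_path) {
        // Alternative 1: let cuBLAS handle the transpose internally (NT GEMM).
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T,
                    m, n, k, &one, A, m, B, n, &zero, C, m);
    } else {
        // Alternative 2 (TNN-style): out-of-place transpose Bt = B^T, then NN GEMM.
        cublasSgeam(handle, CUBLAS_OP_T, CUBLAS_OP_T,
                    k, n, &one, B, n, &zero, B, n, Bt, k);
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, n, k, &one, A, m, Bt, k, &zero, C, m);
    }

    cudaDeviceSynchronize();
    printf("done (path: %s)\n", use_transpose_path ? "explicit transpose" : "cuBLAS NT");

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(Bt); cudaFree(C);
    return 0;
}

Which path wins depends on the matrix shapes and the GPU, which is why the paper learns the decision from measured data rather than hard-coding a rule as done above.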

Original language: English
Title of host publication: Proceedings - 2017 IEEE 23rd International Conference on Parallel and Distributed Systems, ICPADS 2017
Publisher: IEEE Computer Society
Pages: 344-351
Number of pages: 8
ISBN (Electronic): 9781538621295
DOIs
Publication status: Published - 2 Jul 2017
Event: 23rd IEEE International Conference on Parallel and Distributed Systems, ICPADS 2017 - Shenzhen, China
Duration: 15 Dec 2017 - 17 Dec 2017

Publication series

Name: Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS
Volume: 2017-December
ISSN (Print): 1521-9097

Conference

Conference: 23rd IEEE International Conference on Parallel and Distributed Systems, ICPADS 2017
Country/Territory: China
City: Shenzhen
Period: 15/12/17 - 17/12/17

Scopus Subject Areas

  • Hardware and Architecture

User-Defined Keywords

  • Deep Neural Networks
  • GPU
  • Linear Algebra
  • Matrix Multiplication
  • Transpose
