TY - GEN
T1 - Performance Evaluation of Deep Learning Tools in Docker Containers
AU - Xu, Pengfei
AU - Shi, Shaohuai
AU - CHU, Xiaowen
N1 - Funding Information:
The authors would like to thank all the reviewers for their insightful comments and valuable suggestions. This work is supported by Shenzhen Basic Research Grant SCI-2015-SZTIC-002.
PY - 2017/11/15
Y1 - 2017/11/15
N2 - With the success of deep learning techniques in a broad range of application domains, many deep learning software frameworks have been developed and are being updated frequently to adapt to new hardware features and software libraries, which brings a big challenge for end users and system administrators. To address this problem, container techniques are widely used to simplify the deployment and management of deep learning software. However, it remains unknown whether container techniques bring any performance penalty to deep learning applications. The purpose of this work is to systematically evaluate the impact of Docker containers on the performance of deep learning applications. We first benchmark the performance of system components (I/O, CPU, and GPU) in a Docker container and on the host system, and compare the results to see if there is any difference. According to our results, we find that computationally intensive jobs, whether running on CPU or GPU, incur only small overhead, indicating that Docker containers can be applied to deep learning programs. We then evaluate the performance of several popular deep learning tools deployed in a Docker container and on the host system. It turns out that the Docker container does not cause noticeable performance drawbacks when running those deep learning tools. Thus, encapsulating deep learning tools in containers is a feasible solution.
AB - With the success of deep learning techniques in a broad range of application domains, many deep learning software frameworks have been developed and are being updated frequently to adapt to new hardware features and software libraries, which brings a big challenge for end users and system administrators. To address this problem, container techniques are widely used to simplify the deployment and management of deep learning software. However, it remains unknown whether container techniques bring any performance penalty to deep learning applications. The purpose of this work is to systematically evaluate the impact of Docker containers on the performance of deep learning applications. We first benchmark the performance of system components (I/O, CPU, and GPU) in a Docker container and on the host system, and compare the results to see if there is any difference. According to our results, we find that computationally intensive jobs, whether running on CPU or GPU, incur only small overhead, indicating that Docker containers can be applied to deep learning programs. We then evaluate the performance of several popular deep learning tools deployed in a Docker container and on the host system. It turns out that the Docker container does not cause noticeable performance drawbacks when running those deep learning tools. Thus, encapsulating deep learning tools in containers is a feasible solution.
UR - http://www.scopus.com/inward/record.url?scp=85040507662&partnerID=8YFLogxK
U2 - 10.1109/BIGCOM.2017.32
DO - 10.1109/BIGCOM.2017.32
M3 - Conference proceeding
AN - SCOPUS:85040507662
T3 - Proceedings - 2017 3rd International Conference on Big Data Computing and Communications, BigCom 2017
SP - 395
EP - 403
BT - Proceedings - 2017 3rd International Conference on Big Data Computing and Communications, BigCom 2017
PB - IEEE
T2 - 3rd International Conference on Big Data Computing and Communications, BigCom 2017
Y2 - 10 August 2017 through 11 August 2017
ER -