Massively parallel network coding on GPUs

Xiaowen Chu*, Kaiyong Zhao, Mea Wang

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

30 Citations (Scopus)

Abstract

Network coding has recently been widely applied in various networks to improve system throughput and/or resilience to network dynamics. However, the computational overhead introduced by network coding operations is not negligible and has become a bottleneck for the real deployment of network coding. In this paper, we exploit the computing power of contemporary Graphics Processing Units (GPUs) to accelerate network coding operations. We propose three parallel algorithms that maximize the parallelism of the encoding and decoding processes so that the power of GPUs is fully utilized. This paper also shares our optimization design choices and our workarounds to the challenges encountered in working with GPUs. With our implementation of these algorithms, we achieve a speedup of up to 12 times over a highly optimized CPU counterpart, using an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model.
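The paper presents three parallel algorithms; the following is only a minimal, hypothetical CUDA sketch (not the authors' implementation) of the general idea of mapping random linear network coding encoding over GF(2^8) to the GPU: one thread computes one byte of the coded block, and GF multiplication uses log/antilog lookup tables. All names, block counts, and sizes below are illustrative assumptions.

```cuda
// Hypothetical sketch: encode K source blocks of LEN bytes into one coded
// block as a GF(2^8) linear combination, one GPU thread per output byte.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define GF_SIZE 256

// Log/antilog tables for GF(2^8), built on the host from the primitive
// polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) and copied to constant memory.
__constant__ unsigned char d_log[GF_SIZE];
__constant__ unsigned char d_exp[2 * GF_SIZE];

__device__ unsigned char gf_mul(unsigned char a, unsigned char b) {
    if (a == 0 || b == 0) return 0;
    return d_exp[d_log[a] + d_log[b]];   // multiply via log/antilog lookup
}

// coeff: K random coefficients; src: K source blocks stored contiguously
// (K * len bytes); dst: one coded block of len bytes.
__global__ void encode_kernel(const unsigned char *coeff,
                              const unsigned char *src,
                              unsigned char *dst,
                              int K, int len) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= len) return;
    unsigned char acc = 0;
    for (int k = 0; k < K; ++k)                     // sum over source blocks
        acc ^= gf_mul(coeff[k], src[k * len + i]);  // XOR is addition in GF(2^8)
    dst[i] = acc;
}

int main() {
    // Build the GF(2^8) tables on the host.
    unsigned char h_log[GF_SIZE], h_exp[2 * GF_SIZE];
    int x = 1;
    for (int i = 0; i < 255; ++i) {
        h_exp[i] = (unsigned char)x;
        h_log[x] = (unsigned char)i;
        x <<= 1;
        if (x & 0x100) x ^= 0x11D;
    }
    for (int i = 255; i < 2 * GF_SIZE; ++i) h_exp[i] = h_exp[i - 255];
    h_log[0] = 0;  // unused; gf_mul short-circuits on zero
    cudaMemcpyToSymbol(d_log, h_log, sizeof(h_log));
    cudaMemcpyToSymbol(d_exp, h_exp, sizeof(h_exp));

    const int K = 64, LEN = 1 << 20;  // 64 source blocks of 1 MB (illustrative sizes)
    unsigned char *coeff, *src, *dst;
    cudaMallocManaged(&coeff, K);
    cudaMallocManaged(&src, (size_t)K * LEN);
    cudaMallocManaged(&dst, LEN);
    for (int k = 0; k < K; ++k) coeff[k] = (unsigned char)(rand() % 255 + 1);
    for (size_t i = 0; i < (size_t)K * LEN; ++i) src[i] = (unsigned char)rand();

    encode_kernel<<<(LEN + 255) / 256, 256>>>(coeff, src, dst, K, LEN);
    cudaDeviceSynchronize();
    printf("first coded byte: %d\n", (int)dst[0]);

    cudaFree(coeff); cudaFree(src); cudaFree(dst);
    return 0;
}
```

In this sketch each output byte is independent, so the combination step parallelizes trivially across thousands of threads; the paper's contribution lies in the specific algorithm and memory-layout optimizations for both encoding and decoding, which are not reproduced here.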

Original language: English
Title of host publication: 2008 IEEE International Performance Computing and Communications Conference, IPCCC 2008
Pages: 144-151
Number of pages: 8
DOIs
Publication status: Published - 2008
Event: 2008 IEEE International Performance Computing and Communications Conference, IPCCC 2008 - Austin, TX, United States
Duration: 7 Dec 2008 - 9 Dec 2008

Publication series

Name: Conference Proceedings of the IEEE International Performance, Computing, and Communications Conference

Conference

Conference: 2008 IEEE International Performance Computing and Communications Conference, IPCCC 2008
Country/Territory: United States
City: Austin, TX
Period: 7/12/08 - 9/12/08

User-Defined Keywords

  • CUDA
  • GPU computing
  • Network coding
