Dissecting GPU Memory Hierarchy Through Microbenchmarking

Xinxin Mei, Xiaowen Chu

Research output: Contribution to journal › Journal article › peer-review

120 Citations (Scopus)


Memory access efficiency is a key factor in fully utilizing the computational power of graphics processing units (GPUs). However, many details of the GPU memory hierarchy are not released by GPU vendors. In this paper, we propose a novel fine-grained microbenchmarking approach and apply it to three generations of NVIDIA GPUs, namely Fermi, Kepler, and Maxwell, to expose the previously unknown characteristics of their memory hierarchies. Specifically, we investigate the structures of different GPU cache systems, such as the data cache, the texture cache, and the translation look-aside buffer (TLB). We also investigate the throughput and access latency of GPU global memory and shared memory. Our microbenchmark results offer a better understanding of the mysterious GPU memory hierarchy, which will facilitate the software optimization and modelling of GPU architectures. To the best of our knowledge, this is the first study to reveal the cache properties of Kepler and Maxwell GPUs, and the superiority of Maxwell in shared memory performance under bank conflicts.

Original language: English
Article number: 7445236
Pages (from-to): 72-86
Number of pages: 15
Journal: IEEE Transactions on Parallel and Distributed Systems
Issue number: 1
Publication status: Published - 1 Jan 2017

Scopus Subject Areas

  • Signal Processing
  • Hardware and Architecture
  • Computational Theory and Mathematics

User-Defined Keywords

  • cache structure
  • CUDA
  • GPU
  • memory hierarchy
  • throughput


