UpDLRM: Accelerating Personalized Recommendation using Real-World PIM Architecture

Sitian Chen, Haobin Tan, Amelie Chi Zhou*, Yusen Li, Pavan Balaji

*Corresponding author for this work

Research output: Contribution to conference › Conference paper › peer-review



Deep Learning Recommendation Models (DLRMs) have gained popularity in recommendation systems due to their effectiveness in handling large-scale recommendation tasks. The embedding layers of DLRMs have become the performance bottleneck due to their intensive demands on memory capacity and memory bandwidth. In this paper, we propose UpDLRM, which utilizes real-world processing-in-memory (PIM) hardware, the UPMEM DPU, to boost memory bandwidth and reduce recommendation latency. The parallel nature of the DPU memory can provide high aggregated bandwidth for the large number of irregular memory accesses in embedding lookups, thus offering great potential to reduce inference latency. To fully utilize the DPU memory bandwidth, we further study the embedding table partitioning problem to achieve good workload balance and efficient data caching. Evaluations using real-world datasets show that UpDLRM achieves much lower inference time for DLRMs compared to both CPU-only and CPU-GPU hybrid counterparts.
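The embedding table partitioning problem mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the paper's actual algorithm: it uses a classic greedy longest-processing-time heuristic to spread tables across DPUs so that per-DPU lookup load stays balanced. The `table_loads` input (per-table lookup frequency) and the function name are assumptions for illustration.

```python
# Hypothetical sketch of workload-balanced table partitioning
# (greedy LPT heuristic; not the algorithm from the paper).

def partition_tables(table_loads, num_dpus):
    """Assign each embedding table to the currently least-loaded DPU.

    table_loads: list of per-table lookup frequencies.
    Returns (assignment, dpu_load): one table-index list per DPU,
    and the resulting total load on each DPU.
    """
    # Place heavy tables first: sorting by descending load gives
    # the greedy heuristic its balance guarantee.
    order = sorted(range(len(table_loads)), key=lambda i: -table_loads[i])
    assignment = [[] for _ in range(num_dpus)]
    dpu_load = [0] * num_dpus
    for t in order:
        d = min(range(num_dpus), key=lambda i: dpu_load[i])
        assignment[d].append(t)
        dpu_load[d] += table_loads[t]
    return assignment, dpu_load
```

For example, six tables with lookup loads `[10, 7, 5, 3, 3, 2]` split across two DPUs end up with 15 units of load each, so neither DPU becomes a straggler during batched embedding lookups.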
Original language: English
Number of pages: 6
Publication status: Published - 26 Jun 2024
Event: Design Automation Conference (DAC) - San Francisco, United States
Duration: 23 Jun 2024 - 27 Jun 2024


Conference: Design Automation Conference (DAC)
Abbreviated title: DAC 2024
Country/Territory: United States
City: San Francisco


