Abstract
Deep Learning Recommendation Models (DLRMs) have gained popularity in recommendation systems due to their effectiveness on large-scale recommendation tasks. The embedding layers of DLRMs have become a performance bottleneck due to their intensive demands on memory capacity and memory bandwidth. In this paper, we propose UpDLRM, which utilizes real-world processing-in-memory (PIM) hardware, the UPMEM DPU, to boost memory bandwidth and reduce recommendation latency. The parallel nature of the DPU memory can provide high aggregate bandwidth for the large number of irregular memory accesses in embedding lookups, offering great potential to reduce inference latency. To fully utilize the DPU memory bandwidth, we further study the embedding-table partitioning problem to achieve good workload balance and efficient data caching. Evaluations using real-world datasets show that UpDLRM achieves much lower inference time than both CPU-only and CPU-GPU hybrid counterparts.
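To make the bottleneck concrete, the sketch below shows the gather-and-pool pattern of a DLRM embedding lookup in plain Python. All sizes and names here are illustrative assumptions, not the paper's implementation; the point is that the row indices are data-dependent, so the accesses scatter irregularly across a large table.

```python
import random

# Minimal sketch of a DLRM-style embedding lookup (illustrative sizes,
# not the paper's implementation). Each sparse feature ID indexes a row
# of a large embedding table; the gathered rows are pooled by summing.
random.seed(0)
num_rows, dim = 100_000, 16
table = [[random.random() for _ in range(dim)] for _ in range(num_rows)]

def embedding_lookup(table, indices):
    # Irregular gather: the row indices are data-dependent, so accesses
    # scatter across the table -- this is the memory-bandwidth-bound step
    # that PIM hardware such as UPMEM DPUs can serve in parallel.
    width = len(table[0])
    pooled = [0.0] * width
    for i in indices:
        row = table[i]
        for d in range(width):
            pooled[d] += row[d]
    return pooled

ids = [random.randrange(num_rows) for _ in range(64)]  # one lookup batch
vec = embedding_lookup(table, ids)
print(len(vec))  # 16
```

Because each lookup touches rows spread across the whole table, the access stream defeats caching and is bound by random-access memory bandwidth, which is why distributing table partitions across many DPU banks helps.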
Original language | English |
---|---|
Number of pages | 6 |
Publication status | Published - 26 Jun 2024 |
Event | Design Automation Conference (DAC) - San Francisco, United States. Duration: 23 Jun 2024 → 27 Jun 2024. https://www.dac.com/ |
Conference
Conference | Design Automation Conference (DAC) |
---|---|
Abbreviated title | DAC 2024 |
Country/Territory | United States |
City | San Francisco |
Period | 23/06/24 → 27/06/24 |
Internet address | https://www.dac.com/ |