Abstract
Deep Learning Recommendation Models (DLRMs) have gained popularity in recommendation systems due to their effectiveness in handling large-scale recommendation tasks. The embedding layers of DLRMs have become the performance bottleneck due to their intensive demands on memory capacity and memory bandwidth. In this paper, we propose UpDLRM, which utilizes real-world processing-in-memory (PIM) hardware, the UPMEM DPU, to boost memory bandwidth and reduce recommendation latency. The parallel nature of DPU memory provides high aggregate bandwidth for the large number of irregular memory accesses in embedding lookups, offering great potential to reduce inference latency. To fully utilize the DPU memory bandwidth, we further study the embedding table partitioning problem to achieve good workload balance and efficient data caching. Evaluations using real-world datasets show that UpDLRM achieves much lower inference time than both CPU-only and CPU-GPU hybrid counterparts.
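The two ideas the abstract leans on, pooled embedding lookups with irregular access patterns and balanced partitioning of embedding tables across PIM units, can be sketched briefly. Below is a minimal Python illustration, not the paper's implementation: the table sizes, the per-table access costs, and the greedy balancing heuristic are all assumptions for illustration, and the actual UPMEM offloading via the DPU SDK and the paper's caching scheme are not shown.

```python
import numpy as np

# Hypothetical DPU count; real UPMEM systems have thousands of DPUs.
NUM_DPUS = 4

def partition_tables(table_costs, num_dpus):
    """Greedy longest-processing-time partitioning: assign each
    embedding table (cost ~ expected lookup traffic) to the DPU
    with the least accumulated load. This is a generic balancing
    heuristic, not the partitioner described in the paper."""
    loads = [0.0] * num_dpus
    assignment = {}
    for table, cost in sorted(table_costs.items(), key=lambda kv: -kv[1]):
        dpu = min(range(num_dpus), key=loads.__getitem__)
        assignment[table] = dpu
        loads[dpu] += cost
    return assignment, loads

def embedding_lookup(tables, table_id, indices):
    """Gather rows for a multi-hot feature and sum-pool them, as in
    a DLRM embedding bag; on a PIM system this irregular gather
    would run on the DPU holding the table."""
    return tables[table_id][indices].sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy embedding tables: id -> (rows, dim) array.
    tables = {t: rng.standard_normal((100, 16)).astype(np.float32)
              for t in range(8)}
    # Assumed per-table lookup frequencies (not from the paper).
    costs = {t: float(rng.integers(1, 100)) for t in tables}
    assignment, loads = partition_tables(costs, NUM_DPUS)
    print("table -> DPU:", assignment)
    print("per-DPU load:", loads)
    pooled = embedding_lookup(tables, table_id=0, indices=[3, 17, 42])
    print("pooled vector shape:", pooled.shape)
```

The balancing step matters because the slowest DPU bounds the latency of a batched lookup; spreading lookup traffic evenly is what lets the aggregate DPU bandwidth translate into lower inference time.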
Original language | English
---|---
Title of host publication | DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference
Publisher | Association for Computing Machinery (ACM)
Number of pages | 6
ISBN (Electronic) | 9798400706011
DOIs |
Publication status | Published - 7 Nov 2024
Event | 61st ACM/IEEE Design Automation Conference, DAC 2024 - San Francisco, United States. Duration: 23 Jun 2024 → 27 Jun 2024. https://dl.acm.org/doi/proceedings/10.1145/3649329 (Conference proceedings); https://www.dac.com/
Publication series

Name | DAC: Design Automation Conference
---|---
Conference

Conference | 61st ACM/IEEE Design Automation Conference, DAC 2024
---|---
Abbreviated title | DAC 2024
Country/Territory | United States
City | San Francisco
Period | 23/06/24 → 27/06/24
Internet address | https://www.dac.com/