Fast Randomized Low-Rank Adaptation of Pre-trained Language Models with PAC Regularization

Zijian Lei, Dong Qian, William K. Cheung

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Low-rank adaptation (LoRA) achieves parameter-efficient fine-tuning for large language models (LLMs) by decomposing the model weight update into a pair of low-rank projection matrices. Yet, its memory overhead prevents it from scaling up as the model size increases. We propose Randomized LoRA (RLoRA), which adopts the Randomized Walsh-Hadamard Transform to achieve a significant reduction in the number of trainable parameters compared to LoRA. At the same time, it allows a PAC-Bayes regularizer to be efficiently incorporated to improve generalization. We evaluate the effectiveness of RLoRA on the LLMs RoBERTa, GPT-2 and LLaMA-7B using the GLUE, E2E and math reasoning benchmarks. With a much lower memory requirement, RLoRA gives performance comparable to the SOTA low-rank adaptation methods on these three tasks, and significantly better performance under few-shot settings.
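For intuition, below is a minimal PyTorch sketch of the general idea the abstract describes: replacing LoRA's dense trainable projections with fixed randomized Walsh-Hadamard maps (random sign flips, a fast Hadamard mix, and random coordinate sampling), so that only a small r-dimensional core is trained. The names (fwht, RWHTAdapterLinear) and the exact diagonal-core parameterization are illustrative assumptions, not the authors' implementation, which the paper itself specifies.

import torch
import torch.nn as nn

def fwht(x: torch.Tensor) -> torch.Tensor:
    # Fast Walsh-Hadamard transform along the last dimension.
    # The last dimension must be a power of two; orthonormal scaling.
    d = x.shape[-1]
    y = x
    h = 1
    while h < d:
        y = y.reshape(*x.shape[:-1], d // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.stack((a + b, a - b), dim=-2).reshape(*x.shape[:-1], d)
        h *= 2
    return y / d ** 0.5

class RWHTAdapterLinear(nn.Module):
    # LoRA-style adapter in which the usual trainable down/up projections
    # are replaced by FIXED randomized Walsh-Hadamard maps, leaving only
    # an r-dimensional diagonal core to train. Hypothetical sketch.

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        d_in, d_out = base.in_features, base.out_features
        assert d_in & (d_in - 1) == 0 and d_out & (d_out - 1) == 0, \
            "sketch assumes power-of-two feature sizes (pad otherwise)"
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weight
            p.requires_grad_(False)
        self.scale = alpha / r
        self.d_out = d_out
        # Fixed randomness defining the two RWHT maps (buffers, not trained).
        self.register_buffer("s_in", torch.randint(0, 2, (d_in,)).float() * 2 - 1)
        self.register_buffer("s_out", torch.randint(0, 2, (d_out,)).float() * 2 - 1)
        self.register_buffer("idx_in", torch.randperm(d_in)[:r])
        self.register_buffer("idx_out", torch.randperm(d_out)[:r])
        # The only trainable parameters: r scalars, versus r*(d_in + d_out)
        # for a standard LoRA pair. Zero init => no change at the start.
        self.core = nn.Parameter(torch.zeros(r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)                            # frozen pre-trained path
        z = fwht(x * self.s_in)[..., self.idx_in]   # fixed down-map to rank r
        z = z * self.core                           # trainable diagonal core
        up = x.new_zeros(*x.shape[:-1], self.d_out)
        up[..., self.idx_out] = z                   # fixed up-map to d_out
        return y + self.scale * fwht(up * self.s_out)

Under this sketch, a layer would be wrapped as, e.g., adapter = RWHTAdapterLinear(pretrained_linear, r=8), and only adapter.core receives gradients; this is where the memory saving over LoRA's r*(d_in + d_out) trainable parameters would come from.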

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL 2024
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Publisher: Association for Computational Linguistics (ACL)
Pages: 5236-5249
Number of pages: 14
ISBN (Electronic): 9798891760998
Publication status: Published - Aug 2024
Event: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand
Duration: 11 Aug 2024 - 16 Aug 2024
https://2024.aclweb.org/
https://aclanthology.org/events/acl-2024/

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Country/Territory: Thailand
City: Bangkok
Period: 11/08/24 - 16/08/24

Scopus Subject Areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
