Abstract
Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning. Using a large pre-trained language model (PLM), prefix-tuning can obtain strong performance by training only a small portion of parameters. In this paper, we propose to understand and further develop prefix-tuning through the kernel lens. Specifically, we make an analogy between prefixes and inducing variables in kernel methods and hypothesize that prefixes serving as inducing variables would improve the overall mechanism. From the kernel estimator perspective, we suggest a new variant of prefix-tuning, inducer-tuning, which shares the same mechanism as prefix-tuning while leveraging the residual form found in adapter-tuning. This design mitigates the initialization issue in prefix-tuning. Through comprehensive empirical experiments on natural language understanding and generation tasks, we demonstrate that inducer-tuning can close the performance gap between prefix-tuning and fine-tuning.
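The abstract summarizes the mechanism at a high level: prefix-tuning prepends trainable vectors that the frozen attention layers attend to, and inducer-tuning replaces those free prefixes with query-dependent vectors in an adapter-style residual form. The sketch below is only an illustration of that idea, not the paper's implementation; the helper names (`prefix_attention`, `ResidualInducer`) and the exact residual parameterization are assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def prefix_attention(q, k, v, prefix_k, prefix_v):
    """Single-head attention with trainable prefix keys/values prepended
    to the frozen model's keys and values (the prefix-tuning mechanism).

    q, k, v: (n, d) tensors from the frozen PLM layer.
    prefix_k, prefix_v: (m, d) trainable prefix vectors; these are the
    only parameters updated during prefix-tuning.
    """
    keys = torch.cat([prefix_k, k], dim=0)         # (m + n, d)
    values = torch.cat([prefix_v, v], dim=0)       # (m + n, d)
    scores = (q @ keys.T) / keys.shape[-1] ** 0.5  # (n, m + n)
    return F.softmax(scores, dim=-1) @ values      # (n, d)

class ResidualInducer(torch.nn.Module):
    """Hypothetical residual bottleneck: maps each query x to a
    query-dependent inducing variable x + W_up(relu(W_down(x))),
    in the adapter-style residual spirit described in the abstract.
    The paper's actual parameterization may differ."""

    def __init__(self, d, r=16):
        super().__init__()
        self.down = torch.nn.Linear(d, r)
        self.up = torch.nn.Linear(r, d)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))
```

If the up-projection is initialized near zero (a common adapter convention), the residual starts near the identity, so the inducing variables begin close to the queries themselves; this is one way a residual form could mitigate the initialization issue mentioned in the abstract.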
Original language | English |
---|---|
Title of host publication | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 793-808 |
Number of pages | 16 |
ISBN (Print) | 9781959429401 |
Publication status | Published - Dec 2022 |
Event | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - Abu Dhabi, United Arab Emirates. Duration: 7 Dec 2022 → 11 Dec 2022. https://2022.emnlp.org/ (Conference website); https://aclanthology.org/events/emnlp-2022/#2022emnlp-main (Conference proceedings) |
Conference
Conference | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 |
---|---|
Country/Territory | United Arab Emirates |
City | Abu Dhabi |
Period | 7/12/22 → 11/12/22 |
Internet address | https://2022.emnlp.org/ (Conference website); https://aclanthology.org/events/emnlp-2022/#2022emnlp-main (Conference proceedings) |
Scopus Subject Areas
- Computational Theory and Mathematics
- Computer Science Applications
- Information Systems