Abstract
Large language model (LLM) unlearning plays an essential role in removing privacy- and copyright-sensitive responses, which is crucial for the legal and safe deployment of LLMs. However, pursuing complete unlearning often incurs substantial costs by compromising the model's general functionality, leading to a notorious trade-off between unlearning and retention. This motivates us to explore enhanced unlearning schemes that mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure so that their side effects on other, unrelated responses are minimized. GRU is simple and general to implement, and demonstrates practical effectiveness across a variety of well-established unlearning benchmarks. Our code is available at https://github.com/tmlr-group/GRU.
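The abstract describes the mechanism only at a high level, so the sketch below is one plausible reading rather than the paper's actual algorithm: it assumes a projection-style rectification, in the spirit of gradient-surgery methods such as PCGrad, in which the unlearning gradient is stripped of any component that conflicts with the gradient of a retain objective. The function and variable names (`rectify`, `g_unlearn`, `g_retain`) are hypothetical.

```python
import torch

def rectify(g_unlearn: torch.Tensor, g_retain: torch.Tensor,
            eps: float = 1e-12) -> torch.Tensor:
    """Hypothetical gradient rectification (an assumption, not GRU's exact rule):
    if the unlearning gradient conflicts with the retain gradient (negative
    inner product), project out the conflicting component so that a descent
    step on the rectified gradient does not increase the retain loss to
    first order."""
    dot = torch.dot(g_unlearn, g_retain)
    if dot < 0:  # the raw unlearning step would hurt retention
        g_unlearn = g_unlearn - (dot / (torch.dot(g_retain, g_retain) + eps)) * g_retain
    return g_unlearn

# Toy check: when the raw gradients conflict, the rectified gradient is
# orthogonal to the retain gradient (inner product ~0).
g_u = torch.tensor([1.0, -2.0, 0.5])
g_r = torch.tensor([0.5, 1.0, -1.0])
print(torch.dot(rectify(g_u, g_r), g_r))
```

In practice the two gradients would be flattened from the model's parameters after separate backward passes on the forget and retain losses; under this assumed rule, the orthogonal projection guarantees that, to first order, the unlearning update no longer degrades retained behavior, which is the kind of trade-off mitigation the abstract claims.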
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 42nd International Conference on Machine Learning, ICML 2025 |
| Publisher | ML Research Press |
| Pages | 64690-64710 |
| Number of pages | 21 |
| Publication status | Published - Jul 2025 |
| Event | 42nd International Conference on Machine Learning, ICML 2025 |
| Venue | Vancouver Convention Center, Vancouver, Canada |
| Duration | 13 Jul 2025 → 19 Jul 2025 |
| Conference website | https://icml.cc/Conferences/2025 |
| Conference calendar | https://icml.cc/virtual/2025/calendar |
| Conference proceedings | https://proceedings.mlr.press/v267/ |
Publication series
| Name | Proceedings of Machine Learning Research |
|---|---|
| Publisher | ML Research Press |
| Volume | 267 |
Conference
| Conference | 42nd International Conference on Machine Learning, ICML 2025 |
|---|---|
| Country/Territory | Canada |
| City | Vancouver |
| Period | 13/07/25 → 19/07/25 |
| Internet address | https://icml.cc/Conferences/2025 |