Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond

Qizhou Wang, Jin Peng Zhou, Zhanke Zhou, Saebyeol Shin, Bo Han*, Kilian Q. Weinberger

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

1 Citation (Scopus)

Abstract

Large language models (LLMs) should undergo rigorous audits to identify potential risks, such as copyright and privacy infringements. Once such risks emerge, timely updates are crucial to remove undesirable responses and ensure legal and safe model usage. This has spurred recent research into LLM unlearning, which focuses on erasing targeted undesirable knowledge without compromising the integrity of other, non-targeted responses. Existing studies have introduced various unlearning objectives that pursue LLM unlearning without necessitating complete retraining. However, each of these objectives has unique properties, and no unified framework is currently available to comprehend them thoroughly. To fill this gap, we propose a toolkit of the gradient effect (G-effect), which quantifies the impact of unlearning objectives on model performance from a gradient perspective. A notable advantage is its broad ability to detail unlearning impacts from various aspects: across instances, updating steps, and LLM layers. Accordingly, the G-effect offers new insights for identifying the drawbacks of existing unlearning objectives, further motivating us to explore a series of new solutions that mitigate them. Finally, we outline promising directions that merit further study, aiming to help the community advance this important field. The code is publicly available at: https://github.com/tmlr-group/G-effect.
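To make the gradient perspective concrete, the sketch below illustrates one plausible reading of the idea the abstract describes: measuring per-layer inner products between the gradient of an unlearning objective and the gradient of an evaluation loss, which approximate the first-order effect of one unlearning step on model performance. This is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that); the Hugging-Face-style `model(**batch).loss` interface, the choice of gradient ascent as the unlearning objective, and the function name `layerwise_g_effect` are all assumptions.

```python
import torch

def layerwise_g_effect(model, forget_batch, eval_batch, lr=1e-5):
    """Illustrative sketch (not the authors' code): per-parameter-tensor
    inner products between the gradient of a simple unlearning objective
    (gradient ascent, i.e., negated NLL on the forget set) and the gradient
    of an evaluation loss (NLL on a probe set). To first order, one step
    theta <- theta - lr * g_unlearn changes the evaluation loss by roughly
    -lr * <g_unlearn, g_eval>, so these values localize where an objective
    helps or harms performance across layers. Assumes a Hugging-Face-style
    model whose forward pass returns a `.loss` attribute."""
    params = [p for p in model.parameters() if p.requires_grad]
    names = [n for n, p in model.named_parameters() if p.requires_grad]

    # Gradient of the unlearning objective: ascent on the forget set.
    unlearn_loss = -model(**forget_batch).loss
    g_unlearn = torch.autograd.grad(unlearn_loss, params)

    # Gradient of the evaluation (performance) loss on the probe set.
    eval_loss = model(**eval_batch).loss
    g_eval = torch.autograd.grad(eval_loss, params)

    # Predicted first-order change in eval loss attributed to each tensor.
    return {n: (-lr * torch.sum(gu * ge)).item()
            for n, gu, ge in zip(names, g_unlearn, g_eval)}
```

Summing the returned values approximates the total first-order change in the evaluation loss after one update; running the probe on retained versus targeted data separately would distinguish harm to non-targeted responses from progress on removal, echoing the per-instance, per-step, per-layer breakdown the abstract mentions.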

Original language: English
Title of host publication: Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 43169-43203
Number of pages: 35
ISBN (Electronic): 9798331320850
Publication status: Published - 24 Apr 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore
Duration: 24 Apr 2025 - 28 Apr 2025
https://iclr.cc/Conferences/2025 (Conference website)
https://openreview.net/group?id=ICLR.cc/2025/Conference#tab-accept-oral (Conference proceedings)

Publication series

Name: International Conference on Learning Representations, ICLR

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
Period: 24/04/25 - 28/04/25
Internet address: https://iclr.cc/Conferences/2025
