Learning Where to Edit Vision Transformers

Yunqiao Yang, Long Kai Huang*, Shengzhuang Chen, Kede Ma, Ying Wei*

*Corresponding authors for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Model editing aims to data-efficiently correct predictive errors of large pre-trained models while ensuring generalization to neighboring failures and locality to minimize unintended effects on unrelated examples. While significant progress has been made in editing Transformer-based large language models, effective strategies for editing vision Transformers (ViTs) in computer vision remain largely unexplored. In this paper, we take initial steps towards correcting predictive errors of ViTs, particularly those arising from subpopulation shifts. Taking a locate-then-edit approach, we first address the “where-to-edit” challenge by meta-learning a hypernetwork on CutMix-augmented data generated for editing reliability. The trained hypernetwork produces generalizable binary masks that identify a sparse subset of structured model parameters in response to real-world failure samples. Afterward, we solve the “how-to-edit” problem by simply fine-tuning the identified parameters using a variant of gradient descent to achieve successful edits. To validate our method, we construct an editing benchmark that introduces subpopulation shifts towards natural underrepresented images and AI-generated images, thereby revealing the limitations of pre-trained ViTs for object recognition. Our approach not only achieves superior performance on the proposed benchmark but also allows for adjustable trade-offs between generalization and locality. Our code is available at https://github.com/hustyyq/Where-to-Edit.
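The "how-to-edit" step described in the abstract, i.e. fine-tuning only a sparse subset of parameters selected by a binary mask, can be illustrated with a minimal sketch. This is not the paper's implementation: in the actual method the mask is predicted by a meta-learned hypernetwork over structured ViT parameters, whereas here the mask is a hypothetical fixed array and the update is a single plain gradient-descent step.

```python
import numpy as np

def masked_gradient_step(params, grads, mask, lr=0.1):
    """Apply a gradient-descent update only to parameters selected by a
    binary mask; unmasked parameters are left untouched (locality)."""
    return params - lr * grads * mask

# Toy example: four parameters, of which only the first two are
# "located" for editing (hypothetical mask for illustration).
params = np.array([1.0, 2.0, 3.0, 4.0])
grads = np.array([0.5, 0.5, 0.5, 0.5])
mask = np.array([1.0, 1.0, 0.0, 0.0])  # sparse binary mask

updated = masked_gradient_step(params, grads, mask)
```

Only the masked entries move (1.0 → 0.95, 2.0 → 1.95); the remaining parameters are untouched, which is how the sparse mask limits unintended effects on unrelated examples.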

Original language: English
Title of host publication: 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Editors: A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, C. Zhang
Publisher: Neural Information Processing Systems Foundation
Pages: 1-29
Number of pages: 29
ISBN (Electronic): 9798331314385
Publication status: Published - Dec 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver Convention Center, Vancouver, Canada
Duration: 9 Dec 2024 - 15 Dec 2024
https://neurips.cc/Conferences/2024
https://openreview.net/group?id=NeurIPS.cc/2024
https://proceedings.neurips.cc/paper_files/paper/2024 (Conference Proceedings)

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 37
ISSN (Print): 1049-5258
Name: NeurIPS Proceedings

Conference

Conference: 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Country/Territory: Canada
City: Vancouver
Period: 9/12/24 - 15/12/24

