Mind the Gap Between Prototypes and Images in Cross-domain Finetuning

Hongduan Tian, Feng Liu, Zhanke Zhou, Tongliang Liu, Chengqi Zhang, Bo Han*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

In cross-domain few-shot classification (CFC), recent works mainly focus on adapting a simple transformation head on top of a frozen pre-trained backbone with a few labeled samples to project embeddings into a task-specific metric space, where classification is performed by measuring similarities between image instance and prototype representations. Technically, such a framework implicitly assumes that the prototype and image instance embeddings share the same representation transformation. However, in this paper, we find that there naturally exists a gap, which resembles the modality gap, between the prototype and image instance embeddings extracted from the frozen pre-trained backbone, and simply applying the same transformation during the adaptation phase constrains the exploration of optimal representations and shrinks the gap between prototype and image representations. To solve this problem, we propose a simple yet effective method, contrastive prototype-image adaptation (CoPA), which adapts separate transformations for prototypes and images by treating prototypes as text prompts, similarly to CLIP. Extensive experiments on Meta-Dataset demonstrate that CoPA achieves state-of-the-art performance more efficiently. Meanwhile, further analyses also indicate that CoPA learns better representation clusters, enlarges the gap, and achieves minimal validation loss at the enlarged gap. The project repository of CoPA is available at: https://github.com/tmlr-group/CoPA.
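The abstract describes CoPA's core idea: instead of one shared transformation head, prototypes and image instances each get their own head, trained with a CLIP-style contrastive objective in which prototypes play the role of text prompts. The sketch below illustrates that idea in PyTorch; the class name, dimensions, and use of linear heads are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoPAHead(nn.Module):
    """Illustrative sketch of CoPA's core idea: two separate
    transformation heads, one for image instance embeddings and one
    for prototype embeddings, trained with a CLIP-style contrastive
    loss. Details (linear heads, dimensions) are assumptions."""
    def __init__(self, dim: int):
        super().__init__()
        self.image_head = nn.Linear(dim, dim)  # transformation for image embeddings
        self.proto_head = nn.Linear(dim, dim)  # separate transformation for prototypes
        self.log_temp = nn.Parameter(torch.tensor(0.0))  # learnable temperature

    def forward(self, image_emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Prototypes: class-wise mean of the frozen backbone's embeddings.
        classes = labels.unique()
        protos = torch.stack([image_emb[labels == c].mean(0) for c in classes])
        # Apply the two different transformations, then L2-normalize.
        z_img = F.normalize(self.image_head(image_emb), dim=-1)
        z_pro = F.normalize(self.proto_head(protos), dim=-1)
        # CLIP-style logits: temperature-scaled image-to-prototype cosine similarities.
        logits = z_img @ z_pro.t() / self.log_temp.exp()
        # Map original labels to their indices in the sorted `classes` tensor.
        targets = torch.bucketize(labels, classes)
        return F.cross_entropy(logits, targets)
```

At adaptation time, only these heads (and the temperature) would be updated on the few-shot support set while the pre-trained backbone stays frozen, which is what lets the two embedding families move apart and preserve the gap the paper identifies.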

Original language: English
Title of host publication: 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Editors: A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, C. Zhang
Publisher: Neural Information Processing Systems Foundation
Number of pages: 39
ISBN (Electronic): 9798331314385
Publication status: Published - Dec 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver Convention Center, Vancouver, Canada
Duration: 9 Dec 2024 - 15 Dec 2024
https://neurips.cc/Conferences/2024
https://openreview.net/group?id=NeurIPS.cc/2024
https://proceedings.neurips.cc/paper_files/paper/2024

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Volume: 37
ISSN (Print): 1049-5258
Name: NeurIPS Proceedings

Conference

Conference: 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Country/Territory: Canada
City: Vancouver
Period: 9/12/24 - 15/12/24
