GenFT: A Generative Parameter-Efficient Fine-Tuning Method for Pretrained Foundation Models

Baoquan Zhang, Guangning Xu*, Michael K. Ng

*Corresponding author for this work

Research output: Working paper › Preprint

Abstract

Pretrained Foundation Models (PFMs) have transformed numerous applications by enabling efficient adaptation to customized tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a resource-efficient alternative to full fine-tuning, notably by learning reparameterized weight updates ΔW to adapt models to downstream tasks. However, a critical yet underexplored question remains: can the well-pretrained weights W0 be used to guide the update of the task-specific ΔW, avoiding the inefficiency of training it from scratch? To this end, we propose Generative Parameter-Efficient Fine-Tuning (GenFT), a novel method that extracts structured, transferable information from W0 for efficient ΔW training. To capture row and column structure, GenFT applies row and column transformations that distill essential patterns from W0. A tailored policy further decomposes ΔW into layer-shared and layer-specific components, balancing information reuse with per-layer flexibility. GenFT is simple yet effective, achieving superior performance across CV and NLP tasks. Extensive experiments on the VTAB-1K, FGVC, and GLUE benchmarks demonstrate that GenFT outperforms state-of-the-art PEFT methods, offering a new perspective on efficient model adaptation.
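To make the abstract's idea concrete, below is a minimal PyTorch sketch of a GenFT-style adapter around a frozen linear layer. It illustrates the two ingredients the abstract describes: ΔW is generated from the frozen W0 via learnable row and column transformations rather than trained from scratch, and the update is split into a layer-shared part and a layer-specific part. The class name GenFTLinear, the specific factorization ΔW = (W0·C)·B + A·(R·W0), the rank, and the choice of which factors are shared are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenFTLinear(nn.Module):
    """Illustrative GenFT-style adapter (a sketch, not the paper's exact method).

    The pretrained weight W0 stays frozen; the task-specific update is
    generated from it:
        ΔW = (W0 @ C) @ B  +  A @ (R @ W0)
    where C and R distill column/row structure from W0, (R, A) may be
    shared across layers, and (C, B) stay layer-specific.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, shared=None):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # W0 (and bias) stay frozen
        out_f, in_f = base.weight.shape
        # Layer-specific column path: C compresses W0's columns, B expands back.
        self.C = nn.Parameter(torch.randn(in_f, rank) / rank**0.5)
        self.B = nn.Parameter(torch.zeros(rank, in_f))   # zero-init -> ΔW = 0
        # Layer-shared row path: pass the same (R, A) pair to every layer to reuse it.
        if shared is None:
            shared = (nn.Parameter(torch.randn(rank, out_f) / rank**0.5),
                      nn.Parameter(torch.zeros(out_f, rank)))
        self.R, self.A = shared

    def delta_w(self) -> torch.Tensor:
        w0 = self.base.weight                # (out_f, in_f), frozen
        col_part = (w0 @ self.C) @ self.B    # update built from column structure
        row_part = self.A @ (self.R @ w0)    # update built from row structure
        return col_part + row_part           # (out_f, in_f)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + F.linear(x, self.delta_w())
```

Because B and A are zero-initialized, ΔW is exactly zero at the start of fine-tuning, so the adapted model initially matches the pretrained one; only the low-rank factors are trained, while W0 is read but never updated.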
Original language: English
Publisher: Cornell University
DOIs
Publication status: Published - 21 May 2025

Publication series

Name: arXiv

User-Defined Keywords

  • cs.LG
