
Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture

Chenqi Kong, Anwei Luo, Peijun Bao, Haoliang Li, Renjie Wan, Zengwei Zheng, Anderson Rocha, Alex C. Kot

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

Face forgery in open-set scenarios poses significant security threats and presents substantial challenges for existing detection models. These detectors primarily have two limitations: they generalize poorly to unknown forgery domains and adapt inefficiently to new data. To address these issues, we introduce an approach that is both general and parameter-efficient for face forgery detection. Our method builds on the assumption that different forgery source domains exhibit distinct style statistics. Specifically, we design a forgery-style-mixture formulation that augments the diversity of forgery source domains, enhancing the model's generalizability to unseen domains. In addition, previous methods typically require fully fine-tuning pre-trained networks, consuming substantial time and computational resources. Drawing on recent advances in vision transformers (ViTs) for face forgery detection, we develop a parameter-efficient ViT-based detection model that includes lightweight forgery feature extraction modules, enabling the model to extract global and local forgery clues simultaneously. During training we optimize only the inserted lightweight modules, keeping the original ViT structure and its pre-trained weights fixed. This training strategy effectively preserves the informative pre-trained knowledge while flexibly adapting the model to the task of Deepfake detection. Extensive experimental results demonstrate that the designed model achieves state-of-the-art generalizability with significantly fewer trainable parameters, representing an important step toward open-set Deepfake detection in the wild.
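The forgery-style-mixture idea described above can be illustrated with a short sketch. The paper does not publish its exact formulation here, so the following is a hypothetical implementation in the spirit of style-statistics mixing: per-channel feature means and standard deviations (the "style") of samples from different forgery source domains are interpolated with a Beta-sampled weight, synthesizing novel forgery styles while leaving the normalized content untouched. The function name and the `alpha` parameter are illustrative assumptions, not the authors' API.

```python
import torch

def forgery_style_mix(feats: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Hypothetical forgery-style mixture over a batch of feature maps.

    feats: (B, C, H, W) features, ideally drawn from different forgery
    source domains. Each sample's per-channel mean/std is interpolated
    with those of a randomly paired sample, producing augmented styles.
    """
    B = feats.size(0)
    mu = feats.mean(dim=(2, 3), keepdim=True)            # (B, C, 1, 1) style mean
    sig = feats.std(dim=(2, 3), keepdim=True) + 1e-6     # (B, C, 1, 1) style std
    normed = (feats - mu) / sig                          # style-normalized content

    perm = torch.randperm(B)                             # pair each sample with another
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1))
    mu_mix = lam * mu + (1.0 - lam) * mu[perm]           # interpolated style statistics
    sig_mix = lam * sig + (1.0 - lam) * sig[perm]
    return normed * sig_mix + mu_mix                     # re-style the content
```

Because only first- and second-order statistics are mixed, the augmentation is cheap and can be applied stochastically at intermediate layers during training only.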
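The parameter-efficient training strategy — freezing the pre-trained ViT and optimizing only small inserted modules — can likewise be sketched. This is a generic bottleneck-adapter pattern, not the paper's specific module design; the class name, bottleneck width, and helper function are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Generic lightweight bottleneck adapter (illustrative, not the paper's module).

    Down-projects, applies a nonlinearity, up-projects, and adds a residual.
    The up-projection is zero-initialized so the adapter starts as an
    identity mapping and does not perturb the pre-trained features.
    """
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # identity at initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def freeze_backbone_train_adapters(model: nn.Module) -> None:
    """Freeze all weights, then re-enable gradients for Adapter modules only."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, Adapter):
            for p in m.parameters():
                p.requires_grad = True
```

With this setup the optimizer is built from `filter(lambda p: p.requires_grad, model.parameters())`, so only the adapters consume optimizer state — the source of the "significantly fewer trainable parameters" claim.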
Original language: English
Number of pages: 17
Journal: IEEE Transactions on Circuits and Systems for Video Technology
DOIs
Publication status: Published - 2 Mar 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 16 - Peace, Justice and Strong Institutions

User-Defined Keywords

  • Deepfakes
  • face forgery detection
  • generalization
  • open-set
  • parameter-efficient learning
  • robustness
  • style mixture
