Knowledge Distillation-Based Anomaly Detection via Adaptive Discrepancy Optimization

Ning Li, Ajian Liu, Zhenwei Zhu, Xuxin Lin, Hui Ma, Hong-Ning Dai, Yanyan Liang*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Knowledge distillation has emerged as a primary solution for anomaly detection, leveraging feature discrepancies between teacher–student (T–S) networks to locate anomalies. However, previous approaches suffer from ambiguous feature discrepancies, which hinder effective anomaly detection due to two main challenges: 1) overgeneralization, where the student network excessively mimics teacher features in anomalous regions, and 2) semantic bias between T–S networks in normal regions. To address these issues, we propose an Adaptive Discrepancy Optimization (Ado) block. The Ado block adaptively calibrates feature discrepancies by reducing overgeneralization in anomalous regions and selectively aligning semantic features in normal regions via learnable feature offsets. This versatile block can be seamlessly integrated into various distillation-based methods. Experimental results demonstrate that the Ado block significantly enhances performance across 11 different knowledge distillation frameworks on two widely used datasets. Notably, when integrated with the Ado block, RD4AD achieves a 22% relative improvement in pixel-level PRO on the VisA dataset. In addition, a real-world keyboard inspection application further validates the effectiveness of the Ado block.
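To make the idea concrete, below is a minimal, hypothetical sketch of how a plug-in discrepancy-calibration block of this kind might look in a PyTorch-style teacher–student pipeline. The class name `DiscrepancyCalibration`, its layer choices, and the gating scheme are illustrative assumptions for exposition only; they are not the paper's actual Ado implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiscrepancyCalibration(nn.Module):
    """Hypothetical plug-in block (not the paper's Ado code): it predicts a
    learnable feature offset from the current T-S discrepancy and applies it
    to the student features through a soft gate, so alignment is encouraged
    in normal-looking regions while anomalous regions keep a large gap."""

    def __init__(self, channels: int):
        super().__init__()
        # Learnable offset predicted from the concatenated T-S features.
        self.offset = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )
        # Gate in [0, 1]: close to 1 where features look normal (align),
        # close to 0 where they look anomalous (preserve the discrepancy).
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, t_feat: torch.Tensor, s_feat: torch.Tensor):
        joint = torch.cat([t_feat, s_feat], dim=1)
        delta = self.offset(joint)      # learnable feature offset
        g = self.gate(joint)            # soft normal/anomalous gate
        s_cal = s_feat + g * delta      # calibrate only where the gate is on
        # Per-pixel anomaly map from the cosine distance of calibrated features.
        anomaly = 1.0 - F.cosine_similarity(t_feat, s_cal, dim=1, eps=1e-6)
        return s_cal, anomaly


if __name__ == "__main__":
    # Dummy single-scale features; real distillation methods typically use
    # several scales, each with its own calibration block.
    block = DiscrepancyCalibration(channels=256)
    t = torch.randn(2, 256, 32, 32)     # frozen teacher features
    s = torch.randn(2, 256, 32, 32)     # student features
    s_cal, amap = block(t, s)
    print(s_cal.shape, amap.shape)      # (2, 256, 32, 32), (2, 32, 32)
```

Because the block only consumes a pair of same-shaped teacher and student feature maps and returns calibrated features plus a discrepancy map, a sketch like this could in principle be dropped after each feature-matching stage of an existing distillation framework without changing its backbone.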
Original language: English
Number of pages: 12
Journal: IEEE Transactions on Industrial Informatics
DOIs
Publication status: E-pub ahead of print - 11 Jun 2025

User-Defined Keywords

  • Anomaly detection
  • defect detection
  • knowledge distillation
  • overgeneralization problem
  • semantic bias

