Abstract
Detecting out-of-distribution (OOD) inputs plays a crucial role in guaranteeing the reliability of deep neural networks (DNNs) deployed in real-world scenarios. However, DNNs typically exhibit overconfidence on OOD samples, which is attributed to the similarity in patterns between OOD and in-distribution (ID) samples. To mitigate this overconfidence, advanced approaches incorporate auxiliary OOD samples during model training, assigning the outliers an equal likelihood of belonging to any category. However, identifying outliers that share patterns with ID samples poses a significant challenge. To address this challenge, we propose a novel method, Virtual Outlier Smoothing (VOSo), which constructs auxiliary outliers from ID samples, thereby eliminating the need to search for real OOD samples. Specifically, VOSo creates these virtual outliers by perturbing the semantic regions of ID samples and infusing patterns from other ID samples. For instance, a virtual outlier might consist of a cat's face with a dog's nose, where the cat's face serves as the semantic feature for model prediction. Meanwhile, VOSo adjusts the labels of virtual OOD samples according to the extent of semantic-region perturbation, reflecting the notion that virtual outliers may still contain ID patterns. Extensive experiments on diverse OOD detection benchmarks demonstrate the effectiveness of the proposed VOSo. Our code will be available at https://github.com/junz-debug/VOSo.
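The core mechanism described above, perturbing the semantic region of an ID sample with patterns from another ID sample and smoothing the label in proportion to the perturbation, can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation (see the repository above for the official code); the function name `make_virtual_outlier`, the mixing ratio `lam`, and the semantic `mask` are assumptions made for clarity.

```python
import numpy as np

def make_virtual_outlier(x_id, y_id, x_other, mask, lam, num_classes):
    """Illustrative sketch (not the authors' code).

    x_id, x_other : arrays of shape (H, W, C), two in-distribution images
    y_id          : integer class label of x_id
    mask          : binary array of shape (H, W, 1) marking the semantic region of x_id
    lam           : float in [0, 1], fraction of the semantic region kept from x_id
    """
    # Perturb only the semantic region: keep a lam-weighted share of the original
    # content there and infuse (1 - lam) of the other ID sample's patterns.
    x_virtual = x_id * (1 - mask) + mask * (lam * x_id + (1 - lam) * x_other)

    # Smooth the label according to the perturbation extent: the more ID content
    # remains (larger lam), the more probability mass stays on the original class;
    # the remainder is spread uniformly, reflecting partial out-of-distribution-ness.
    y_virtual = lam * np.eye(num_classes)[y_id] + (1 - lam) * np.full(num_classes, 1.0 / num_classes)
    return x_virtual, y_virtual
```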
Original language | English |
---|---|
Pages (from-to) | 724–741 |
Number of pages | 18 |
Journal | International Journal of Computer Vision |
Volume | 133 |
Early online date | 14 Aug 2024 |
DOIs | |
Publication status | E-pub ahead of print - 14 Aug 2024 |
Scopus Subject Areas
- Software
- Artificial Intelligence
- Computer Vision and Pattern Recognition
User-Defined Keywords
- Label smoothing
- Learning with virtual data
- Out-of-distribution detection
- Outlier exposure