TY - JOUR
T1 - Out-of-Distribution Detection with Virtual Outlier Smoothing
AU - Nie, Jun
AU - Luo, Yadan
AU - Ye, Shanshan
AU - Zhang, Yonggang
AU - Tian, Xinmei
AU - Fang, Zhen
N1 - Funding Information:
Open access funding provided by Hong Kong Baptist University Library. This work was supported in part by NSFC No. 62222117 and the Australian Research Council (DE240100105, DP24010181).
Publisher copyright:
© The Author(s) 2024
PY - 2024/08/14
Y1 - 2024/08/14
N2 - Detecting out-of-distribution (OOD) inputs plays a crucial role in guaranteeing the reliability of deep neural networks (DNNs) when deployed in real-world scenarios. However, DNNs typically exhibit overconfidence in OOD samples, which is attributed to the similarity in patterns between OOD and in-distribution (ID) samples. To mitigate this overconfidence, advanced approaches suggest incorporating auxiliary OOD samples during model training, where the outliers are assigned an equal likelihood of belonging to any category. However, identifying outliers that share patterns with ID samples poses a significant challenge. To address this challenge, we propose a novel method, Virtual Outlier Smoothing (VOSo), which constructs auxiliary outliers from ID samples, thereby eliminating the need to search for OOD samples. Specifically, VOSo creates these virtual outliers by perturbing the semantic regions of ID samples and infusing patterns from other ID samples. For instance, a virtual outlier might consist of a cat’s face with a dog’s nose, where the cat’s face serves as the semantic feature for model prediction. Meanwhile, VOSo adjusts the labels of virtual OOD samples based on the extent of semantic region perturbation, aligning with the notion that virtual outliers may contain ID patterns. Extensive experiments on diverse OOD detection benchmarks demonstrate the effectiveness of the proposed VOSo. Our code will be available at https://github.com/junz-debug/VOSo.
AB - Detecting out-of-distribution (OOD) inputs plays a crucial role in guaranteeing the reliability of deep neural networks (DNNs) when deployed in real-world scenarios. However, DNNs typically exhibit overconfidence in OOD samples, which is attributed to the similarity in patterns between OOD and in-distribution (ID) samples. To mitigate this overconfidence, advanced approaches suggest incorporating auxiliary OOD samples during model training, where the outliers are assigned an equal likelihood of belonging to any category. However, identifying outliers that share patterns with ID samples poses a significant challenge. To address this challenge, we propose a novel method, Virtual Outlier Smoothing (VOSo), which constructs auxiliary outliers from ID samples, thereby eliminating the need to search for OOD samples. Specifically, VOSo creates these virtual outliers by perturbing the semantic regions of ID samples and infusing patterns from other ID samples. For instance, a virtual outlier might consist of a cat’s face with a dog’s nose, where the cat’s face serves as the semantic feature for model prediction. Meanwhile, VOSo adjusts the labels of virtual OOD samples based on the extent of semantic region perturbation, aligning with the notion that virtual outliers may contain ID patterns. Extensive experiments on diverse OOD detection benchmarks demonstrate the effectiveness of the proposed VOSo. Our code will be available at https://github.com/junz-debug/VOSo.
KW - Label smoothing
KW - Learning with virtual data
KW - Out-of-distribution detection
KW - Outlier exposure
UR - http://www.scopus.com/inward/record.url?scp=85201240839&partnerID=8YFLogxK
U2 - 10.1007/s11263-024-02210-8
DO - 10.1007/s11263-024-02210-8
M3 - Journal article
SN - 0920-5691
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
ER -