TY - CONF
T1 - Out-of-distribution Detection with Implicit Outlier Transformation
AU - Wang, Qizhou
AU - Ye, Junjie
AU - Liu, Feng
AU - Dai, Quanyu
AU - Kalander, Marcus
AU - Liu, Tongliang
AU - Hao, Jianye
AU - Han, Bo
N1 - Funding Information:
QZW and BH were supported by NSFC Young Scientists Fund No. 62006202, Guangdong Basic and Applied Basic Research Foundation No. 2022A1515011652, RGC Early Career Scheme No. 22200720, RGC Research Matching Grant Scheme No. RMGS20221102, No. RMGS20221306 and No. RMGS20221309. BH was also supported by CAAI-Huawei MindSpore Open Fund and HKBU CSD Departmental Incentive Grant. TLL was partially supported by Australian Research Council Projects IC-190100031, LP-220100527, DP-220102121, and FT-220100318.
PY - 2023/5/1
Y1 - 2023/5/1
N2 - Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection, enhancing detection capability via model fine-tuning with surrogate OOD data. However, surrogate data typically deviate from test OOD data, so the performance of OE can be weakened when facing unseen OOD data. To address this issue, we propose a novel OE-based approach that makes the model perform well even in unseen OOD cases. It leads to a min-max learning scheme---searching to synthesize OOD data that lead to the worst judgments and learning from such OOD data for uniform performance in OOD detection. In our realization, these worst-case OOD data are synthesized by transforming the original surrogate ones, where the associated transform functions are learned implicitly based on our novel insight that model perturbation leads to data transformation. Our methodology offers an efficient way of synthesizing OOD data, which can further benefit the detection model beyond the surrogate OOD data alone. We conduct extensive experiments under various OOD detection setups, demonstrating the effectiveness of our method against its advanced counterparts.
AB - Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection, enhancing detection capability via model fine-tuning with surrogate OOD data. However, surrogate data typically deviate from test OOD data, so the performance of OE can be weakened when facing unseen OOD data. To address this issue, we propose a novel OE-based approach that makes the model perform well even in unseen OOD cases. It leads to a min-max learning scheme---searching to synthesize OOD data that lead to the worst judgments and learning from such OOD data for uniform performance in OOD detection. In our realization, these worst-case OOD data are synthesized by transforming the original surrogate ones, where the associated transform functions are learned implicitly based on our novel insight that model perturbation leads to data transformation. Our methodology offers an efficient way of synthesizing OOD data, which can further benefit the detection model beyond the surrogate OOD data alone. We conduct extensive experiments under various OOD detection setups, demonstrating the effectiveness of our method against its advanced counterparts.
UR - https://iclr.cc/virtual/2023/poster/12179
UR - http://www.scopus.com/inward/record.url?scp=85199893185&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2303.05033
DO - 10.48550/arXiv.2303.05033
M3 - Conference paper
SP - 1
EP - 22
T2 - 11th International Conference on Learning Representations, ICLR 2023
Y2 - 1 May 2023 through 5 May 2023
ER -