TY - JOUR
T1 - Semi-Supervised Domain Generalization with Stochastic StyleMatch
AU - Zhou, Kaiyang
AU - Loy, Chen Change
AU - Liu, Ziwei
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/9
Y1 - 2023/9
AB - Ideally, visual learning algorithms should be generalizable, for dealing with any unseen domain shift when deployed in a new target environment; and data-efficient, for reducing development costs by using as few labels as possible. To this end, we study semi-supervised domain generalization (SSDG), which aims to learn a domain-generalizable model using multi-source, partially labeled training data. We design two benchmarks that cover state-of-the-art methods developed in two related fields, i.e., domain generalization (DG) and semi-supervised learning (SSL). We find that the DG methods, which by design are unable to handle unlabeled data, perform poorly with limited labels in SSDG; the SSL methods, especially FixMatch, obtain much better results but still fall far short of the basic vanilla model trained with full labels. We propose StyleMatch, a simple approach that extends FixMatch with a couple of new ingredients tailored for SSDG: (1) stochastic modeling for reducing overfitting with scarce labels, and (2) multi-view consistency learning for enhancing domain generalization. Despite its concise design, StyleMatch achieves significant improvements in SSDG. We hope our approach and the comprehensive benchmarks can pave the way for future research on generalizable and data-efficient learning systems. The source code is released at https://github.com/KaiyangZhou/ssdg-benchmark.
KW - Semi-Supervised Domain Generalization
KW - Image recognition
UR - http://www.scopus.com/inward/record.url?scp=85160959010&partnerID=8YFLogxK
U2 - 10.1007/s11263-023-01821-x
DO - 10.1007/s11263-023-01821-x
M3 - Journal article
AN - SCOPUS:85160959010
SN - 0920-5691
VL - 131
SP - 2377
EP - 2387
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 9
ER -