TY - JOUR
T1 - Domain Generalization
T2 - A Survey
AU - Zhou, Kaiyang
AU - Liu, Ziwei
AU - Qiao, Yu
AU - Xiang, Tao
AU - Loy, Chen Change
N1 - A*STAR
Industry Alignment Fund - Industry Collaboration Projects
10.13039/501100001809-National Natural Science Foundation of China (Grant Number: 61876176 and U1713208)
National Key Research and Development Program of China (Grant Number: 2020YFC2004800)
Science and Technology Service Network Initiative of Chinese Academy of Sciences (Grant Number: KFJ-STS-QYZX-092)
Shanghai Committee of Science and Technology, China (Grant Number: 20DZ1100800)
Publisher Copyright:
© 1979-2012 IEEE.
PY - 2023/4
Y1 - 2023/4
N2 - Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d. assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Over the last ten years, research in DG has made great progress, leading to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few; DG has also been studied in various application areas including computer vision, speech recognition, natural language processing, medical imaging, and reinforcement learning. In this paper, a comprehensive literature review of DG is provided for the first time, summarizing the developments over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other relevant fields such as domain adaptation and transfer learning. Then, we conduct a thorough review of existing methods and theories. Finally, we conclude this survey with insights and discussions on future research directions.
AB - Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d. assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Over the last ten years, research in DG has made great progress, leading to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few; DG has also been studied in various application areas including computer vision, speech recognition, natural language processing, medical imaging, and reinforcement learning. In this paper, a comprehensive literature review of DG is provided for the first time, summarizing the developments over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other relevant fields such as domain adaptation and transfer learning. Then, we conduct a thorough review of existing methods and theories. Finally, we conclude this survey with insights and discussions on future research directions.
KW - Out-of-distribution generalization
KW - domain shift
KW - model robustness
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85135751422&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2022.3195549
DO - 10.1109/TPAMI.2022.3195549
M3 - Journal article
C2 - 35914036
AN - SCOPUS:85135751422
SN - 0162-8828
VL - 45
SP - 4396
EP - 4415
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 4
ER -