TY - JOUR
T1 - Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning
AU - Liu, Hao
AU - Dahal, Biraj
AU - Lai, Rongjie
AU - Liao, Wenjing
N1 - Funding Information:
This research is partially supported by National Natural Science Foundation of China 12201530, HKRGC ECS 22302123, NSF DMS-2401297, NSF DMS-2012652, NSF DMS-2145167, and DOE SC0024348.
Publisher Copyright:
© 2024 Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
PY - 2025/1
Y1 - 2025/1
N2 - Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of the data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and the problem size. In this paper, we exploit low-dimensional nonlinear structures in model reduction by investigating the Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to the corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operators of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, and it also demonstrates the robustness of AENet to noise.
AB - Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of the data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and the problem size. In this paper, we exploit low-dimensional nonlinear structures in model reduction by investigating the Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to the corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operators of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, and it also demonstrates the robustness of AENet to noise.
KW - Auto-encoder
KW - Deep learning theory
KW - Generalization error
KW - Model reduction
KW - Operator learning
UR - http://www.scopus.com/inward/record.url?scp=85207937859&partnerID=8YFLogxK
U2 - 10.1016/j.acha.2024.101717
DO - 10.1016/j.acha.2024.101717
M3 - Journal article
AN - SCOPUS:85207937859
SN - 1063-5203
VL - 74
JO - Applied and Computational Harmonic Analysis
JF - Applied and Computational Harmonic Analysis
M1 - 101717
ER -