Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning

Hao Liu, Biraj Dahal, Rongjie Lai, Wenjing Liao*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and problem size. In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, while also demonstrating the robustness of AENet to noise.
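The two-stage structure described in the abstract (an auto-encoder that learns a low-dimensional latent representation of the input functions, followed by a map from latent variables to outputs) can be sketched as follows. This is an illustrative NumPy forward pass with random, untrained weights, not the authors' implementation; the layer sizes, latent dimension `d`, and ReLU activation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def dense(in_dim, out_dim):
    # An illustrative single layer with random (untrained) weights.
    W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    b = np.zeros(out_dim)
    return lambda x: relu(x @ W + b)

# Discretized input/output functions sampled at D points; latent dimension d << D.
D, d = 256, 4

# Stage 1: an auto-encoder learns a d-dimensional latent representation of the inputs.
encoder = dense(D, d)
decoder = dense(d, D)

# Stage 2: a second network maps the latent variables to the output function.
latent_to_output = dense(d, D)

u = rng.standard_normal((10, D))   # batch of 10 discretized input functions
z = encoder(u)                     # latent variables, shape (10, d)
u_rec = decoder(z)                 # auto-encoder reconstruction, shape (10, D)
v_pred = latent_to_output(z)       # predicted output functions, shape (10, D)

print(z.shape, u_rec.shape, v_pred.shape)
```

The point of the sketch is the dimension flow: the operator is learned through a d-dimensional bottleneck rather than directly between D-dimensional discretizations, which is what ties the sample complexity to the intrinsic dimension.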

Original language: English
Article number: 101717
Number of pages: 42
Journal: Applied and Computational Harmonic Analysis
Volume: 74
Publication status: Published - Jan 2025

Scopus Subject Areas

  • Applied Mathematics

User-Defined Keywords

  • Auto-encoder
  • Deep learning theory
  • Generalization error
  • Model reduction
  • Operator learning
