TY - JOUR
T1 - Deep nonparametric estimation of intrinsic data structures by chart autoencoders
T2 - Generalization error and robustness
AU - Liu, Hao
AU - Havrilla, Alex
AU - Lai, Rongjie
AU - Liao, Wenjing
N1 - This research is partially supported by HKBU 179356, NSFC 12201530, HKRGC ECS 22302123, NSF DMS-2012652, NSF DMS-2145167 and NSF DMS-2134168.
Publisher Copyright:
© 2023 Elsevier Inc.
PY - 2024/1
Y1 - 2024/1
AB - Autoencoders have demonstrated remarkable success in learning low-dimensional latent features of high-dimensional data across various applications. Assuming that data are sampled near a low-dimensional manifold, we employ chart autoencoders, which encode data into low-dimensional latent features on a collection of charts, preserving the topology and geometry of the data manifold. Our paper establishes statistical guarantees on the generalization error of chart autoencoders, and we demonstrate their denoising capabilities by considering n noisy training samples, along with their noise-free counterparts, on a d-dimensional manifold. We show that trained chart autoencoders effectively denoise input data corrupted by normal noise. We prove that, under proper network architectures, chart autoencoders achieve a squared generalization error on the order of n^{-2/(2+d)} log^4(n), which depends on the intrinsic dimension of the manifold and only weakly on the ambient dimension and noise level. We further extend our theory to data with noise containing both normal and tangential components, where chart autoencoders still exhibit a denoising effect for the normal component. As a special case, our theory also applies to classical autoencoders, provided the data manifold admits a global parametrization. Our results provide a solid theoretical foundation for the effectiveness of autoencoders, which is further validated through several numerical experiments.
KW - Chart autoencoder
KW - Deep learning theory
KW - Dimension reduction
KW - Generalization error
KW - Manifold model
UR - http://www.scopus.com/inward/record.url?scp=85174397310&partnerID=8YFLogxK
U2 - 10.1016/j.acha.2023.101602
DO - 10.1016/j.acha.2023.101602
M3 - Journal article
AN - SCOPUS:85174397310
SN - 1063-5203
VL - 68
JO - Applied and Computational Harmonic Analysis
JF - Applied and Computational Harmonic Analysis
M1 - 101602
ER -