TY - JOUR
T1 - Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery
AU - Luo, Yisi
AU - Zhao, Xile
AU - Li, Zhemin
AU - Ng, Michael K.
AU - Meng, Deyu
N1 - Funding information:
This work was supported in part by the National Key R&D Program of China under Grant 2020YFA0713900, in part by China NSFC projects under Grants 12371456, 62131005, 12171072, 61721002, and 12226004, and in part by the National Key Research and Development Program of China under Grant 2020YFA0714001. The work of Michael K. Ng was supported in part by HKRGC GRF under Grants 12300519, 17201020, and 17300021, in part by HKRGC CRF under Grants C1013-21GF and C7004-21GF, and in part by Joint NSFC and RGC under Grant N-HKU769/21.
Publisher copyright:
© 2023 IEEE.
PY - 2024/5
Y1 - 2024/5
N2 - Since higher-order tensors are naturally suitable for representing multi-dimensional data in the real world, e.g., color images and videos, low-rank tensor representation has become one of the emerging areas in machine learning and computer vision. However, classical low-rank tensor representations can only represent multi-dimensional discrete data on a meshgrid, which hinders their potential applicability in many scenarios beyond meshgrid. To break this barrier, we propose a low-rank tensor function representation (LRTFR) parameterized by multilayer perceptrons (MLPs), which can continuously represent data beyond meshgrid with powerful representation abilities. Specifically, the suggested tensor function, which maps an arbitrary coordinate to the corresponding value, can continuously represent data in an infinite real space. Parallel to discrete tensors, we develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization, and utilize MLPs to parameterize the factor functions of the tensor function factorization. We theoretically justify that both low-rank and smooth regularizations are harmoniously unified in LRTFR, which leads to high effectiveness and efficiency for continuous data representation. Extensive multi-dimensional data recovery applications arising from image processing (image inpainting and denoising), machine learning (hyperparameter optimization), and computer graphics (point cloud upsampling) substantiate the superiority and versatility of our method as compared with state-of-the-art methods. In particular, the experiments beyond the original meshgrid resolution (hyperparameter optimization) or even beyond meshgrid (point cloud upsampling) validate the favorable performance of our method for continuous representation.
AB - Since higher-order tensors are naturally suitable for representing multi-dimensional data in the real world, e.g., color images and videos, low-rank tensor representation has become one of the emerging areas in machine learning and computer vision. However, classical low-rank tensor representations can only represent multi-dimensional discrete data on a meshgrid, which hinders their potential applicability in many scenarios beyond meshgrid. To break this barrier, we propose a low-rank tensor function representation (LRTFR) parameterized by multilayer perceptrons (MLPs), which can continuously represent data beyond meshgrid with powerful representation abilities. Specifically, the suggested tensor function, which maps an arbitrary coordinate to the corresponding value, can continuously represent data in an infinite real space. Parallel to discrete tensors, we develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization, and utilize MLPs to parameterize the factor functions of the tensor function factorization. We theoretically justify that both low-rank and smooth regularizations are harmoniously unified in LRTFR, which leads to high effectiveness and efficiency for continuous data representation. Extensive multi-dimensional data recovery applications arising from image processing (image inpainting and denoising), machine learning (hyperparameter optimization), and computer graphics (point cloud upsampling) substantiate the superiority and versatility of our method as compared with state-of-the-art methods. In particular, the experiments beyond the original meshgrid resolution (hyperparameter optimization) or even beyond meshgrid (point cloud upsampling) validate the favorable performance of our method for continuous representation.
KW - Tensor factorization
KW - Multi-dimensional data
KW - Data recovery
UR - http://www.scopus.com/inward/record.url?scp=85179794644&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2023.3341688
DO - 10.1109/TPAMI.2023.3341688
M3 - Journal article
AN - SCOPUS:85179794644
SN - 0162-8828
VL - 46
SP - 3351
EP - 3369
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 5
ER -