TY - JOUR
T1 - Low-Rank Tensor Learning by Generalized Nonconvex Regularization
AU - Xia, Sijia
AU - Ng, Michael K.
AU - Zhang, Xiongjun
N1 - The research of M. K. Ng was supported in part by the Hong Kong Research Grant Council GRF 17300021, C7004-21GF and Joint NSFC-RGC N-HKU76921. The research of X. Zhang was supported in part by the National Natural Science Foundation of China under Grant No. 12171189, Hubei Provincial Natural Science Foundation of China under Grant No. 2025AFB966, and the Fundamental Research Funds for the Central Universities under Grant No. CCNU24ai002.
Publisher Copyright:
© 1979-2012 IEEE.
PY - 2026/1/5
Y1 - 2026/1/5
N2 - In this paper, we study the problem of low-rank tensor learning, where only a few training samples are observed and the underlying tensor has a low-rank structure. Existing methods are based on the sum of nuclear norms of the unfolding matrices of a tensor, which may be suboptimal. To exploit the low-rankness of the underlying tensor effectively, we propose a nonconvex model based on the transformed tensor nuclear norm for low-rank tensor learning. Specifically, a family of nonconvex functions is applied to the singular values of all frontal slices of a tensor in the transformed domain to characterize the low-rankness of the underlying tensor. An error bound between a stationary point of the nonconvex model and the underlying tensor is established under restricted strong convexity of the loss function (such as the least squares and logistic regression losses) and suitable regularity conditions on the nonconvex penalty function. By reformulating the nonconvex function as the difference of two convex functions, a proximal majorization-minimization (PMM) algorithm is designed to solve the resulting model. The global convergence and convergence rate of PMM are then established under mild conditions. Numerical experiments on tensor completion and binary classification demonstrate the effectiveness of the proposed method over other state-of-the-art methods.
AB - In this paper, we study the problem of low-rank tensor learning, where only a few training samples are observed and the underlying tensor has a low-rank structure. Existing methods are based on the sum of nuclear norms of the unfolding matrices of a tensor, which may be suboptimal. To exploit the low-rankness of the underlying tensor effectively, we propose a nonconvex model based on the transformed tensor nuclear norm for low-rank tensor learning. Specifically, a family of nonconvex functions is applied to the singular values of all frontal slices of a tensor in the transformed domain to characterize the low-rankness of the underlying tensor. An error bound between a stationary point of the nonconvex model and the underlying tensor is established under restricted strong convexity of the loss function (such as the least squares and logistic regression losses) and suitable regularity conditions on the nonconvex penalty function. By reformulating the nonconvex function as the difference of two convex functions, a proximal majorization-minimization (PMM) algorithm is designed to solve the resulting model. The global convergence and convergence rate of PMM are then established under mild conditions. Numerical experiments on tensor completion and binary classification demonstrate the effectiveness of the proposed method over other state-of-the-art methods.
KW - error bound
KW - Low-rank tensor learning
KW - nonconvex regularization
KW - proximal majorization-minimization
KW - transformed tensor SVD
UR - https://www.scopus.com/pages/publications/105027306729
U2 - 10.1109/TPAMI.2025.3650712
DO - 10.1109/TPAMI.2025.3650712
M3 - Journal article
AN - SCOPUS:105027306729
SN - 0162-8828
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -