TY - JOUR
T1 - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization
AU - Yao, Quanming
AU - Wang, Yaqing
AU - Han, Bo
AU - Kwok, James T.
N1 - Funding Information:
BH was supported by the RGC Early Career Scheme No. 22200720 and NSFC Young Scientists Fund No. 62006202.
Publisher Copyright:
© 2022 Quanming Yao, Yaqing Wang, Bo Han, James T. Kwok.
PY - 2022/4/22
Y1 - 2022/4/22
AB - Nonconvex regularization has been widely used in low-rank matrix
learning. However, extending it to low-rank tensor learning remains
computationally expensive. To address this problem, we develop an
efficient solver for a nonconvex extension of the overlapped nuclear
norm regularizer. Based on the proximal average algorithm, the proposed
algorithm avoids expensive tensor folding/unfolding operations. A
special "sparse plus low-rank" structure is maintained throughout the
iterations and allows fast computation of the individual proximal
steps. Empirical convergence is further improved with the use of
adaptive momentum. We provide convergence guarantees to critical points
on smooth losses and also on objectives satisfying the
Kurdyka-Łojasiewicz condition. Although the optimization problem is
nonconvex and nonsmooth, we show that its critical points still have
good statistical performance on the tensor completion problem.
Experiments on various synthetic and real-world data sets show that the
proposed algorithm is efficient in both time and space, and is more
accurate than the existing state of the art.
KW - Low-rank tensor
KW - Nonconvex regularization
KW - Overlapped nuclear norm
KW - Proximal algorithm
KW - Proximal average algorithm
UR - https://www.jmlr.org/papers/v23/20-1368.html
UR - http://www.scopus.com/inward/record.url?scp=85131444001&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2205.03059
DO - 10.48550/arXiv.2205.03059
M3 - Journal article
AN - SCOPUS:85131444001
SN - 1532-4435
VL - 23
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
M1 - 136
ER -