TY - JOUR
T1 - Multiview Orthonormalized Partial Least Squares
T2 - Regularizations and Deep Extensions
AU - Wang, Li
AU - Li, Ren-Cang
AU - Lin, Wen-Wei
N1 - Funding information:
The work of Li Wang was supported by NSF under Grant DMS-2009689. The work of Ren-Cang Li was supported by NSF under Grant DMS-1719620 and Grant DMS-2009689. The work of Wen-Wei Lin was supported in part by the Ministry of Science and Technology (MOST) under Grant 106-2628-M-009-004, in part by the National Center for Theoretical Science (NCTS), and in part by the S.T. Yau Centre in Taiwan. (Corresponding author: Li Wang.)
Publisher copyright:
© 2021 IEEE
PY - 2023/8
Y1 - 2023/8
N2 - In this article, we establish a family of subspace-based learning methods for multiview learning using least squares as the fundamental basis. Specifically, we propose a novel unified multiview learning framework called multiview orthonormalized partial least squares (MvOPLS) to learn a classifier over a common latent space shared by all views. The regularization technique is further leveraged to unleash the power of the proposed framework by providing three types of regularizers on its basic ingredients: model parameters, decision values, and latent projected points. With a set of regularizers derived from various priors, we not only recast most existing multiview learning methods into the proposed framework with properly chosen regularizers but also propose two novel models. To further improve the performance of the proposed framework, we propose to learn nonlinear transformations parameterized by deep networks. Extensive experiments are conducted on multiview datasets in terms of both feature extraction and cross-modal retrieval. Results show that subspace-based learning of a common latent space is effective and that its nonlinear extension further boosts performance; more importantly, one of the two proposed methods with the nonlinear extension achieves better results than all compared methods.
AB - In this article, we establish a family of subspace-based learning methods for multiview learning using least squares as the fundamental basis. Specifically, we propose a novel unified multiview learning framework called multiview orthonormalized partial least squares (MvOPLS) to learn a classifier over a common latent space shared by all views. The regularization technique is further leveraged to unleash the power of the proposed framework by providing three types of regularizers on its basic ingredients: model parameters, decision values, and latent projected points. With a set of regularizers derived from various priors, we not only recast most existing multiview learning methods into the proposed framework with properly chosen regularizers but also propose two novel models. To further improve the performance of the proposed framework, we propose to learn nonlinear transformations parameterized by deep networks. Extensive experiments are conducted on multiview datasets in terms of both feature extraction and cross-modal retrieval. Results show that subspace-based learning of a common latent space is effective and that its nonlinear extension further boosts performance; more importantly, one of the two proposed methods with the nonlinear extension achieves better results than all compared methods.
KW - Deep learning
KW - multiview learning
KW - orthonormalized partial least squares (OPLS)
KW - regularization
KW - subspace learning
UR - http://www.scopus.com/inward/record.url?scp=85117339306&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2021.3116784
DO - 10.1109/TNNLS.2021.3116784
M3 - Journal article
SN - 2162-237X
VL - 34
SP - 4371
EP - 4385
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 8
ER -