Multiview Orthonormalized Partial Least Squares: Regularizations and Deep Extensions

Li Wang, Ren-Cang Li, Wen-Wei Lin

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we establish a family of subspace-based learning methods for multiview learning using least squares as the fundamental basis. Specifically, we propose a novel unified multiview learning framework called multiview orthonormalized partial least squares (MvOPLS) to learn a classifier over a common latent space shared by all views. Regularization is further leveraged to unleash the power of the proposed framework by providing three types of regularizers on its basic ingredients: model parameters, decision values, and latent projected points. With a set of regularizers derived from various priors, we not only recast most existing multiview learning methods into the proposed framework with properly chosen regularizers but also propose two novel models. To further improve the performance of the proposed framework, we learn nonlinear transformations parameterized by deep networks. Extensive experiments are conducted on multiview datasets for both feature extraction and cross-modal retrieval. The results show that subspace-based learning over a common latent space is effective, that its nonlinear extension further boosts performance, and, most importantly, that one of the two proposed methods with the nonlinear extension outperforms all compared methods.
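For orientation, the following is a minimal, hypothetical sketch of the kind of least-squares multiview subspace learning the abstract describes: per-view linear projections map each view into a common latent space, a ridge-regularized least-squares classifier is fit over that space, and all ingredients are updated by alternating least squares. The function name, the specific objective, and the ridge parameter `lam` are illustrative assumptions, not the paper's exact MvOPLS formulation (which, per the title, imposes orthonormality constraints).

```python
import numpy as np

def mv_latent_least_squares(views, Y, k=10, lam=1e-2, n_iter=50, seed=0):
    """Toy alternating-least-squares sketch of multiview subspace learning.

    views : list of (n, d_v) arrays, one feature matrix per view
    Y     : (n, c) one-hot label matrix
    k     : latent dimension; lam : ridge weight (both hypothetical choices)
    """
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    Z = rng.standard_normal((n, k))                    # common latent points
    for _ in range(n_iter):
        # classifier over the latent space: B = argmin ||Y - Z B||^2 + lam ||B||^2
        B = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y)
        # per-view projections: W_v = argmin ||Z - X_v W_v||^2 + lam ||W_v||^2
        Ws = [np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)
              for X in views]
        # latent points trade off the label fit against all view reconstructions:
        # Z (B B^T + V I) = Y B^T + sum_v X_v W_v
        A = B @ B.T + len(views) * np.eye(k)
        rhs = Y @ B.T + sum(X @ W for X, W in zip(views, Ws))
        Z = np.linalg.solve(A.T, rhs.T).T
    return Ws, B, Z
```

The deep extension mentioned in the abstract can be pictured by swapping the linear projections for small per-view networks. Again, this is a schematic PyTorch sketch under assumed architectural choices (the hidden width, the mean fusion of views, and the loss terms are all illustrative), not the authors' model.

```python
import torch
import torch.nn as nn

class DeepView(nn.Module):
    """Hypothetical per-view encoder replacing a linear projection W_v."""
    def __init__(self, d_in, k):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, k))

    def forward(self, x):
        return self.net(x)

def deep_mv_loss(encoders, clf, views, Y, lam=1e-2):
    """Least-squares loss over a shared latent space with a ridge regularizer."""
    Zs = [enc(X) for enc, X in zip(encoders, views)]
    Z = torch.stack(Zs).mean(0)                        # fuse views in latent space
    fit = ((clf(Z) - Y) ** 2).mean()                   # least-squares classifier fit
    align = sum(((z - Z) ** 2).mean() for z in Zs)     # keep views consistent
    reg = lam * sum(p.pow(2).sum() for p in clf.parameters())
    return fit + align + reg
```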
Original language: English
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publication status: E-pub ahead of print - 12 Oct 2021

User-Defined Keywords

  • Deep learning
  • multiview learning
  • orthonormalized partial least squares (OPLS)
  • regularization
  • subspace learning
