Abstract
Tensor decomposition is a powerful tool for extracting physically meaningful latent factors from multi-dimensional non-negative data, and has attracted increasing interest in a variety of fields such as image processing, machine learning, and computer vision. In this paper, we propose a sparse non-negative Tucker decomposition and completion approach for recovering the underlying non-negative data from incomplete and generally noisy observations. The underlying non-negative tensor is decomposed into a core tensor and several factor matrices, with all entries being non-negative and the factor matrices being sparse. The loss function is derived by maximum likelihood estimation from the noisy observations, and a sparsity-promoting norm is employed to enhance the sparsity of the factor matrices. We establish the error bound of the estimator of the proposed model under generic noise scenarios, which is then specialized to observations with additive Gaussian noise, additive Laplace noise, and Poisson observations, respectively. Our theoretical results improve upon those of existing tensor-based and matrix-based methods. Moreover, the minimax lower bounds are shown to match the derived upper bounds up to logarithmic factors. Numerical experiments on both synthetic and real-world data sets demonstrate the superiority of the proposed method for non-negative tensor data completion.
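The Tucker model described above represents a tensor as a small non-negative core multiplied along each mode by a sparse non-negative factor matrix. A minimal NumPy sketch of this forward model (with hypothetical sizes and an illustrative hard-thresholding step to sparsify the factors; not the paper's estimation algorithm):

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    # n-mode product: contract matrix (J x I_mode) with tensor along `mode`,
    # replacing that mode's dimension I_mode by J
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 4 x 5 x 6 tensor with multilinear rank (2, 2, 2).
core = rng.random((2, 2, 2))                       # non-negative core tensor
factors = [rng.random((d, 2)) for d in (4, 5, 6)]  # non-negative factor matrices

# Illustrative sparsification: zero out small entries of each factor matrix
factors = [np.where(A > 0.5, A, 0.0) for A in factors]

# Reconstruct X = core x_1 A1 x_2 A2 x_3 A3
X = core
for mode, A in enumerate(factors):
    X = mode_product(X, A, mode)

# The model output inherits non-negativity from the core and factors
print(X.shape, bool((X >= 0).all()))
```

Non-negativity of the reconstruction follows directly because every entry of `X` is a sum of products of non-negative numbers; sparsity in the factors means each fiber of `X` depends on only a few latent components.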
| Original language | English |
|---|---|
| Article number | e70009 |
| Journal | Numerical Linear Algebra with Applications |
| Volume | 32 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 3 Feb 2025 |
User-Defined Keywords
- error bound
- maximum likelihood estimation
- noisy observations
- sparse non-negative Tucker decomposition