On the convergence of gradient descent for robust functional linear regression

Cheng Wang, Jun Fan*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Functional data analysis offers a set of statistical methods for extracting insights from intrinsically infinite-dimensional data and has attracted considerable attention over the past few decades. In this paper, we study a robust functional linear regression model with a scalar response and a functional predictor in the framework of reproducing kernel Hilbert spaces. A gradient descent algorithm with early stopping is introduced to solve the empirical risk minimization problem associated with robust loss functions. By appropriately selecting the early stopping rule and the scaling parameter of the robust losses, the convergence of the proposed algorithm is established when the response variable is bounded or satisfies a moment condition. Explicit learning rates for both the estimation and the prediction error are provided in terms of the regularity of the regression function and the eigenvalue decay rate of the integral operator induced by the reproducing kernel and the covariance function.
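The paper's algorithm operates in a reproducing kernel Hilbert space; the following is a minimal Python sketch of the general idea only, assuming the Huber loss as a representative robust loss with scaling parameter sigma, curves observed on a uniform grid, and a plain Riemann-sum discretization in place of the RKHS machinery. All names and defaults (gd_early_stopping, eta, sigma, T) are illustrative and not taken from the paper.

```python
import numpy as np

def huber_grad(r, sigma):
    """Derivative of the Huber loss with scale sigma, applied elementwise
    to residuals r: identity inside [-sigma, sigma], clipped outside."""
    return np.clip(r, -sigma, sigma)

def gd_early_stopping(X, y, t_grid, eta=10.0, sigma=1.0, T=300):
    """Gradient descent with early stopping for the discretized model
    y_i ~ integral of X_i(t) * beta(t) dt, curves sampled on a uniform grid.

    X      : (n, m) array of n curves observed at m grid points
    y      : (n,) array of scalar responses
    t_grid : (m,) uniform grid on which the curves are observed
    eta    : step size
    sigma  : scaling parameter of the robust (Huber) loss
    T      : early-stopping time, i.e. the number of iterations
    """
    n, m = X.shape
    dt = t_grid[1] - t_grid[0]          # uniform grid spacing
    beta = np.zeros(m)                  # slope function on the grid
    for _ in range(T):
        # Riemann-sum approximation of the integrals <X_i, beta>
        residuals = X @ beta * dt - y
        # Gradient of the empirical risk under the Huber loss
        grad = X.T @ huber_grad(residuals, sigma) * dt / n
        beta -= eta * grad
    return beta

# Toy usage: recover a smooth slope function from noisy curves
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
X = rng.standard_normal((200, 50))
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * (t[1] - t[0]) + 0.1 * rng.standard_normal(200)
beta_hat = gd_early_stopping(X, y, t, eta=50.0, sigma=1.0, T=500)
```

In this sketch the iteration count T plays the role of the regularizer: stopping early keeps the estimate from overfitting the noise, while sigma trades robustness against heavy-tailed responses for bias on well-behaved ones. This mirrors the abstract's point that the early stopping rule and the scaling parameter must be chosen jointly for convergence.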

Original language: English
Article number: 101858
Number of pages: 20
Journal: Journal of Complexity
Volume: 84
Early online date: 30 Apr 2024
Publication status: Published - Oct 2024

Scopus Subject Areas

  • Algebra and Number Theory
  • Statistics and Probability
  • Numerical Analysis
  • General Mathematics
  • Control and Optimization
  • Applied Mathematics

User-Defined Keywords

  • Gradient descent algorithm
  • Integral operator
  • Learning rate
  • Learning theory
  • Robust functional linear regression
