Proximal average approximated incremental gradient method for composite penalty regularized empirical risk minimization

Yiu Ming Cheung*, Jian Lou

*Corresponding author for this work

Research output: Contribution to conference › Conference paper › peer-review

Abstract

Proximal average (PA) is a recently proposed approximation technique for handling non-smooth composite regularizers in empirical risk minimization problems. For a non-smooth composite regularizer, it is often difficult to derive the corresponding proximal update directly when solving with popular proximal gradient methods. While traditional approaches resort to complex splitting methods such as ADMM, proximal average provides an alternative that is tractable in both implementation and theoretical analysis. Nevertheless, compared with ADMM-based methods such as SDCA-ADMM and SAG-ADMM, which achieve fast convergence rates and low per-iteration complexity, existing PA-based approaches either converge slowly (e.g., PA-ASGD) or suffer from high per-iteration cost (e.g., PA-APG). In this paper, we therefore propose a new PA-based algorithm called PA-SAGA, which is optimal in both convergence rate and per-iteration cost, by incorporating the proximal average into an incremental gradient-based framework.
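To illustrate the general idea described in the abstract, the following is a minimal, hypothetical sketch of how a proximal-average step might be combined with a SAGA-style incremental gradient update. It assumes the composite regularizer is a convex combination of simple penalties (here, L1 and squared-L2), and approximates the intractable composite proximal operator by the weighted average of the individual proximal operators. All function names and parameters below are illustrative assumptions, not the paper's actual PA-SAGA specification.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l2_sq(v, t):
    """Proximal operator of t * 0.5 * ||.||_2^2 (shrinkage)."""
    return v / (1.0 + t)

def proximal_average(v, step, proxes, alphas):
    """Approximate the prox of g(x) = sum_k alpha_k g_k(x) by averaging
    the individual prox operators (the proximal average surrogate)."""
    return sum(a * p(v, step) for p, a in zip(proxes, alphas))

def pa_saga_sketch(grad_i, n, x0, step, proxes, alphas, epochs=10, seed=0):
    """Illustrative PA + SAGA-style loop (not the paper's exact algorithm).

    grad_i(x, i): gradient of the i-th smooth loss component at x.
    n: number of loss components; x0: initial point; step: step size.
    """
    x = x0.copy()
    table = np.array([grad_i(x, i) for i in range(n)])  # stored component gradients
    avg = table.mean(axis=0)
    rng = np.random.default_rng(seed)
    for _ in range(epochs * n):
        i = rng.integers(n)
        g_new = grad_i(x, i)
        # SAGA variance-reduced gradient estimate
        v = g_new - table[i] + avg
        avg += (g_new - table[i]) / n
        table[i] = g_new
        # proximal-average step in place of the composite proximal operator
        x = proximal_average(x - step * v, step, proxes, alphas)
    return x
```

For example, with `proxes = [prox_l1, prox_l2_sq]` and `alphas = [0.5, 0.5]`, each iteration costs one component gradient plus two cheap proximal maps, which is the kind of low per-iteration cost the abstract contrasts with ADMM-based splitting.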

Original language: English
Pages: 205-220
Number of pages: 16
Publication status: Published - 2015
Event: 7th Asian Conference on Machine Learning, ACML 2015 - Hong Kong, Hong Kong
Duration: 20 Nov 2015 - 22 Nov 2015

Conference

Conference: 7th Asian Conference on Machine Learning, ACML 2015
Country/Territory: Hong Kong
City: Hong Kong
Period: 20/11/15 - 22/11/15

Scopus Subject Areas

  • Software
  • Computer Networks and Communications
  • Artificial Intelligence
