TY - JOUR
T1 - On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
AU - Tao, Min
AU - Yuan, Xiaoming
N1 - Funding Information:
Min Tao was supported by the Natural Science Foundation of China: NSFC-11301280, 11471156. Xiaoming Yuan was supported by the General Research Fund from Hong Kong Research Grants Council: 12300515.
PY - 2018/2/1
Y1 - 2018/2/1
N2 - The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, the linear convergence results of this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar’s condition to see how to ensure the linear convergence rate of some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
AB - The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, the linear convergence results of this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar’s condition to see how to ensure the linear convergence rate of some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
KW - Alternating direction method of multipliers
KW - Augmented Lagrangian method
KW - Convex programming
KW - Linear convergence rate
KW - Proximal point algorithm
UR - http://www.scopus.com/inward/record.url?scp=85023203205&partnerID=8YFLogxK
U2 - 10.1007/s10915-017-0477-9
DO - 10.1007/s10915-017-0477-9
M3 - Journal article
AN - SCOPUS:85023203205
SN - 0885-7474
VL - 74
SP - 826
EP - 850
JO - Journal of Scientific Computing
JF - Journal of Scientific Computing
IS - 2
ER -