On the global and linear convergence of the generalized alternating direction method of multipliers (ADMM)

W. Deng and W. Yin

Published in the Journal of Scientific Computing. Selected for Computing Reviews’ 21st Annual Best of Computing.

Overview

A large number of optimization problems can be reformulated into the form of

\min_{x,y} \; f(x) + g(y), \quad \text{s.t.} \quad Ax + By = b,

where f and g are extended-real-valued convex functions. For problems of this form, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of subproblems involving f and g one at a time. However, its practical effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1/k) and O(1/k^2) had been established in the literature.
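As a concrete illustration (not taken from the paper), below is a minimal sketch of classical ADMM on a toy instance of the model above, with f(x) = 0.5||x - c||^2, g(y) = \lambda ||y||_1, A = I, B = -I, and b = 0; the function names, parameter values, and fixed iteration count are our own choices.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_toy(c, lam=0.1, beta=1.0, iters=200):
        """Classical ADMM for  min_{x,y} 0.5||x - c||^2 + lam*||y||_1  s.t.  x - y = 0.

        A toy instance of min f(x) + g(y) s.t. Ax + By = b with A = I, B = -I, b = 0,
        so both subproblems have closed forms. beta is the penalty parameter and
        u is the scaled dual variable (multiplier divided by beta).
        """
        x = np.zeros_like(c)
        y = np.zeros_like(c)
        u = np.zeros_like(c)
        for _ in range(iters):
            # x-subproblem: argmin_x 0.5||x - c||^2 + (beta/2)||x - y + u||^2
            x = (c + beta * (y - u)) / (1.0 + beta)
            # y-subproblem: argmin_y lam*||y||_1 + (beta/2)||x - y + u||^2
            y = soft_threshold(x + u, lam / beta)
            # scaled multiplier update for the constraint x - y = 0
            u = u + x - y
        return x, y

    x_star, y_star = admm_toy(np.array([1.0, -0.3, 0.05]))

Each iteration alternates a closed-form x-update, a soft-thresholding y-update, and a multiplier update; this alternating pattern is the one whose convergence rate the paper analyzes.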

In many common problems, one of the two objective functions is strongly convex and has a Lipschitz continuous gradient. This paper shows that, under this condition together with certain rank assumptions on A and B, global and linear convergence is guaranteed. The derived rate of convergence also provides theoretical guidance for choosing the ADM parameters.
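To fix terminology: linear (geometric) convergence of iterates u^k to a solution u^* means a per-iteration contraction of the form below; the precise quantity that contracts and the contraction factor established in the paper depend on the assumptions and algorithm parameters and are given there.

\|u^{k+1} - u^\ast\| \le \rho \, \|u^k - u^\ast\|, \quad 0 \le \rho < 1, \quad \text{which implies} \quad \|u^k - u^\ast\| \le \rho^k \, \|u^0 - u^\ast\|.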

The above results also apply to generalized ADMs, which allow the subproblems to be solved approximately and more cheaply, reducing the total running time and simplifying implementation. In addition to the classical subproblems, this paper covers the following variants:

  • prox-linear: suited to f or g with an easy proximal operator; avoids inverting A^TA or B^TB (see the sketch after this list)

  • gradient descent: uses \nabla f or \nabla g instead of their proximal operators

  • fast approximation of A or B: inverts \bar{A}^T\bar{A}, where \bar{A} \approx A, instead of A^TA; for example, one can invert a standard discrete Fourier operator instead of one defined on a nonstandard grid
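To illustrate the prox-linear variant, here is a hedged sketch on an assumed lasso-type splitting, min_{x,y} \lambda||x||_1 + 0.5||y - d||^2 s.t. Ax - y = 0, which is not an example taken from the paper. The exact x-subproblem has no closed form because x is coupled through ||Ax - ...||^2, but linearizing that quadratic term at x^k and adding (\tau/2)||x - x^k||^2 with \tau \ge \beta||A||^2 turns each x-update into a single soft-thresholding step, so A^TA is never inverted. The step-size rule and all names below are our own choices.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_linear_admm_lasso(A, d, lam=0.1, beta=1.0, iters=500):
        """Prox-linear ADMM sketch for  min_{x,y} lam*||x||_1 + 0.5||y - d||^2  s.t.  Ax - y = 0."""
        m, n = A.shape
        tau = 1.01 * beta * np.linalg.norm(A, 2) ** 2   # tau >= beta*||A||^2 (step-size rule)
        x = np.zeros(n)
        y = np.zeros(m)
        u = np.zeros(m)                                  # scaled multiplier
        for _ in range(iters):
            r = A @ x - y + u                            # current (scaled) residual
            # prox-linear x-update: one soft-thresholding step; no solve with A^T A
            x = soft_threshold(x - (beta / tau) * (A.T @ r), lam / tau)
            # y-subproblem: argmin_y 0.5||y - d||^2 + (beta/2)||Ax - y + u||^2 (closed form)
            y = (d + beta * (A @ x + u)) / (1.0 + beta)
            # scaled multiplier update for the constraint Ax - y = 0
            u = u + A @ x - y
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_hat = prox_linear_admm_lasso(A, A @ np.r_[np.ones(3), np.zeros(47)])

The same idea applies to the y-subproblem; the paper's analysis covers these inexact updates together with the classical ones.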

Citation

W. Deng and W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers, Journal of Scientific Computing 66(3):889-916, 2016. DOI: 10.1007/s10915-015-0048-x
