On the Convergence of Decentralized Gradient Descent

Kun Yuan, Qing Ling, and Wotao Yin

Published in SIAM Journal on Optimization


Consider the consensus problem of minimizing f(x)=sum_{i=1}^n f_i(x), where each f_i is known only to one individual agent i out of a connected network of n agents. All agents collaboratively solve this problem and obtain the solution via data exchange only between neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve data privacy.

We study the decentralized gradient descent method, in which each agent i updates its variable x_{(i)}, a local approximation of the unknown variable x, by combining a weighted average of its own and its neighbors' variables with the negative gradient step -alpha nabla f_i(x_{(i)}). The iteration is

x_{(i)}(k+1) = sum_{j} w_{ij} x_{(j)}(k) - alpha nabla f_i(x_{(i)}(k)),

for each agent i, where the coefficients w_{ij} form a symmetric doubly stochastic matrix. As agent i does not communicate with non-neighbors, w_{ij} ≠ 0 only if i=j or i and j are neighbors.
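The iteration above can be sketched numerically. The following is a minimal illustration, not the paper's experimental setup: it assumes a hypothetical 5-agent ring network with quadratic local objectives f_i(x) = 0.5(x - b_i)^2, and a Metropolis-style mixing matrix W, which is symmetric, doubly stochastic, and respects the ring topology.

```python
import numpy as np

n = 5                              # number of agents (illustrative choice)
b = np.arange(n, dtype=float)      # f_i(x) = 0.5*(x - b[i])**2, so nabla f_i(x) = x - b[i]
x_star = b.mean()                  # global minimizer of f = sum_i f_i

# Symmetric doubly stochastic mixing matrix on a ring: each agent
# averages itself with its two neighbors; w_{ij} = 0 for non-neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05                       # fixed stepsize, below 1/L_h (here L_h = 1)
x = np.zeros(n)                    # x[i] holds agent i's local copy x_{(i)}

for k in range(2000):
    grad = x - b                   # local gradients nabla f_i(x_{(i)})
    x = W @ x - alpha * grad       # DGD update: mix with neighbors, then step

# Each local copy ends up near (but not exactly at) x_star;
# the residual gap is controlled by the stepsize alpha.
print(np.abs(x - x_star).max())
```

Note that each agent's update uses only its own gradient and the variables of its immediate neighbors, which is what makes the method decentralized.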

We analyze the convergence of this iteration and derive its convergence rate, assuming that each f_i is proper, closed, convex, and lower bounded, that nabla f_i is Lipschitz continuous with constant L_{f_i}, and that the stepsize alpha is fixed. Provided that alpha < O(1/L_h), where L_h=max_i{L_{f_i}}, the objective error at the averaged solution, f(frac{1}{n}sum_i x_{(i)}(k))-f^*, decreases at a rate of O(1/k) until it reaches O(alpha). If the f_i are further (restricted) strongly convex, then both frac{1}{n}sum_i x_{(i)}(k) and each x_{(i)}(k) converge to the global minimizer x^* at a linear rate until reaching an O(alpha)-neighborhood of x^*. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an O(alpha)-neighborhood of the true unknown sparse signal.
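The O(alpha)-neighborhood behavior can be checked numerically in a toy setting. The sketch below again assumes a hypothetical 5-agent ring with strongly convex quadratics f_i(x) = 0.5(x - b_i)^2 (not the paper's experiments): with a fixed stepsize, the iterates converge to a point within O(alpha) of x^*, so shrinking alpha by 10x should shrink the residual error by roughly the same factor.

```python
import numpy as np

def dgd_error(alpha, n=5, iters=5000):
    """Run DGD on quadratic f_i(x) = 0.5*(x - b_i)^2 over a ring network
    and return the worst per-agent distance to the global minimizer."""
    b = np.arange(n, dtype=float)          # local data; minimizer is b.mean()
    W = np.zeros((n, n))                   # symmetric doubly stochastic ring
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = np.zeros(n)
    for _ in range(iters):
        x = W @ x - alpha * (x - b)        # DGD update with fixed stepsize
    return np.abs(x - b.mean()).max()      # distance of local copies to x^*

err_big = dgd_error(alpha=0.05)
err_small = dgd_error(alpha=0.005)
print(err_big, err_small)  # the smaller stepsize leaves a much smaller residual
```

This residual-versus-stepsize trade-off is the practical content of the O(alpha) result: a fixed stepsize buys speed at the cost of exact consensus, and only a diminishing stepsize (or a corrected method) drives the error to zero.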


K. Yuan, Q. Ling, and W. Yin, On the Convergence of Decentralized Gradient Descent, SIAM Journal on Optimization, 26(3):1835-1854, 2016. DOI: 10.1137/130943170
