## On the Convergence of Decentralized Gradient Descent

Kun Yuan, Qing Ling, and Wotao Yin

Published in *SIAM Journal on Optimization*, 2016

## Overview

Consider the consensus problem of minimizing $f(x) = \sum_{i=1}^n f_i(x)$, where each $f_i$ is known only to one individual agent $i$ in a connected network of $n$ agents. All the agents collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve data privacy.

We study the decentralized gradient descent (DGD) method, in which each agent $i$ updates its variable $x_{(i)}$, a local approximation of the unknown variable $x$, by combining the weighted average of its neighbors' variables with the negative gradient step $-\alpha \nabla f_i(x_{(i)})$. The iteration is

$$x_{(i)}(k+1) = \sum_{j} w_{ij}\, x_{(j)}(k) - \alpha \nabla f_i\big(x_{(i)}(k)\big), \quad \text{for each agent } i,$$

where the coefficients $w_{ij}$ form a symmetric doubly stochastic matrix $W = [w_{ij}]$. As agent $i$ does not communicate with non-neighbors, $w_{ij} \neq 0$ only if $i = j$ or agents $i$ and $j$ are neighbors.

We analyze the convergence of this iteration and derive its convergence rate, assuming that each $f_i$ is proper, closed, convex, and lower bounded, that $\nabla f_i$ is Lipschitz continuous with constant $L_{f_i}$, and that the stepsize $\alpha$ is fixed. Provided that $\alpha < O(1/L_h)$, where $L_h = \max_i \{L_{f_i}\}$, the objective error at the averaged solution, $f\big(\tfrac{1}{n}\sum_i x_{(i)}(k)\big) - f^*$, reduces at a speed of $O(1/k)$ until it reaches $O(\alpha)$. If the $f_i$ are further (restricted) strongly convex, then both $\tfrac{1}{n}\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge to the global minimizer $x^*$ at a linear rate until reaching an $O(\alpha)$-neighborhood of $x^*$. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an $O(\alpha)$-neighborhood of the true unknown sparse signal.
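To make the iteration concrete, here is a minimal simulation sketch (not from the paper). It assumes a toy setup: four agents on a ring, quadratic local objectives $f_i(x) = \tfrac{1}{2}(x - b_i)^2$ with made-up data $b_i$, and a hand-picked symmetric doubly stochastic mixing matrix $W$.

```python
import numpy as np

# Toy decentralized gradient descent (DGD) simulation.
# Assumed setup: 4 agents on a ring, f_i(x) = 0.5*(x - b_i)^2,
# so the global minimizer of sum_i f_i is mean(b).
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical local data held by each agent

# Symmetric doubly stochastic W for a 4-cycle: each agent mixes
# its own value with its two ring neighbors; w_ij = 0 for non-neighbors.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.05          # fixed stepsize; each grad f_i is 1-Lipschitz here
x = np.zeros(n)       # x[i] is agent i's local copy of the decision variable

for k in range(2000):
    grad = x - b                 # local gradients: grad f_i(x_i) = x_i - b_i
    x = W @ x - alpha * grad     # DGD update: mix with neighbors, step on local gradient

x_star = b.mean()                # global minimizer x*
print("local copies:", x)
print("global minimizer:", x_star)
```

Consistent with the result above, with a fixed stepsize the local copies do not reach exact consensus at $x^*$: the average $\tfrac{1}{n}\sum_i x_{(i)}$ matches $x^*$ here, while each $x_{(i)}$ settles in an $O(\alpha)$-neighborhood of it, and shrinking `alpha` shrinks that residual gap at the cost of slower progress.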
## Related papers

- PG-EXTRA for Decentralized Consensus Smooth+Proximable Optimization
- AsyncExtra extends PG-EXTRA to asynchronous computing and communication