On Nonconvex Decentralized Gradient Descent

Jinshan Zeng and Wotao Yin

Submitted

Overview

Consensus optimization has received considerable attention in recent years, and a number of decentralized algorithms have been proposed for convex consensus optimization. On nonconvex consensus optimization, however, our understanding of the behavior of these algorithms remains limited.

This note first analyzes the convergence of the Decentralized Gradient Descent (DGD) algorithm of Nedic and Ozdaglar applied to a consensus optimization problem with a smooth, possibly nonconvex objective function. Using a fixed step size below a proper bound, we establish that the DGD iterates converge to a stationary point of a Lyapunov function that approximates the original problem. The difference between each local point and the global average of all local points is bounded, and the bound is proportional to the step size.
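The DGD iteration combines one round of neighbor averaging with a local gradient step. The following is a minimal sketch on a toy problem of my own choosing (scalar quadratic local objectives on a ring network); the problem data, mixing matrix, and step size are all illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy DGD setup (illustrative): n agents minimize (1/n) * sum_i f_i(x),
# with f_i(x) = 0.5 * (x - b[i])**2, so the global minimizer is mean(b).
n = 5
b = np.arange(n, dtype=float)        # hypothetical local data

# Doubly stochastic mixing matrix for a ring: each agent averages
# itself with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05                         # fixed step size, kept small
x = np.zeros(n)                      # one scalar iterate per agent

for _ in range(2000):
    grad = x - b                     # grad f_i(x_i) = x_i - b[i]
    x = W @ x - alpha * grad         # DGD: mix with neighbors, then descend

# With a fixed step size, each x_i lands near mean(b), but the agents
# do not reach exact consensus: the spread around the average is O(alpha).
print(x, b.mean())
```

Shrinking `alpha` tightens the consensus error but slows convergence, which is the trade-off the fixed-step analysis quantifies.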

This note then establishes similar results for the algorithm Prox-DGD, which is designed to minimize the sum of a differentiable function and a proximable function. While both functions can be nonconvex, a larger fixed step size is allowed if the proximable function is convex.
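Prox-DGD replaces the plain gradient step with a proximal step on the proximable part. The sketch below uses an l1 penalty, whose prox is soft-thresholding, on the same toy ring network as an assumed example; the data, penalty weight, and step size are illustrative choices, not from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*|.|: shrink each entry of z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Toy Prox-DGD setup (illustrative): each agent i handles
# f_i(x) = 0.5 * (x - b[i])**2 (smooth) plus r(x) = mu * |x| (proximable).
n = 5
b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # hypothetical local data

W = np.zeros((n, n))                        # ring mixing matrix
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha, mu = 0.05, 0.1
x = np.zeros(n)

for _ in range(2000):
    grad = x - b                             # grad of the smooth part
    # Prox-DGD: mix, take a gradient step, then apply the prox of alpha*r.
    x = soft_threshold(W @ x - alpha * grad, alpha * mu)

print(x)
```

Here r is convex, which is the case where the note allows the larger fixed step size; a nonconvex proximable term (e.g., a hard-thresholding prox) fits the same template with a tighter step-size bound.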

Citation

J. Zeng and W. Yin, On Nonconvex Decentralized Gradient Descent, UCLA CAM 16-59, 2016.
