## On Nonconvex Decentralized Gradient Descent

Jinshan Zeng and Wotao Yin

Published in IEEE Transactions on Signal Processing

## Overview

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, much less is known about their behavior on nonconvex problems. This note first analyzes the convergence of Decentralized Gradient Descent (DGD), the algorithm of Nedic and Ozdaglar, applied to a consensus optimization problem with a smooth, possibly nonconvex objective function. Using a fixed step size under a proper bound, we establish that the DGD iterates converge to a stationary point of a Lyapunov function that approximates one of the original problem. The difference between each local point and their global average is bounded in proportion to the step size. This note then establishes similar results for Prox-DGD, an algorithm designed to minimize the sum of a differentiable function and a proximable function. While both functions can be nonconvex, a larger fixed step size is allowed if the proximable function is convex.
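To make the DGD iteration concrete, here is a minimal sketch of the standard Nedic-Ozdaglar update, x^{k+1} = W x^k - alpha * grad f(x^k), on a toy three-agent problem. The ring-topology mixing matrix `W`, the quadratic local objectives, and the step size `alpha` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative setup (not from the paper): three agents on a ring, each with a
# local quadratic f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is mean(b).
W = np.array([[0.50, 0.25, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
b = np.array([1.0, 2.0, 3.0])

def grad(x):
    """Stacked local gradients: agent i only sees grad f_i at its own x_i."""
    return x - b

x = np.zeros(3)      # one local copy of the variable per agent
alpha = 0.05         # fixed step size, assumed to satisfy the required bound
for _ in range(2000):
    x = W @ x - alpha * grad(x)   # DGD: average with neighbors, then step

print(x)  # each x_i is near mean(b) = 2, up to an O(alpha) consensus error
```

Note that the local copies do not reach exact consensus: at the fixed point they disagree by an amount proportional to `alpha`, which mirrors the step-size-proportional bound stated in the overview above.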