On Markov Chain (Stochastic) Gradient Descent

Tao Sun, Yuejiao Sun, Wotao Yin

NeurIPS’18

Overview

Stochastic gradient methods are the workhorse algorithms of large-scale optimization problems in machine learning, signal processing, and other computational sciences and engineering.

This paper studies Markov chain gradient descent, a variant of stochastic gradient descent where the random samples are taken on the trajectory of a Markov chain.

Existing results on this method assume convex objectives and a reversible Markov chain, and thus have their limitations. We establish new non-ergodic convergence under wider step sizes, for nonconvex problems, and for non-reversible finite-state Markov chains. Nonconvexity makes our method applicable to broader problem classes. Non-reversible finite-state Markov chains, on the other hand, can mix substantially faster.
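To make the setting concrete, the following is a minimal sketch (not the paper's code) of Markov chain gradient descent on a toy least-squares objective: instead of drawing component indices i.i.d. as in plain SGD, each index is the next state of a Markov chain. The problem data, the transition matrix, and the diminishing step size are all illustrative assumptions.

```python
import numpy as np

# Toy objective: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2,
# constructed to be consistent so all components share a minimizer.
rng = np.random.default_rng(0)
n, d = 4, 3
a = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = a @ x_true

# Transition matrix of an irreducible, aperiodic Markov chain on the
# n component indices; here a lazy uniform chain, whose stationary
# distribution is uniform, so the chain averages match the full gradient.
P = 0.5 * np.eye(n) + 0.5 * np.full((n, n), 1.0 / n)

def grad_i(x, i):
    """Gradient of the i-th component 0.5 * (a_i^T x - b_i)^2."""
    return (a[i] @ x - b[i]) * a[i]

x = np.zeros(d)
state = 0
for k in range(1, 20001):
    # Sample the next index along the Markov chain trajectory
    # (the defining difference from i.i.d. stochastic gradient descent).
    state = rng.choice(n, p=P[state])
    eta = 0.5 / np.sqrt(k)  # diminishing step size (illustrative choice)
    x -= eta * grad_i(x, state)

print(np.linalg.norm(x - x_true))  # distance to the shared minimizer
```

Because the chain mixes and its stationary distribution is uniform over the components, the long-run sampling frequencies match those of uniform i.i.d. sampling, and the iterates approach the minimizer despite consecutive samples being correlated.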

To obtain these results, we introduce a new technique that varies the mixing levels of the Markov chains. The reported numerical results validate our contributions.

Citation

T. Sun, Y. Sun, and W. Yin, On Markov chain gradient descent, Advances in Neural Information Processing Systems (NeurIPS) 31, 9918–9927, 2018.
