More Iterations per Second, Same Quality - Why Asynchronous Algorithms may Drastically Outperform Traditional Ones

Robert Hannah and Wotao Yin

Submitted

Overview

In this paper, we consider the convergence of a very general asynchronous-parallel algorithm called ARock, which includes asynchronous versions of many well-known algorithms as special cases (gradient descent, proximal-gradient, Douglas–Rachford splitting, ADMM, etc.). In asynchronous-parallel algorithms, the computing nodes simply use the most recent information they have access to, instead of waiting for a full update from every node in the system. Nodes therefore do not waste time waiting for information, which can be a major bottleneck, especially in distributed systems. A toy sketch of this idea is given below.
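The following Python sketch is a minimal, single-process illustration of this idea, not the ARock algorithm itself: each update reads a possibly stale (outdated) copy of the iterate instead of waiting for all other updates to be applied. The least-squares problem, step size, and bounded-delay model are assumptions made only for this example.

```python
# Toy illustration: coordinate-wise updates computed from stale iterates.
# NOT the ARock algorithm from the paper; problem and delay model are assumed.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the full gradient
step = 1.0 / L

x = np.zeros(n)
history = [x.copy()]                 # recent iterates, used to simulate staleness
max_delay = 5                        # maximum staleness, in iterations

obj = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
print("initial objective:", obj(x))

for k in range(20000):
    # Read an outdated iterate, as an asynchronous node with delayed info would.
    d = rng.integers(0, min(max_delay, len(history)))
    stale = history[-1 - d]

    # Update a single random coordinate using the stale gradient.
    i = rng.integers(n)
    grad_i = A[:, i] @ (A @ stale - b)
    x[i] -= step * grad_i

    history.append(x.copy())
    if len(history) > max_delay:
        history.pop(0)

print("final objective:  ", obj(x))
```

Despite every update being computed from information that may be several iterations old, the objective still decreases steadily in this toy run, which is the qualitative behavior the paper analyzes rigorously.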

When the system has p identical nodes whose communication times are exponentially distributed, asynchronous algorithms may complete a factor of Θ(ln(p)) more iterations than synchronous algorithms in a given time period (‘‘more iterations per second’’). This factor becomes even larger when the nodes or their tasks are unequal.
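A small Monte-Carlo check of where this factor comes from is sketched below (an illustrative assumption for this page, not the paper's proof): with p identical nodes whose update times are i.i.d. exponential with mean 1, a synchronous round must wait for the slowest node, whose expected duration is the harmonic number H_p ≈ ln(p), whereas an asynchronous node never waits and averages one update per unit of time.

```python
# Monte-Carlo illustration of the Theta(ln(p)) throughput gap under
# i.i.d. unit-mean exponential update times (assumed delay model).
import numpy as np

rng = np.random.default_rng(1)
trials = 20000
for p in (4, 16, 64, 256):
    delays = rng.exponential(1.0, size=(trials, p))
    sync_round = delays.max(axis=1).mean()   # expected length of one synchronous round
    sync_rate = p / sync_round               # synchronous updates per unit time
    async_rate = p / 1.0                     # asynchronous updates per unit time
    H_p = np.sum(1.0 / np.arange(1, p + 1))
    print(f"p={p:4d}  iterations-per-second ratio ~ {async_rate / sync_rate:5.2f}"
          f"  (harmonic number H_p ~ {H_p:5.2f})")
```

The printed ratio tracks H_p closely, which grows like ln(p), matching the claimed factor.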

Although asynchronous algorithms may compute more iterations per second, there is inaccuracy associated with using outdated information. How many additional iterations this inaccuracy requires has been an open question. This paper answers it by proving that, as the size of the problem becomes large, the number of additional iterations that asynchronous algorithms need becomes a negligible fraction of the total (‘‘same quality’’ of the iterations). Taken together, our results provide solid evidence of the potential of asynchronous algorithms to vastly speed up certain distributed computations.

Citation

R. Hannah and W. Yin, More Iterations per Second, Same Quality - Why Asynchronous Algorithms may Drastically Outperform Traditional Ones. UCLA CAM Report 17-50, 2017.
