Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers

Yifan Chen, Yuejiao Sun, Wotao Yin

Submitted

Overview

Many optimization algorithms provably converge to stationary points. When the underlying problem is nonconvex, those algorithms may become trapped at non-global local minimizers or stagnate near saddle points.

We propose the Run-and-Inspect Method, which adds an “inspect” phase to existing algorithms to help them escape from local minimizers and stationary points that are not globally optimal. The inspection samples a set of points within a radius R around the current point. When a sample point yields a sufficient decrease in the objective, we move there and resume the base algorithm. If no sufficient decrease is found, the current point is called an approximate R-local minimizer. We show that an R-local minimizer is globally optimal, up to an error depending on R, if the objective function can be implicitly decomposed into a smooth convex function plus a restricted function that is possibly nonconvex and nonsmooth.
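To make the inspection phase concrete, here is a minimal Python sketch of one "inspect" step as described above. This is not the authors' code: the sufficient-decrease threshold `delta`, the sample budget `num_samples`, and the random sampling scheme are all hypothetical choices for illustration.

```python
import numpy as np

def inspect(f, x, R, delta, num_samples=100, rng=None):
    """One illustrative 'inspect' phase (a sketch, not the paper's implementation).

    Samples points within radius R of x. Returns the first sample that
    decreases f by at least delta (the caller then resumes the base
    algorithm from that point), or None, in which case x is declared an
    approximate R-local minimizer.
    """
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    for _ in range(num_samples):
        # random direction on the unit sphere, random radius in (0, R)
        d = rng.standard_normal(x.shape)
        y = x + R * rng.uniform() * d / np.linalg.norm(d)
        if f(y) <= fx - delta:
            return y   # sufficient decrease found: move here
    return None        # x is an approximate R-local minimizer
```

In practice one would alternate: run the base algorithm (e.g. gradient descent) to a stationary point, call `inspect`, and restart the run phase from the returned point whenever it is not `None`.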

For high-dimensional nonconvex problems, we introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality bounds up to a factor equal to the number of blocks.
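A blockwise inspection can be sketched the same way: perturb one block of coordinates at a time within radius R while holding the other blocks fixed, so the number of samples grows with the block dimension rather than the full dimension. As before, this is an illustrative sketch; `blocks`, `delta`, and `samples_per_block` are hypothetical parameters, not the paper's notation.

```python
import numpy as np

def blockwise_inspect(f, x, R, delta, blocks, samples_per_block=50, rng=None):
    """Illustrative blockwise 'inspect' sketch (not the authors' code).

    `blocks` is a list of coordinate-index lists partitioning x. Each
    block is perturbed within radius R with the other blocks fixed.
    Returns an escape point with sufficient decrease, or None if x is a
    blockwise approximate R-local minimizer.
    """
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    for idx in blocks:
        for _ in range(samples_per_block):
            y = x.copy()
            d = rng.standard_normal(len(idx))
            y[idx] += R * rng.uniform() * d / np.linalg.norm(d)
            if f(y) <= fx - delta:
                return y   # escape found by perturbing this block
    return None            # no block yields sufficient decrease
```

Sampling per block trades a weaker guarantee (the optimality bound picks up a factor equal to the number of blocks) for exponentially fewer samples in high dimensions.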


Our method performs well on a set of artificial and realistic nonconvex problems when coupled with gradient descent, coordinate descent, EM, and prox-linear algorithms.

Citation

Y. Chen, Y. Sun, and W. Yin, Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers. UCLA CAM Report 17-67, 2017.

