

Poster

Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms

Yuchen Li · Laura Balzano · Deanna Needell · Hanbaek Lyu


Abstract: We are interested in analyzing inexact Riemannian gradient descent (RGD), in which Riemannian gradients and retractions are computed inexactly (and cheaply). Our focus is on understanding when inexact RGD converges and what its complexity is in the general nonconvex and constrained setting. We answer these questions in a general framework of tangential Block Majorization-Minimization (tBMM). We establish that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations. Under a mild assumption, the results still hold when the subproblems are solved inexactly in each iteration, provided the total optimality gap is bounded. Our general analysis applies to a wide range of classical algorithms with Riemannian constraints, including inexact RGD and the proximal gradient method on Stiefel manifolds. We numerically validate that tBMM converges faster than classical algorithms when applied to low-rank matrix recovery problems.
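To make the setting concrete, below is a minimal NumPy sketch of inexact RGD on a Stiefel manifold, not the authors' tBMM implementation: the Euclidean gradient is perturbed to mimic inexact gradient evaluation, the tangent-space projection yields the Riemannian gradient, and a QR factorization serves as the retraction. The toy problem (leading subspace of $A^\top A$ as a stand-in for low-rank recovery), the function names, and the `grad_noise` parameter are illustrative assumptions, not from the paper.

```python
import numpy as np

def stiefel_qr_retraction(X, V):
    """Retract X + V back onto the Stiefel manifold via a QR factorization."""
    Q, R = np.linalg.qr(X + V)
    d = np.sign(np.diag(R))        # fix column signs so the retraction is well defined
    d[d == 0] = 1.0
    return Q * d

def riemannian_gradient(X, G):
    """Project a Euclidean gradient G onto the tangent space of the Stiefel manifold at X."""
    sym = (X.T @ G + G.T @ X) / 2
    return G - X @ sym

def inexact_rgd(grad_f, X0, step_size=0.1, n_iters=500, grad_noise=0.0, rng=None):
    """Riemannian gradient descent; `grad_noise` crudely models inexact gradient evaluation."""
    rng = rng or np.random.default_rng(0)
    X = X0
    for _ in range(n_iters):
        G = grad_f(X) + grad_noise * rng.standard_normal(X.shape)  # inexact Euclidean gradient
        V = -step_size * riemannian_gradient(X, G)                  # tangent-space step
        X = stiefel_qr_retraction(X, V)                             # retraction back to the manifold
    return X

# Toy instance: minimize f(X) = -0.5 * ||A X||_F^2 over the Stiefel manifold,
# i.e. recover the leading 3-dimensional subspace of A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) / np.sqrt(50)
X0, _ = np.linalg.qr(rng.standard_normal((20, 3)))
grad_f = lambda X: -(A.T @ (A @ X))
X_hat = inexact_rgd(grad_f, X0, step_size=0.1, grad_noise=1e-3)
print("objective:", -0.5 * np.linalg.norm(A @ X_hat) ** 2)
print("orthonormality error:", np.linalg.norm(X_hat.T @ X_hat - np.eye(3)))
```

The sketch illustrates the two sources of inexactness the abstract refers to: the gradient perturbation plays the role of a cheaply computed Riemannian gradient, and any approximate retraction could be substituted for the QR step.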
