Poster

An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization

Emre Sahinoglu · Shahin Shahrampour


Abstract: We investigate the finite-time analysis of finding Goldstein stationary points for nonsmooth nonconvex objectives in decentralized stochastic optimization. A set of agents aims to minimize a global function using only local information, interacting over a network. We present a novel algorithm, called Multi Epoch Decentralized Online Learning (ME-DOL), for which we establish the sample complexity in various settings. First, using a recently proposed online-to-nonconvex technique, we show that our algorithm recovers the optimal convergence rate for smooth nonconvex objectives. We then extend our analysis to the nonsmooth setting, building on properties of randomized smoothing and Goldstein-subdifferential sets. We establish the rate of $O(\delta^{-1}\epsilon^{-3})$, which to the best of our knowledge is the first finite-time guarantee for general decentralized nonsmooth nonconvex objectives in the first-order oracle setting, matching its optimal centralized counterpart. We further prove the same rate for the zero-order oracle setting without using variance reduction.
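The zero-order oracle setting mentioned above relies on estimating gradients from function values alone. As a hedged illustration (not the authors' algorithm), the following sketch shows the standard two-point randomized-smoothing gradient estimator that underpins such analyses: averaging $\frac{d}{2\delta}\,(f(x+\delta u)-f(x-\delta u))\,u$ over random unit directions $u$ yields an unbiased estimate of the gradient of the $\delta$-smoothed surrogate of $f$. The function names and parameters here are illustrative choices, not from the paper.

```python
import numpy as np

def zeroth_order_grad(f, x, delta=1e-3, num_samples=1000, rng=None):
    """Two-point zeroth-order gradient estimator via randomized smoothing.

    Averages (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u over
    random unit directions u; in expectation this equals the gradient of
    the delta-smoothed version of f. Illustrative sketch, not ME-DOL itself.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        g += (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return g / num_samples

# Sanity check on a smooth function: f(x) = ||x||^2 has gradient 2x,
# and for a quadratic the smoothed gradient coincides with the true one.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 0.5])
est = zeroth_order_grad(f, x, num_samples=5000, rng=0)
```

For nonsmooth objectives, the same estimator is applied to the randomized-smoothing surrogate, whose stationary points relate to the Goldstein subdifferential of the original function.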
