
Poster

Improved Differentially Private and Lazy Online Convex Optimization

Naman Agarwal · Satyen Kale · Karan Singh · Abhradeep Guha Thakurta


Abstract:

We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. The resulting regret guarantees improve upon previous results in terms of their dependence on the dimension. Additionally, unlike previous results, our algorithms and analysis do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, except when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities that satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous work on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also yields improved results for online convex optimization with a switching budget.
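To make the two mechanisms named in the abstract concrete, the following is a minimal Python sketch, not the authors' algorithm: each round it plays a sample from a strongly log-concave Gibbs density over the cumulative losses, drawn approximately with unadjusted Langevin dynamics, and it uses a Metropolis-style acceptance step as a stand-in for the paper's switch-limiting rejection sampling. The dimension, horizon, step sizes, quadratic regularizer, and toy linear losses are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 200            # dimension and horizon (illustrative)
eta, lam = 0.5, 1.0      # inverse temperature and regularization (illustrative)

def potential(x, grads_sum):
    # Potential U(x) = eta * <grads_sum, x> + (lam/2) * ||x||^2; the Gibbs
    # density exp(-U) is strongly log-concave, as in the abstract.
    return eta * grads_sum @ x + 0.5 * lam * x @ x

def grad_potential(x, grads_sum):
    return eta * grads_sum + lam * x

def langevin_sample(grads_sum, steps=200, step=1e-2):
    # Unadjusted Langevin dynamics: a standard approximate sampler for
    # strongly log-concave densities, standing in for the paper's sampler.
    x = np.zeros(d)
    for _ in range(steps):
        noise = rng.normal(size=d)
        x = x - step * grad_potential(x, grads_sum) + np.sqrt(2.0 * step) * noise
    return x

grads_sum = np.zeros(d)
x_play = langevin_sample(grads_sum)
switches = 0
for t in range(T):
    g = rng.normal(size=d)            # toy linear loss gradient for round t
    grads_sum += g
    x_new = langevin_sample(grads_sum)
    # Lazy update via a Metropolis-style acceptance rule (a stand-in for the
    # paper's rejection sampling): the current iterate is kept unless the
    # proposal wins the acceptance test, reducing the number of switches.
    log_ratio = potential(x_play, grads_sum) - potential(x_new, grads_sum)
    if rng.random() < np.exp(min(0.0, log_ratio)):
        x_play = x_new
        switches += 1

print(f"switches over {T} rounds: {switches}")
```

A faithful implementation would calibrate eta and the sampler's accuracy to the privacy budget; the sketch only illustrates the interplay between log-concave sampling and rejection-based laziness under a switching budget.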
