

Poster

Small-loss Adaptive Regret for Online Convex Optimization

Wenhao Yang · Wei Jiang · Yibo Wang · Ping Yang · Yao Hu · Lijun Zhang


Abstract: To deal with changing environments, adaptive regret has been proposed to minimize the regret over every interval. Previous studies have established a small-loss adaptive regret bound for general convex functions under the smoothness condition, which has the advantage of being much tighter than minimax rates on benign problems. However, it remains unclear whether similar bounds are attainable for other types of convex functions, such as exp-concave and strongly convex functions. In this paper, we first propose a novel algorithm that achieves a small-loss adaptive regret bound for exp-concave and smooth functions. Then, to address the limitation that existing algorithms can only handle one type of convex function, we design a universal algorithm that delivers small-loss adaptive regret bounds for general convex, exp-concave, and strongly convex functions simultaneously. This is challenging because the universal algorithm follows the meta-expert framework, and we need to ensure that the upper bounds on both the meta-regret and the expert-regret are of the small-loss type. Moreover, we introduce two enhancements to the proposed algorithms: (i) a novel analysis showing that they also enjoy minimax adaptive regret bounds when the functions are non-smooth; (ii) novel surrogate losses that reduce the number of gradient queries per round from $O(\log^2 T)$ to $1$.
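For readers unfamiliar with the notion, adaptive regret is the worst-case static regret over every contiguous interval, so it penalizes a learner that reacts slowly when the environment shifts. The sketch below is a toy illustration of that definition only (hypothetical 1-D quadratic losses and a plain online-gradient-descent learner; it is not the paper's algorithm):

```python
# Toy illustration of adaptive regret (hypothetical setup, not the paper's method).
# Losses are 1-D quadratics f_t(x) = (x - z_t)^2. Adaptive regret is the largest
# static regret over any contiguous interval [r, s]: the learner's cumulative loss
# on the interval minus the loss of the best fixed comparator for that interval.

def adaptive_regret(zs, xs):
    """Brute-force adaptive regret of plays xs against quadratic targets zs."""
    T = len(zs)
    worst = 0.0
    for r in range(T):
        for s in range(r, T):
            z_int = zs[r:s + 1]
            x_int = xs[r:s + 1]
            learner = sum((x - z) ** 2 for x, z in zip(x_int, z_int))
            mean = sum(z_int) / len(z_int)  # interval-optimal fixed comparator
            best = sum((z - mean) ** 2 for z in z_int)
            worst = max(worst, learner - best)
    return worst

# Environment shifts from z = 0 to z = 1 halfway through the horizon.
zs = [0.0] * 20 + [1.0] * 20
x, eta, xs = 0.0, 0.1, []
for z in zs:
    xs.append(x)
    x -= eta * 2.0 * (x - z)  # OGD step on f_t(x) = (x - z_t)^2
```

A learner that tracks each interval well keeps this quantity small on every interval simultaneously, which is exactly what the adaptive regret bounds in the abstract control; a small-loss bound additionally scales with the comparator's cumulative loss rather than with the horizon length.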
