

Poster

Online Learning and Information Exponents: The Importance of Batch size & Time/Complexity Tradeoffs

Luca Arnaboldi · Yatin Dandi · Florent Krzakala · Bruno Loureiro · Luca Pesce · Ludovic Stephan


Abstract: We study the impact of the batch size $n_b$ on the iteration time $T$ of training two-layer neural networks with one-pass stochastic gradient descent (SGD) on multi-index target functions of isotropic covariates. We characterize the optimal batch size minimizing the iteration time as a function of the hardness of the target, as measured by the information exponents. We show that performing gradient updates with large batches $n_b \lesssim d$ minimizes the training time without changing the total sample complexity. However, larger batch sizes are detrimental for improving the time complexity of SGD. We provably overcome this fundamental limitation with a different training protocol, \textit{Correlation loss SGD}, which suppresses the auto-correlation terms in the loss function. We further show that the training progress can be tracked by a system of low-dimensional ordinary differential equations (ODEs). Finally, we validate our theoretical results with numerical experiments.
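The sketch below is a minimal illustration (not the authors' code) of the two training protocols contrasted in the abstract: one-pass SGD with the usual square loss versus a correlation loss in which the auto-correlation term $\hat f(x)^2$ is suppressed, so each update only uses the correlation between the target and the network output. The target (a single-index $\mathrm{He}_3$ function, information exponent 3), the spherical normalization of the first-layer weights, and all hyperparameters ($d$, $p$, $n_b$, learning rate, number of steps) are illustrative assumptions and not taken from the paper; the tracked teacher overlaps are the kind of low-dimensional summary statistics whose evolution the ODEs in the paper describe.

```python
# Minimal sketch: one-pass SGD on a two-layer network for a single-index
# target with information exponent 3, comparing the square loss with a
# correlation loss that drops the auto-correlation term f(x)^2.
# All hyperparameters are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

d, p = 256, 32                        # input dimension, hidden width (assumed)
n_b, lr, n_steps = 128, 1.0, 2000     # batch size n_b <~ d, learning rate, SGD steps

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)      # teacher direction on the unit sphere


def he3(z):
    """Hermite polynomial He_3(z) = z^3 - 3z: target with information exponent 3."""
    return z ** 3 - 3 * z


def target(X):
    # x ~ N(0, I_d) and ||w*|| = 1, so the pre-activation <w*, x> is standard normal
    return he3(X @ w_star)


def one_pass_sgd(loss="correlation"):
    # First-layer weights on the unit sphere; second layer fixed to ones
    W = rng.standard_normal((p, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    a = np.ones(p)

    overlaps = []
    for _ in range(n_steps):
        X = rng.standard_normal((n_b, d))   # fresh batch every step (one-pass SGD)
        y = target(X)
        pre = X @ W.T                       # (n_b, p) pre-activations
        f = np.tanh(pre) @ a / p            # student output

        # Square loss 1/2 (f - y)^2 gives the error factor (f - y);
        # correlation loss -y * f gives -y, suppressing the f^2 auto-correlation term.
        err = (f - y) if loss == "square" else -y

        # Gradient of the per-sample loss w.r.t. the first-layer weights, averaged over the batch
        D = err[:, None] * (1.0 - np.tanh(pre) ** 2) * (a / p)
        grad_W = D.T @ X / n_b
        W -= lr * grad_W
        W /= np.linalg.norm(W, axis=1, keepdims=True)   # spherical constraint (assumed)

        # Teacher overlaps m_j = <w_j, w*> are the low-dimensional summary statistics
        overlaps.append(np.max(np.abs(W @ w_star)))
    return overlaps


for loss in ("square", "correlation"):
    m = one_pass_sgd(loss)
    print(f"{loss:11s} loss: max teacher overlap after {n_steps} steps = {m[-1]:.3f}")
```

With this spherical, single-index setup, the overlap trajectory is the natural quantity to compare across losses and batch sizes; any quantitative behavior of this toy run should not be read as reproducing the paper's results.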
