

Poster

Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

Romain Ilbert · Ambroise Odonnat · Vasilii Feofanov · Aladin Virmaux · Giuseppe Paolo · Themis Palpanas · Ievgen Redko


Abstract:

Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting. To better understand this phenomenon, we start by studying a toy linear forecasting problem for which we show that transformers are incapable of converging to their true solution despite their high expressive power. We further identify the attention mechanism of transformers as responsible for this low generalization capacity. Building upon this insight, we propose a shallow lightweight transformer model that successfully escapes bad local minima when optimized with sharpness-aware optimization. We empirically demonstrate that this result extends to all commonly used real-world multivariate time series datasets. In particular, our SAMformer surpasses the current state-of-the-art model TSMixer by 14.33% on average, while having ~4 times fewer parameters.
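
To make the sharpness-aware optimization mentioned in the abstract concrete, below is a minimal PyTorch sketch of Sharpness-Aware Minimization (SAM, Foret et al., 2021), which approximately solves min_w max_{||e||_2 <= rho} L(w + e) with a two-step update: an ascent step to a nearby high-loss point, then a descent step using the gradient taken there. This is a generic illustration of SAM, not the authors' SAMformer implementation; the class name, the wrapped base optimizer, and the radius rho=0.05 are assumptions made for the example.

```python
import torch


class SAM(torch.optim.Optimizer):
    """Minimal sketch of Sharpness-Aware Minimization (Foret et al., 2021).

    Wraps a base optimizer and performs a two-step update per batch:
    ascend to a nearby high-loss point, then descend with the gradient
    computed at that perturbed point.
    """

    def __init__(self, params, base_optimizer_cls, rho=0.05, **kwargs):
        # rho=0.05 is an assumed default, not a value from the paper.
        defaults = dict(rho=rho, **kwargs)
        super().__init__(params, defaults)
        # The base optimizer (e.g. Adam) performs the actual weight update.
        self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

    @torch.no_grad()
    def first_step(self):
        """Ascent step: perturb each weight by rho * grad / ||grad||."""
        grads = [p.grad.norm(p=2)
                 for group in self.param_groups
                 for p in group["params"] if p.grad is not None]
        grad_norm = torch.norm(torch.stack(grads), p=2)
        for group in self.param_groups:
            scale = group["rho"] / (grad_norm + 1e-12)
            for p in group["params"]:
                if p.grad is None:
                    continue
                e_w = p.grad * scale
                p.add_(e_w)                  # move to w + e
                self.state[p]["e_w"] = e_w   # remember the perturbation

    @torch.no_grad()
    def second_step(self):
        """Descent step: restore w, then update with the perturbed gradient."""
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                p.sub_(self.state[p]["e_w"])  # back to the original w
        self.base_optimizer.step()


# Hypothetical usage inside a training loop:
#   opt = SAM(model.parameters(), torch.optim.Adam, rho=0.05, lr=1e-3)
#   loss_fn(model(x), y).backward()
#   opt.first_step(); opt.zero_grad()
#   loss_fn(model(x), y).backward()   # gradient at the perturbed weights
#   opt.second_step(); opt.zero_grad()
```

Note that each SAM step requires two forward-backward passes, which is the usual trade-off for the flatter minima it targets.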
