

Poster

A decoder-only foundation model for time-series forecasting

Abhimanyu Das · Weihao Kong · Rajat Sen · Yichen Zhou


Abstract:

Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a decoder-style attention model with input patching, using a large time-series corpus comprising both real-world and synthetic datasets. Experiments on a diverse set of previously unseen forecasting datasets suggest that the model can yield accurate zero-shot forecasts across different domains, forecasting horizons, and temporal granularities.
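To make the described architecture concrete, the sketch below shows one plausible reading of a decoder-only attention model with input patching: the input series is split into fixed-length patches, each patch is embedded as a token, and a causally masked Transformer maps each token position to a multi-step forecast. The abstract gives no code or hyperparameters, so the patch length, horizon length, and model sizes here are illustrative assumptions rather than the authors' actual settings.

```python
import torch
import torch.nn as nn


class PatchedDecoderForecaster(nn.Module):
    """Minimal decoder-only forecaster over non-overlapping input patches.

    Each patch of `patch_len` consecutive time points is projected to a
    token; a causally masked Transformer then produces, at every patch
    position, a forecast of the next `horizon_len` points.
    """

    def __init__(self, patch_len=32, horizon_len=128, d_model=256,
                 n_heads=4, n_layers=4):
        super().__init__()
        self.patch_len = patch_len
        self.input_proj = nn.Linear(patch_len, d_model)      # patch -> token
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        # An encoder stack with a causal mask behaves as a decoder-only model.
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, horizon_len)   # token -> forecast

    def forward(self, series):
        # series: (batch, context_len), context_len divisible by patch_len.
        b, t = series.shape
        patches = series.reshape(b, t // self.patch_len, self.patch_len)
        tokens = self.input_proj(patches)
        # Boolean causal mask: True entries are blocked, so each patch
        # token attends only to itself and earlier patches.
        n = tokens.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        hidden = self.backbone(tokens, mask=mask)
        # One multi-step forecast per position; the last position's output
        # is the forecast for the unseen horizon.
        return self.output_proj(hidden)


if __name__ == "__main__":
    model = PatchedDecoderForecaster()
    context = torch.randn(8, 512)          # 8 series, 512 past points each
    forecasts = model(context)             # (8, 16, 128)
    print(forecasts[:, -1, :].shape)       # final 128-step forecast per series
```

In this setup, patching shortens the attention sequence (512 points become 16 tokens above), and emitting a multi-step forecast at every patch position lets pretraining supervise all positions at once, consistent with the decoder-style training the abstract describes.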
