Poster

UP2ME: Univariate Pre-training to Multivariate Fine-tuning as a General-purpose Framework for Multivariate Time Series Analysis

Yunhao Zhang · Minghao Liu · Shengyang Zhou · Junchi Yan


Abstract:

Despite the success of self-supervised pre-training on text and images, applying it to multivariate time series (MTS) still lags behind methods tailored to tasks such as forecasting, imputation, and anomaly detection. In this work, we propose a general-purpose framework named UP2ME (Univariate Pre-training to Multivariate Fine-tuning). UP2ME conducts task-agnostic pre-training while downstream tasks are still unspecified. Once the task and its setting (e.g., forecasting length) are determined, it produces sensible solutions with its pre-trained parameters frozen, which has not been achieved before; these solutions are further refined by fine-tuning. Technically, a univariate-to-multivariate paradigm is devised to address the heterogeneity between temporal and cross-channel dependencies. In univariate pre-training, univariate instances of diverse lengths are generated for Masked AutoEncoder (MAE) pre-training, setting aside cross-channel dependency. The pre-trained model handles downstream tasks by formulating each as a specific mask-reconstruction problem. In multivariate fine-tuning, UP2ME constructs a dependency graph among channels using the pre-trained encoder to better capture cross-channel dependencies. Experiments on eight real-world datasets show that UP2ME achieves state-of-the-art results in forecasting and imputation and approaches the performance of task-specific methods in anomaly detection. The source code will be made publicly available.
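As a rough sketch of the paradigm described above: downstream tasks are cast as mask-reconstruction over univariate series (forecasting masks the future positions, imputation masks the missing ones), and a dependency graph among channels is built from pre-trained encoder embeddings. All function names, the cosine-similarity measure, and the top-k neighbour rule below are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the mask-reconstruction formulation and the
# channel dependency graph. Everything here is a hypothetical
# illustration of the idea, not UP2ME's actual implementation.
import numpy as np

def make_forecast_mask(history_len: int, horizon: int) -> np.ndarray:
    """Forecasting as mask-reconstruction: history is observed,
    the future horizon is masked (True = position to reconstruct)."""
    mask = np.zeros(history_len + horizon, dtype=bool)
    mask[history_len:] = True
    return mask

def make_imputation_mask(series: np.ndarray) -> np.ndarray:
    """Imputation as mask-reconstruction: mask wherever values are missing."""
    return np.isnan(series)

def build_channel_graph(channel_embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Dependency graph among channels from (frozen) encoder embeddings:
    cosine similarity between channels, keeping each channel's top-k
    neighbours (the similarity measure and k are our assumptions)."""
    norm = channel_embeddings / np.linalg.norm(
        channel_embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T                      # (C, C) cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self-loops
    adj = np.zeros_like(sim, dtype=bool)
    top_k = np.argsort(sim, axis=1)[:, -k:]  # k most similar channels per row
    np.put_along_axis(adj, top_k, True, axis=1)
    return adj

# Toy usage: 5 channels, 16-dim embeddings standing in for the output
# of a pre-trained encoder.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))
print(make_forecast_mask(history_len=8, horizon=4))
print(build_channel_graph(emb, k=2).astype(int))
```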
