

Poster

Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning

Sungmin Cha · Kyunghyun Cho · Taesup Moon


Abstract:

We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). PNR leverages pseudo-negative samples obtained through model-based augmentation so that newly learned representations do not contradict what was learned in the past. Specifically, for InfoNCE-based contrastive learning methods, we define symmetric pseudo-negatives obtained from the current and previous models and use them in both the main and regularization loss terms. Furthermore, we extend this idea to non-contrastive learning methods that do not necessarily use negative samples. In this case, the pseudo-negative is defined as the output of the previous model for a differently augmented view of the anchor and is applied asymmetrically to the regularization term. Through extensive experimental evaluations, we show that PNR achieves state-of-the-art representation learning performance by attaining an improved plasticity-stability trade-off.
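As a rough illustration of the contrastive variant only (the abstract does not give the exact loss), the sketch below augments a standard InfoNCE loss with pseudo-negatives taken from a frozen copy of the previous model's encoder. All names (`pnr_info_nce`, `z_prev`, `tau`) are hypothetical and chosen for exposition; this is a minimal sketch under those assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pnr_info_nce(z_cur, z_pos, z_prev, tau=0.1):
    """Illustrative InfoNCE loss with pseudo-negatives (hypothetical sketch).

    z_cur:  current-model embeddings of the anchors, shape (N, D)
    z_pos:  current-model embeddings of their positive views, shape (N, D)
    z_prev: embeddings of the same batch from a frozen previous model,
            shape (N, D), treated here as extra pseudo-negative candidates
    """
    z_cur = F.normalize(z_cur, dim=1)
    z_pos = F.normalize(z_pos, dim=1)
    z_prev = F.normalize(z_prev.detach(), dim=1)  # no gradient through the old model

    # Positive logits: each anchor against its own positive view.
    pos = torch.sum(z_cur * z_pos, dim=1, keepdim=True) / tau      # (N, 1)

    # Negative logits: other samples in the batch plus pseudo-negatives
    # produced by the previous model.
    neg_batch = z_cur @ z_pos.t() / tau                            # (N, N)
    neg_pseudo = z_cur @ z_prev.t() / tau                          # (N, N)

    # Mask out each anchor's own positive from the in-batch negatives.
    n = z_cur.size(0)
    mask = torch.eye(n, dtype=torch.bool, device=z_cur.device)
    neg_batch = neg_batch.masked_fill(mask, float("-inf"))

    logits = torch.cat([pos, neg_batch, neg_pseudo], dim=1)        # (N, 1 + 2N)
    labels = torch.zeros(n, dtype=torch.long, device=z_cur.device) # positive is column 0
    return F.cross_entropy(logits, labels)
```

In an actual CSSL pipeline, `z_prev` would come from a frozen snapshot of the encoder trained on previous tasks; the paper additionally uses the pseudo-negatives symmetrically in both the main and regularization terms between the current and previous models, which this toy sketch omits.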
