

Poster

Long-tail Learning with Foundation Model: Heavy Fine-tuning Hurts

Jiang-Xin Shi · Tong Wei · Zhi Zhou · Jie-Jing Shao · Xin-Yan Han · Yu-Feng Li


Abstract:

The fine-tuning paradigm for long-tail learning tasks has attracted significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we disclose that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, whereas lightweight fine-tuning is more effective. We attribute this to inconsistent class conditional distributions induced by heavy fine-tuning. Building on this observation, we develop LIFT, a low-complexity and accurate long-tail learning algorithm that enables fast prediction and compact models through adaptive lightweight fine-tuning. Experiments verify that, compared with state-of-the-art approaches, LIFT significantly reduces both training time and the number of learned parameters while achieving more accurate predictions.
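The contrast between heavy and lightweight fine-tuning can be illustrated by trainable-parameter counts: heavy fine-tuning updates the entire foundation model, while lightweight fine-tuning freezes the backbone and updates only a small module such as a classifier head. A minimal sketch, with purely illustrative layer sizes (not taken from the paper or LIFT itself):

```python
# Hedged sketch: contrast heavy vs. lightweight fine-tuning by counting
# trainable parameters. All shapes below are illustrative assumptions,
# not the configuration used by LIFT.

def param_count(shapes):
    """Total number of parameters across a list of (rows, cols) weight shapes."""
    return sum(rows * cols for rows, cols in shapes)

# Toy foundation model: a large backbone plus a small classifier head.
backbone_shapes = [(768, 768)] * 12   # hypothetical transformer-like blocks
head_shapes = [(768, 100)]            # linear classifier over 100 classes

# Heavy fine-tuning: every parameter is trainable.
heavy = param_count(backbone_shapes + head_shapes)

# Lightweight fine-tuning: backbone frozen, only the head is trainable.
light = param_count(head_shapes)

print(heavy, light)  # the lightweight variant trains ~1% of the parameters
```

In practice (e.g. with PyTorch), the same idea is realized by setting `requires_grad=False` on backbone parameters so the optimizer only updates the small added module.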
