

Poster

Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models

Yifei Ming · Sharon Li


Abstract:

Pre-trained contrastive vision-language models have demonstrated remarkable performance across a wide range of tasks. However, they often struggle on fine-grained datasets with categories not adequately represented during pre-training, which makes adaptation necessary. Recent works have shown promising results by utilizing samples from web-scale databases for retrieval-augmented adaptation, especially in low-data regimes. Despite the empirical success, understanding how retrieval impacts the adaptation of vision-language models remains an open research question. In this work, we adopt a reflective perspective and present a systematic study to understand the roles of key components in retrieval-augmented adaptation. We unveil new insights on uni-modal and cross-modal retrieval and highlight the critical role of logit ensemble for effective adaptation. We further present theoretical underpinnings that directly support our empirical observations.
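The abstract highlights logit ensembling as a key ingredient of retrieval-augmented adaptation. The sketch below is a minimal, illustrative instantiation rather than the authors' exact method: zero-shot CLIP-style logits are combined with logits from a nearest-centroid classifier built on retrieved samples, with a mixing weight alpha. All function names, the centroid classifier, and the default parameters are assumptions for illustration.

```python
import numpy as np


def l2_normalize(x, axis=-1):
    # Normalize embeddings so dot products correspond to cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)


def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()


def zero_shot_logits(image_emb, text_emb, temperature=0.01):
    # Cosine similarity between an image embedding and per-class text
    # (prompt) embeddings, scaled by a temperature as in CLIP-style scoring.
    return l2_normalize(image_emb) @ l2_normalize(text_emb).T / temperature


def retrieval_logits(image_emb, retrieved_emb, retrieved_labels, num_classes,
                     temperature=0.01):
    # Illustrative retrieval-based classifier (an assumption, not the paper's
    # method): average the retrieved embeddings per class to form centroids,
    # then score the query image against each centroid.
    centroids = np.stack([
        retrieved_emb[retrieved_labels == c].mean(axis=0)
        for c in range(num_classes)
    ])
    return l2_normalize(image_emb) @ l2_normalize(centroids).T / temperature


def ensemble_logits(zs_logits, ret_logits, alpha=0.5):
    # Logit ensemble: convex combination of zero-shot and retrieval-based
    # logits; alpha trades off the two prediction sources.
    return alpha * zs_logits + (1.0 - alpha) * ret_logits


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, num_classes, num_retrieved = 512, 10, 160

    # Placeholder embeddings standing in for a frozen vision-language encoder.
    image_emb = rng.normal(size=dim)                       # query image embedding
    text_emb = rng.normal(size=(num_classes, dim))         # class-prompt embeddings
    retrieved_emb = rng.normal(size=(num_retrieved, dim))  # retrieved sample embeddings
    retrieved_labels = rng.integers(0, num_classes, size=num_retrieved)

    zs = zero_shot_logits(image_emb, text_emb)
    ret = retrieval_logits(image_emb, retrieved_emb, retrieved_labels, num_classes)
    probs = softmax(ensemble_logits(zs, ret, alpha=0.5))
    print("predicted class:", int(np.argmax(probs)))
```

In this toy setup the embeddings are random, so the prediction is meaningless; with real encoder outputs, sweeping alpha between 0 and 1 shows how much of the final decision comes from the zero-shot prior versus the retrieved samples.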
