

Poster

MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions

Kai Zhang · Yi Luan · Hexiang Hu · Kenton Lee · Siyuan Qiao · Wenhu Chen · Yu Su · Ming-Wei Chang


Abstract:

Image retrieval, i.e., finding desired images given a reference image, inherently encompasses rich, multi-faceted search intents that are difficult to capture solely with image-based measures. Recent work leverages text instructions to allow users to express their search intents more freely. However, existing work primarily focuses on image pairs that are visually similar and/or can be characterized by a small set of pre-defined relations. The core thesis of this paper is that text instructions can enable retrieving images with richer relations beyond visual similarity. To show this, we introduce MagicLens, a self-supervised image retrieval model that supports open-ended instructions. MagicLens is built on a key novel insight: image pairs that naturally co-occur on the same web pages contain a wide range of implicit relations (e.g., "inside view of"), and we can make those implicit relations explicit by synthesizing instructions via large multimodal models (LMMs) and large language models (LLMs). Trained on 36.7M (query image, instruction, target image) triplets with rich semantic relations mined from the web, MagicLens achieves strong results on five benchmarks representative of various image retrieval tasks. Remarkably, it outperforms the previous state of the art on three challenging tasks (CIRCO, GeneCIS, and Domain Transfer ImageNet) while being 50× smaller in model size. Additional human analyses on an unseen corpus of 1.4M images further demonstrate the diversity of search intents supported by MagicLens.
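To make the triplet-mining idea concrete, below is a minimal sketch of how images that co-occur on one web page could be turned into (query image, instruction, target image) training triplets. All names here (caption_with_lmm, synthesize_instruction, mine_triplets, Triplet) are hypothetical placeholders for illustration, not MagicLens's actual APIs; the prompts and models used in the real pipeline are described in the paper.

```python
import itertools
from dataclasses import dataclass


@dataclass
class Triplet:
    query_image: str   # path or URL of the query image
    instruction: str   # synthesized open-ended text instruction
    target_image: str  # path or URL of the target image


def caption_with_lmm(image: str) -> str:
    """Placeholder for a large multimodal model (LMM) call that
    describes an image; stubbed here so the sketch is runnable."""
    return f"caption of {image}"


def synthesize_instruction(query_caption: str, target_caption: str) -> str:
    """Placeholder for a large language model (LLM) call that turns the
    implicit relation between two co-occurring images into an explicit
    instruction."""
    return (
        f"find an image related to this one as "
        f"'{target_caption}' relates to '{query_caption}'"
    )


def mine_triplets(page_images: list[str]) -> list[Triplet]:
    """Pair up images that co-occur on the same web page and attach a
    synthesized instruction, yielding (query, instruction, target) triplets."""
    triplets = []
    for query, target in itertools.permutations(page_images, 2):
        instruction = synthesize_instruction(
            caption_with_lmm(query), caption_with_lmm(target)
        )
        triplets.append(Triplet(query, instruction, target))
    return triplets


if __name__ == "__main__":
    # Two images from one hypothetical page: a house exterior and its interior.
    for t in mine_triplets(["house_front.jpg", "house_interior.jpg"]):
        print(t)
```

The key design point the abstract emphasizes is that the relation is harvested for free from natural co-occurrence, then only verbalized by the LMM/LLM stage, rather than drawn from a small set of pre-defined relation types.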
