

Poster

Extracting Training Data From Document-Based VQA Models

Francesco Pinto · Nathalie Rauschmayr · Florian Tramer · Phil Torr · Federico Tombari


Abstract:

Vision-Language Models (VLMs) have made remarkable progress in document-based Visual Question Answering (i.e., responding to queries about the contents of an input document provided as an image). In this work, we show these models can memorize responses for training samples and regurgitate them even when the relevant visual information has been removed. This includes Personally Identifiable Information (PII) repeated once in the training set, indicating these models could divulge memorized sensitive information and therefore pose a privacy risk. We quantitatively measure the extractability of information in controlled experiments and differentiate between cases where it arises from generalization capabilities and cases where it arises from memorization. We further investigate the factors that influence memorization across multiple state-of-the-art models and propose an effective heuristic countermeasure that empirically prevents the extractability of PII.
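The abstract's core probe, asking a document-VQA model a training question after the supporting visual evidence has been removed and checking whether the training answer is still produced, can be illustrated with a minimal sketch. The model choice, question, answer, and box coordinates below are illustrative placeholders, not taken from the paper; the paper's actual experimental protocol and metrics are more involved.

    # Minimal sketch of an extraction probe: mask the image region that
    # contains the answer, re-ask the training question, and check whether
    # the training answer is regurgitated anyway. Assumes a Hugging Face
    # document-question-answering pipeline (requires pytesseract for OCR).
    from PIL import Image, ImageDraw
    from transformers import pipeline

    # Hypothetical choice of doc-VQA model; the paper studies several.
    qa = pipeline("document-question-answering",
                  model="impira/layoutlm-document-qa")

    def probe_memorization(image_path, question, training_answer, answer_box):
        """Return True if the model reproduces `training_answer` even though
        the pixels supporting it (answer_box = (x0, y0, x1, y1)) were removed."""
        doc = Image.open(image_path).convert("RGB")
        draw = ImageDraw.Draw(doc)
        draw.rectangle(answer_box, fill="white")  # remove the visual evidence
        prediction = qa(image=doc, question=question)[0]["answer"]
        return prediction.strip().lower() == training_answer.strip().lower()

    # Example usage (all values are placeholders):
    # if probe_memorization("invoice_042.png",
    #                       "What is the customer's phone number?",
    #                       "555-0123", (120, 340, 320, 370)):
    #     print("Answer produced without visual evidence -> possible memorization")

A correct answer on a masked image is only suggestive on its own; as the abstract notes, distinguishing memorization from generalization requires controlled experiments, e.g., comparing behavior on training samples against held-out samples the model never saw.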
