Poster

Conformal Prediction for AI Agents

Drew Prinster · Samuel Stanton · Anqi Liu · Suchi Saria


Abstract:

As machine learning gains widespread adoption, scientists and engineers are increasingly seeking means to automate data collection with tools like black-box optimization and active learning, transforming machine learning systems from passive observers into active agents. Accurately quantifying and controlling the risk these agents incur is a major challenge, as the data they choose to collect is intentionally distribution-shifted from their training data. Conformal inference has emerged as a promising approach to risk quantification in practice, but existing variants either fail to accommodate a sequence of data-dependent shifts or do not fully exploit the fact that agent-induced shift is known and under our control. In this work we show that conformal prediction can theoretically be extended to *any* known joint distribution, not just exchangeable or quasi-exchangeable ones, although it is exceedingly impractical to compute in the most general case. We also show that the special case of a series of agent-induced covariate shifts is computationally tractable, which we validate with empirical results on synthetic black-box optimization and active learning tasks.
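For context, the tractable covariate-shift setting the abstract alludes to builds on weighted conformal prediction, in which calibration scores are reweighted by the likelihood ratio of the shifted test distribution to the training distribution (Tibshirani et al., 2019). The sketch below is a minimal single-shift illustration under stated assumptions, not the paper's multistep agent-induced algorithm: the regression model `mu`, the absolute-residual score, and the Gaussian shift in the usage example are all assumptions chosen for demonstration.

```python
import numpy as np

def weighted_conformal_quantile(cal_scores, cal_weights, test_weight, alpha=0.1):
    """Weighted split-conformal quantile under a known covariate shift.

    cal_scores:  nonconformity scores on calibration points, e.g. |y_i - mu(x_i)|.
    cal_weights: likelihood ratios w(x_i) = q(x_i) / p(x_i), where p is the
                 training covariate density and q the (known) shifted density.
    test_weight: the same likelihood ratio evaluated at the test point.
    """
    w = np.concatenate([np.asarray(cal_weights, dtype=float), [test_weight]])
    p = w / w.sum()  # normalized weights over calibration points plus the test point

    order = np.argsort(cal_scores)
    sorted_scores = np.asarray(cal_scores, dtype=float)[order]
    cum = np.cumsum(p[:-1][order])  # weighted CDF over sorted calibration scores

    # Smallest score whose cumulative weighted mass reaches 1 - alpha.
    idx = np.searchsorted(cum, 1 - alpha)
    if idx >= len(sorted_scores):
        return np.inf  # too much mass on the test point: only the trivial interval is valid
    return sorted_scores[idx]

# Usage sketch: training covariates ~ N(0, 1), test covariates shifted to N(1, 1).
rng = np.random.default_rng(0)
x_cal = rng.normal(size=500)
y_cal = x_cal + rng.normal(scale=0.5, size=500)

mu = lambda x: x                      # plug-in regression model (assumed given)
scores = np.abs(y_cal - mu(x_cal))    # absolute-residual nonconformity scores
lr = lambda x: np.exp(x - 0.5)        # density ratio N(1, 1) / N(0, 1)

x_test = 1.2
qhat = weighted_conformal_quantile(scores, lr(x_cal), lr(x_test))
interval = (mu(x_test) - qhat, mu(x_test) + qhat)
```

When the weights are all equal this reduces to ordinary split conformal prediction; the paper's contribution concerns how far this reweighting idea extends when the shift is induced sequentially by the agent itself.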
