
Poster

Multigroup Robustness

Lunjia Hu · Charlotte Peale · Judy Hanwen Shen


Abstract:

To address the shortcomings of real-world datasets, robust learning algorithms have been designed to overcome arbitrary and indiscriminate data corruption. However, practical processes of gathering data may lead to patterns of data corruption that are localized to specific partitions of the training dataset. Motivated by critical applications where the learned model is deployed to make predictions about people from a rich collection of overlapping subpopulations, we initiate the study of multigroup robust algorithms whose robustness guarantees for each subpopulation only degrade with the amount of data corruption inside that subpopulation. When the data corruption is not distributed uniformly over subpopulations, our algorithms provide more meaningful robustness guarantees than standard guarantees that are oblivious to how the data corruption and the affected subpopulations are related. Our techniques establish a new connection between multigroup fairness and robustness.
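To make the shape of the guarantee concrete, here is a minimal toy simulation (our own illustration, not the paper's algorithm or data): labels are corrupted only inside one of two overlapping groups, and a simple per-group mean estimator is evaluated. The group memberships, corruption rate `eps`, and all numeric values are hypothetical; the point is that each group's estimation error scales with the fraction of corrupted points inside that group, which is exactly the kind of per-group degradation that multigroup robustness asks for.

```python
# Toy illustration of per-group robustness degradation (assumed setup,
# not the paper's method): corruption is localized to group g1, and we
# measure how much each group's estimate degrades.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two overlapping subpopulations defined by binary attributes.
g1 = rng.random(n) < 0.5
g2 = rng.random(n) < 0.5

# Ground-truth conditional mean depends on group membership.
y_true = 0.3 + 0.4 * g1 + 0.2 * g2
y = y_true + rng.normal(0.0, 0.05, n)

# Adversary corrupts a fraction eps of labels, but only inside g1.
eps = 0.2
corrupt = g1 & (rng.random(n) < eps)
y_obs = np.where(corrupt, 1.0, y)  # arbitrary corrupted values

# Per-group empirical means on the corrupted data.
for name, g in [("g1", g1), ("g2", g2), ("not g1", ~g1)]:
    err = abs(y_obs[g].mean() - y_true[g].mean())
    frac = corrupt[g].mean()  # corruption fraction inside this group
    print(f"{name}: corruption inside group = {frac:.2f}, error = {err:.3f}")
```

Running this, "not g1" contains no corrupted points and its estimate is essentially unaffected, while g1 and the overlapping group g2 degrade in proportion to the corruption fraction inside them, about eps and eps/2 respectively in this setup.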