
Poster

Differentially Private Post-Processing for Fair Regression

Ruicheng Xian · Qiaobo Li · Gautam Kamath · Han Zhao


Abstract:

This paper describes a differentially private post-processing algorithm for learning fair regressors under the notion of statistical parity, motivated by privacy concerns over machine learning models trained on sensitive data and their potential to propagate historical biases. Our algorithm can be applied to post-process any given regressor to improve fairness by remapping its outputs. It consists of three steps: first, the output distributions are estimated privately via histogram density estimation and the Laplace mechanism; then their Wasserstein barycenter is computed; finally, the optimal transports to the barycenter are used to post-process the outputs and ensure fairness. We analyze the sample complexity of our algorithm and provide a fairness guarantee, which reveals a trade-off between the statistical bias and variance induced by the choice of the number of histogram bins, where using fewer bins always favors fairness at the expense of error.
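The three steps in the abstract can be sketched in one dimension, where the Wasserstein-2 barycenter of distributions is the average of their quantile functions and the optimal transport map is the composition of a group's CDF with the barycenter's quantile function. This is a minimal illustrative sketch assuming NumPy; the bin count, output range, privacy budget, and synthetic two-group data are hypothetical choices for demonstration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_histogram(outputs, bins, eps, lo=0.0, hi=1.0):
    """Step 1: histogram density estimate released via the Laplace mechanism.

    Each sample changes one bin count by 1, so adding Laplace(1/eps) noise
    to every count gives an eps-differentially-private histogram.
    """
    counts, edges = np.histogram(outputs, bins=bins, range=(lo, hi))
    noisy = np.clip(counts + rng.laplace(scale=1.0 / eps, size=bins), 0, None)
    p = noisy + 1e-12          # keep the CDF strictly increasing
    return p / p.sum(), edges

def quantile_fn(p, edges, q):
    """Inverse CDF of the histogram distribution, evaluated at quantiles q."""
    cdf = np.concatenate([[0.0], np.cumsum(p)])
    return np.interp(q, cdf, edges)

# Synthetic regressor outputs for two demographic groups (illustrative only):
# the raw predictions are systematically lower for group A than group B.
out_a = rng.normal(0.4, 0.1, 5000).clip(0.0, 1.0)
out_b = rng.normal(0.6, 0.1, 5000).clip(0.0, 1.0)

eps = 1.0
p_a, edges = private_histogram(out_a, bins=20, eps=eps)
p_b, _ = private_histogram(out_b, bins=20, eps=eps)

# Step 2: in 1-D, the Wasserstein-2 barycenter averages the quantile functions.
q = np.linspace(0.0, 1.0, 201)
bary_quantiles = 0.5 * quantile_fn(p_a, edges, q) + 0.5 * quantile_fn(p_b, edges, q)

def remap(y, p, edges):
    """Step 3: transport an output to the barycenter (CDF, then barycenter quantile)."""
    cdf = np.concatenate([[0.0], np.cumsum(p)])
    u = np.interp(y, edges, cdf)
    return np.interp(u, q, bary_quantiles)

fair_a = remap(out_a, p_a, edges)
fair_b = remap(out_b, p_b, edges)
```

After remapping, both groups' outputs follow (approximately) the same barycenter distribution, which is the statistical-parity goal; with fewer bins the private histograms are less noisy, matching the bias/variance trade-off the abstract describes.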
