

Poster

Differentially Private Domain Adaptation with Theoretical Guarantees

Raef Bassily · Corinna Cortes · Anqi Mao · Mehryar Mohri


Abstract: In many applications, the labeled data at the learner's disposal is subject to privacy constraints and is relatively limited. To derive a more accurate predictor for the target domain, it is often beneficial to leverage publicly available labeled data from an alternative domain somewhat close to the target domain. This is the modern problem of supervised domain adaptation from a public source to a private target domain. We present two $(\epsilon, \delta)$-differentially private algorithms for supervised adaptation, both building on a general optimization problem recently shown to benefit from favorable theoretical learning guarantees. Our first algorithm is designed for regression with linear predictors and is shown to solve a convex optimization problem. Our second algorithm is a more general solution for loss functions that may be non-convex but are Lipschitz and smooth. While our main objective is a theoretical analysis, we also report the results of several experiments, first demonstrating that the non-private versions of our algorithms outperform adaptation baselines, and then showing that, for larger values of the target sample size or $\epsilon$, the performance of our private algorithms remains close to that of the non-private formulation.
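To make the first setting concrete, here is a minimal sketch of one standard way such a guarantee can be obtained for regression with linear predictors: solve a convex (ridge) objective on a weighted mix of public source and private target data, then release the minimizer via output perturbation with the Gaussian mechanism. This is an illustrative assumption, not the paper's actual algorithm; the mixing weight `q`, the regularization `lam`, and the L2-sensitivity `sensitivity` of the minimizer to one target example are all hypothetical inputs assumed given.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    # Classic Gaussian-mechanism noise calibration (valid for epsilon <= 1):
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def private_adaptation_regression(Xs, ys, Xt, yt, lam=1.0, q=0.5,
                                  epsilon=1.0, delta=1e-5,
                                  sensitivity=1.0, rng=None):
    """Output-perturbation sketch (hypothetical, not the paper's method):
    ridge regression on a q-weighted mix of public source data (Xs, ys)
    and private target data (Xt, yt), with Gaussian noise added to the
    minimizer. Only the target data is treated as private; the source
    contribution needs no noise. `sensitivity` (assumed given) is the
    L2-sensitivity of the minimizer to changing one target example."""
    rng = np.random.default_rng(rng)
    d = Xs.shape[1]
    # Weighted normal equations: (1-q)*source + q*target + lam*I.
    A = ((1 - q) * Xs.T @ Xs / len(Xs)
         + q * Xt.T @ Xt / len(Xt)
         + lam * np.eye(d))
    b = (1 - q) * Xs.T @ ys / len(Xs) + q * Xt.T @ yt / len(Xt)
    w = np.linalg.solve(A, b)
    # Privatize the released predictor with calibrated Gaussian noise.
    sigma = gaussian_sigma(epsilon, delta, sensitivity)
    return w + rng.normal(0.0, sigma, size=d)
```

Note the expected behavior: shrinking $\epsilon$ (stronger privacy) increases the noise scale, which matches the abstract's observation that performance approaches the non-private solution as $\epsilon$ or the target sample size grows.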
