

Poster

Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning

Saber Malekmohammadi · Yaoliang Yu · YANG CAO


Abstract:

Federated Learning (FL) is a useful paradigm for learning models from data distributed among clients. High utility and rigorous data privacy guarantees are among the main goals of an FL system; the latter is typically pursued by ensuring differential privacy (DP) during federated learning (DPFL). However, clients' privacy requirements are often heterogeneous, and existing DPFL works either assume uniform privacy requirements for clients or are not applicable when the server is untrusted (our considered setting). Furthermore, clients' batch and/or dataset sizes often vary, which, as we show, results in extra variation in DP noise levels across clients' model updates. Given all these sources of heterogeneity, straightforward aggregation strategies on the server, e.g., assigning clients aggregation weights proportional to their privacy parameters (ε), which may not always be available to an untrusted server, lead to lower utility due to high noise in the aggregated model updates. We propose Robust-HDP, which achieves high utility by efficiently estimating the true noise level in clients' model updates (without sharing clients' privacy parameters with the untrusted server) and assigning aggregation weights so that the noise level after aggregation is minimized. The noise-aware aggregation of Robust-HDP improves utility, privacy, and convergence speed, while remaining safe against clients that may send a falsified privacy parameter ε to the server. Extensive experimental results on multiple benchmark datasets and our theoretical analysis confirm the effectiveness of Robust-HDP.
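The core aggregation idea described above — weighting clients so that the post-aggregation noise level is minimized — can be illustrated with a minimal sketch. This is not the paper's Robust-HDP estimator (which recovers noise levels without access to clients' privacy parameters); it simply assumes per-client noise variances are already estimated and applies inverse-variance weighting, the classical choice that minimizes the variance of a weighted average. All function names here are hypothetical.

```python
import numpy as np

def inverse_variance_weights(noise_vars):
    """Hypothetical helper: aggregation weights inversely proportional
    to each client's estimated DP-noise variance, normalized to sum to 1.
    This choice minimizes sum_i w_i^2 * sigma_i^2 subject to sum_i w_i = 1."""
    w = 1.0 / np.asarray(noise_vars, dtype=float)
    return w / w.sum()

def aggregate(updates, noise_vars):
    """Noise-aware weighted average of client model updates (sketch)."""
    w = inverse_variance_weights(noise_vars)
    updates = np.asarray(updates, dtype=float)
    return np.sum(w[:, None] * updates, axis=0)

# A client with 4x the noise variance receives a 4x smaller weight,
# so the aggregated update is dominated by the less noisy client.
w = inverse_variance_weights([1.0, 4.0])  # -> [0.8, 0.2]
```

Under these weights, the variance of the aggregated noise, sum(w_i² σ_i²), is strictly lower than under uniform weighting whenever the clients' noise levels differ, which is the utility gain that noise-aware aggregation targets.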
