

Poster

Neural Collapse meets Differential Privacy: Curious behaviors of NoisyGD with Near-Perfect Representation Learning

Chendi Wang · Yuqing Zhu · Weijie Su · Yu-Xiang Wang


Abstract:

A recent study by De et al. (2022) has reported that large-scale representation learning through pre-training on a public dataset significantly enhances differentially private (DP) learning in downstream tasks, despite the high dimensionality of the feature space. To theoretically explain this phenomenon, we consider the setting of a layer-peeled model in representation learning, which gives rise to an interesting phenomenon of learned features in deep learning and transfer learning known as Neural Collapse (NC). Within the framework of NC, we establish an error bound showing that the misclassification error is independent of dimension when the distance between the actual features and the ideal ones is smaller than a threshold. In addition, we reveal that DP fine-tuning is less robust than fine-tuning without DP, particularly in the presence of perturbations; this observation is supported by both theoretical analyses and experimental results. To enhance the robustness of NoisyGD, we suggest several strategies, such as feature normalization and dimension-reduction methods like Principal Component Analysis (PCA). Empirically, we demonstrate a significant improvement in test accuracy from applying PCA to the final-layer features.
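As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below runs NoisyGD, i.e., full-batch gradient descent with per-example gradient clipping and Gaussian noise, on pre-extracted final-layer features, optionally preceded by PCA and feature normalization. The helper names (pca_project, normalize_rows, noisy_gd) and parameter choices (clip_norm, noise_multiplier, lr, n_steps) are hypothetical and chosen only for the demo; the synthetic matrix X stands in for a pre-trained backbone's features.

```python
import numpy as np

def pca_project(X, k):
    """Project features onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def normalize_rows(X, eps=1e-8):
    """Scale each feature vector to unit L2 norm."""
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

def noisy_gd(X, y, clip_norm=1.0, noise_multiplier=1.0, lr=0.1,
             n_steps=100, rng=None):
    """Full-batch noisy GD on a linear model with logistic loss:
    per-example gradients are clipped to clip_norm, and Gaussian noise
    of scale noise_multiplier * clip_norm is added to their sum."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        margins = y * (X @ w)                      # labels y in {-1, +1}
        # Per-example logistic-loss gradients: -y * x / (1 + exp(y w.x)).
        coeffs = -y / (1.0 + np.exp(margins))
        grads = coeffs[:, None] * X                # shape (n, d)
        # Clip each per-example gradient to L2 norm clip_norm.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
        noise = noise_multiplier * clip_norm * rng.standard_normal(d)
        w -= lr * (grads.sum(axis=0) + noise) / n
    return w

# Toy usage on synthetic "features".
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 256))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(500))
X_red = normalize_rows(pca_project(X, k=32))       # PCA + normalization
w = noisy_gd(X_red, y, rng=1)
print(f"train accuracy: {np.mean(np.sign(X_red @ w) == y):.3f}")
```

The intuition matching the abstract: the Gaussian noise added for DP has magnitude growing with the ambient dimension, so projecting the features onto a low-dimensional subspace before running NoisyGD can reduce the noise's effect on the learned classifier.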
