

Poster

On the Duality Between Sharpness-Aware Minimization and Adversarial Training

Yihao Zhang · Hangzhou He · Jingyu Zhu · Huanran Chen · Yifei Wang · Zeming Wei


Abstract:

Adversarial Training (AT), which adversarially perturbs the input samples during training, has been acknowledged as one of the most effective defenses against adversarial attacks, yet it suffers from an intrinsic limitation: a decrease in clean accuracy. Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization. However, as SAM is designed for better clean accuracy, its effectiveness in enhancing adversarial robustness remains unexplored. In this work, considering the duality between SAM and AT, we investigate the adversarial robustness derived from SAM. Intriguingly, we find that using SAM alone can improve adversarial robustness. To understand this unexpected property, we first provide empirical and theoretical insights into how SAM can implicitly learn more robust features, and then conduct comprehensive experiments to show that SAM can improve adversarial robustness notably without sacrificing any clean accuracy, shedding light on the potential of SAM to serve as a substitute for AT under certain requirements. Our code will be available upon publication.
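To make the duality described above concrete, the following is a minimal sketch contrasting one AT update (ascent in input space, then descent) with one SAM update (ascent in weight space, then descent at the original weights). The toy model, the FGSM-style single-step attack, and the hyperparameters eps and rho are illustrative assumptions, not the authors' actual training setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 10)           # toy batch (assumed)
y = torch.randint(0, 2, (8,))

# --- Adversarial Training step: perturb the *inputs* ---
eps = 0.1                        # assumed perturbation budget
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x + eps * x_adv.grad.sign()).detach()   # ascend in input space
opt.zero_grad()
loss_fn(model(x_adv), y).backward()              # descend on adversarial inputs
opt.step()

# --- SAM step: perturb the *weights* ---
rho = 0.05                       # assumed neighborhood radius
opt.zero_grad()
loss_fn(model(x), y).backward()
grad_norm = torch.norm(torch.stack([p.grad.norm() for p in model.parameters()]))
eps_w = []                       # ascend in weight space toward the worst-case neighbor
with torch.no_grad():
    for p in model.parameters():
        e = rho * p.grad / (grad_norm + 1e-12)
        p.add_(e)
        eps_w.append(e)
opt.zero_grad()
loss_fn(model(x), y).backward()  # gradient evaluated at the perturbed weights
with torch.no_grad():
    for p, e in zip(model.parameters(), eps_w):
        p.sub_(e)                # restore original weights before the descent step
opt.step()
```

Both updates follow the same min-max pattern; the only difference is whether the inner maximization runs over the inputs (AT) or over the weights (SAM).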
