

Poster

Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples

Andrew C. Cullen · Shijie Liu · Paul Montague · Sarah Erfani · Benjamin Rubinstein


Abstract: In guaranteeing the absence of adversarial examples within an instance's neighbourhood, certification mechanisms play an important role in demonstrating neural network robustness. In this paper, we ask whether these certifications can compromise the very models they help to protect. Our new Certification Aware Attack exploits certifications to produce computationally efficient, norm-minimising adversarial examples $74$% more often than comparable attacks, while reducing the median perturbation norm by more than $10$%. While these attacks can be used to assess the tightness of certification bounds, they also highlight an apparent paradox: certifications can reduce security.
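The core idea can be illustrated with a minimal sketch (not the authors' algorithm): a robustness certificate guarantees that no adversarial example exists within some radius of the input, so an attacker can skip the certified ball entirely and begin a norm-minimising search at its boundary. Everything below is an assumption for illustration only: the toy linear classifier, the deliberately loose `certificate` function, and the simple outward line search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary linear classifier: sign(w @ x + b). Purely illustrative.
w = rng.normal(size=8)
b = 0.1

def predict(x):
    return np.sign(w @ x + b)

def certificate(x):
    # Hypothetical, deliberately conservative L2 certificate: 80% of the
    # true distance to the decision boundary, mimicking the looseness of
    # real certification mechanisms for neural networks.
    return 0.8 * abs(w @ x + b) / np.linalg.norm(w)

def certification_aware_attack(x, step=0.01, max_iter=1000):
    """Line search for a small-L2 adversarial example that uses the
    certificate to skip the region where none can exist."""
    label = predict(x)
    # Unit direction of steepest approach to the decision boundary.
    direction = -label * w / np.linalg.norm(w)
    # The certificate guarantees no adversarial example within this
    # radius, so start the search there instead of at zero perturbation.
    eps = certificate(x)
    for _ in range(max_iter):
        candidate = x + eps * direction
        if predict(candidate) != label:
            return candidate, eps
        eps += step
    return None, None

x0 = rng.normal(size=8)
adv, norm = certification_aware_attack(x0)
print(f"certified radius:        {certificate(x0):.4f}")
print(f"adversarial L2 distance: {norm:.4f}")
```

The gap between the two printed numbers measures how loose the certificate is; this is the sense in which such attacks can assess the tightness of certification bounds while simultaneously exploiting the certificate to save search effort.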
