

Poster

Generalization Bound and New Algorithm for Clean-Label Backdoor Attack

Lijia Yu · Shuang Liu · Yibo Miao · Xiao-Shan Gao · Lijun Zhang


Abstract:

The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods. Most work on generalization bounds assumes clean training data; only a few studies consider data poisoning attacks. To the best of our knowledge, algorithm-independent generalization bounds under backdoor poisoning attacks have not yet been established. In this paper, we fill this gap by deriving algorithm-independent generalization bounds in the clean-label backdoor attack scenario. Specifically, based on the goal of the backdoor attack, we give upper bounds for the clean and poison population errors in terms of the empirical error on the poisoned training dataset. Furthermore, building on these theoretical results, we propose a new clean-label backdoor attack that computes the poisoning trigger by combining adversarial noise with an indiscriminate poison. We demonstrate its effectiveness in a variety of settings.
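The abstract's attack description suggests the trigger is formed from two components, an adversarial perturbation and an indiscriminate poison. Below is a minimal, hypothetical PyTorch sketch of one way such a combination could be realized against a surrogate model; the function name, parameters (`eps`, `alpha`, `steps`), and the choice of error-maximizing perturbations are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch: combine an adversarial perturbation (per-sample) with an
# indiscriminate poison perturbation (shared across samples), then project the
# sum onto an L-infinity budget.  Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def craft_trigger(surrogate_model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return a perturbation delta with ||delta||_inf <= eps for target-class
    images x whose labels y are left unchanged (clean-label setting)."""
    surrogate_model.eval()

    # (1) Adversarial noise: per-sample gradient ascent on the loss so the
    #     original class features become unreliable for the surrogate.
    adv = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(surrogate_model(x + adv), y)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv + alpha * grad.sign()).clamp(-eps, eps)
        adv = adv.detach().requires_grad_(True)

    # (2) Indiscriminate poison: a single perturbation shared by the whole
    #     batch, also optimized to increase the training loss.
    indis = torch.zeros_like(x[:1], requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(surrogate_model(x + indis), y)
        grad, = torch.autograd.grad(loss, indis)
        indis = (indis + alpha * grad.sign()).clamp(-eps, eps)
        indis = indis.detach().requires_grad_(True)

    # (3) Combine both components, re-project onto the eps ball, and keep the
    #     poisoned images inside the valid pixel range [0, 1].
    delta = (adv.detach() + indis.detach()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1) - x
```

Usage would amount to adding `craft_trigger(...)` to a small fraction of target-class training images while keeping their labels intact, so the poisoned set remains clean-label.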
