Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Main Authors:
Format: Journal Article
Language: English
Published: 07-05-2023
Subjects:
Online Access: Get full text
Summary: Deep image classification models trained on vast amounts of web-scraped data
are susceptible to data poisoning - a mechanism for backdooring models. A small
number of poisoned samples seen during training can severely undermine a
model's integrity during inference. Existing work considers an effective
defense as one that either (i) restores a model's integrity through repair or
(ii) detects an attack. We argue that this approach overlooks a crucial
trade-off: Attackers can increase robustness at the expense of detectability
(over-poisoning) or decrease detectability at the cost of robustness
(under-poisoning). In practice, attacks should remain both undetectable and
robust. Detectable but robust attacks draw human attention and rigorous model
evaluation or cause the model to be re-trained or discarded. In contrast,
attacks that are undetectable but lack robustness can be repaired with minimal
impact on model accuracy. Our research points to intrinsic flaws in current
attack evaluation methods and raises the bar for all data poisoning attackers
who must delicately balance this trade-off to remain robust and undetectable.
To demonstrate the existence of more potent defenders, we propose defenses
designed to (i) detect or (ii) repair poisoned models using a limited amount of
trusted image-label pairs. Our results show that an attacker who needs to be
robust and undetectable is substantially less threatening. Our defenses
mitigate all tested attacks with a maximum accuracy decline of 2% using only 1%
of clean data on CIFAR-10 and 2.5% on ImageNet. We demonstrate the scalability
of our defenses by evaluating large vision-language models, such as CLIP.
Attackers who can manipulate the model's parameters pose an elevated risk as
they can achieve higher robustness at low detectability compared to data
poisoning attackers.
DOI: 10.48550/arxiv.2305.09671
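To make the mechanism in the summary concrete, here is a minimal, hypothetical sketch (not taken from the paper) of a BadNets-style trigger attack in PyTorch, where the poisoning rate `p` is the knob behind the over-/under-poisoning trade-off, together with a repair defense that fine-tunes on a small trusted split, roughly the 1% of clean image-label pairs the abstract mentions. All names (`apply_trigger`, `poison_dataset`, `repair_finetune`) and hyperparameters are illustrative assumptions, not the paper's exact attacks or defenses.

```python
# Hypothetical sketch only: a BadNets-style trigger attack and a fine-tuning
# repair defense. Names and hyperparameters are assumptions, not the paper's
# exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_trigger(x: torch.Tensor) -> torch.Tensor:
    """Stamp a small white patch into the bottom-right corner of each image."""
    x = x.clone()
    x[..., -4:, -4:] = 1.0
    return x


def poison_dataset(images, labels, target_class: int, p: float):
    """Poison a fraction p of samples: add the trigger and flip the label.

    Larger p (over-poisoning) tends to make the backdoor more robust but easier
    to detect; smaller p (under-poisoning) is stealthier but easier to repair.
    """
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.size(0))[: int(p * images.size(0))]
    images[idx] = apply_trigger(images[idx])
    labels[idx] = target_class
    return images, labels


def repair_finetune(model, clean_images, clean_labels, steps=20, lr=1e-3):
    """Repair defense sketch: fine-tune on a small trusted image-label set."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(clean_images), clean_labels).backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Synthetic CIFAR-10-shaped stand-in data; no downloads needed.
    images = torch.rand(512, 3, 32, 32)
    labels = torch.randint(0, 10, (512,))
    poisoned_x, poisoned_y = poison_dataset(images, labels, target_class=0, p=0.01)

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

    # Train briefly on the poisoned data (full-batch for brevity).
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(10):
        opt.zero_grad()
        F.cross_entropy(model(poisoned_x), poisoned_y).backward()
        opt.step()

    # Repair with a handful of trusted image-label pairs (~1% of the data).
    repair_finetune(model, images[:5], labels[:5])

    # The backdoor fires if triggered inputs are pulled toward the target class.
    backdoor_hits = (model(apply_trigger(images)).argmax(dim=1) == 0).float().mean()
    print(f"fraction of triggered inputs classified as target: {backdoor_hits:.2f}")
```

In this sketch, raising `p` makes the backdoor more likely to survive the fine-tuning repair but leaves more triggered samples for a detector to find, while lowering `p` does the opposite; that single knob is the robustness-versus-detectability trade-off the summary argues attackers must balance.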