Backdoor
Attacks that subtly affect a model to achieve an adversarial goal while maintaining its benign performance
BackdoorAttack
BackdoorAttack (after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, after_cancel_backward=None, after_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None)
A Callback that manipulates the training process to install a backdoor. Also allows measuring the attack's success
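To illustrate the hook mechanism exposed by the signature above, here is a minimal, hypothetical sketch of a backdoor callback in the same style. It is not the library's implementation: the trigger (a white corner patch), the target class, and the class name are illustrative assumptions; it only shows how a fastai `Callback` can alter each training batch via `before_batch`.

```python
from fastai.callback.core import Callback
import torch

class TriggerPatchAttack(Callback):
    "Hypothetical sketch: stamp a trigger patch on part of each training batch and relabel it."
    def __init__(self, target_class=0, frac=0.1, patch_size=4):
        self.target_class, self.frac, self.patch_size = target_class, frac, patch_size

    def before_batch(self):
        if not self.training: return                 # only poison during training
        xb, yb = self.xb[0].clone(), self.yb[0].clone()
        n = max(1, int(self.frac * xb.shape[0]))     # number of samples to poison in this batch
        idx = torch.randperm(xb.shape[0])[:n]
        # assumes NCHW image tensors: stamp a bright patch in the bottom-right corner
        xb[idx, :, -self.patch_size:, -self.patch_size:] = xb.max()
        yb[idx] = self.target_class                  # associate the trigger with the target class
        self.learn.xb, self.learn.yb = (xb,), (yb,)  # hand the modified batch back to the Learner
```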
BackdoorAttack._asr_dl
BackdoorAttack._asr_dl ()
Returns a DataLoader used to measure the ASR (attack success rate)
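A minimal sketch of how this DataLoader might be consumed to compute the ASR, assuming `attack` is a fitted `BackdoorAttack`, `learn` is its fastai `Learner`, and the attacker's target class is 0 (the helper name and target class are assumptions for illustration):

```python
def attack_success_rate(learn, attack, target_class=0):
    "Fraction of triggered samples that the model classifies as the attacker's target class."
    asr_dl = attack._asr_dl()              # DataLoader of triggered inputs
    preds, _ = learn.get_preds(dl=asr_dl)  # run the model on the triggered data
    return (preds.argmax(dim=1) == target_class).float().mean().item()
```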
DataPoisoningAttack
DataPoisoningAttack (test_only=False, poison_fraction=0.1)
A BackdoorAttack that installs the backdoor by altering a small portion of the training dataset
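A minimal usage sketch, assuming `LabelFlipPoisoning` is a concrete `DataPoisoningAttack` subclass (a hypothetical example is sketched after the `_poison` table below) and `learn` is an existing fastai `Learner`:

```python
attack = LabelFlipPoisoning(poison_fraction=0.05)  # poison ~5% of the training set
learn.fit(3, cbs=attack)                           # the callback poisons the data during training
```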
DataPoisoningAttack._poison
DataPoisoningAttack._poison (data_to_poison:fastai.data.core.Datasets)
| | Type | Details |
|---|---|---|
| data_to_poison | Datasets | The portion of the clean training data that will be replaced by poison. Can be used to construct the poison data |
| Returns | Datasets | A dataset of poisoned data |
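A hypothetical concrete subclass, sketched only to illustrate the `_poison` contract: it receives the slice of clean training data selected for poisoning and returns a `Datasets` of poisoned samples. The trigger (a white corner patch), the target class, the class name, and the assumption that items are tensor images are all illustrative choices, not part of the library.

```python
from operator import itemgetter
from fastai.data.core import Datasets

class LabelFlipPoisoning(DataPoisoningAttack):
    "Hypothetical sketch: stamp a corner patch on the selected samples and relabel them."
    def __init__(self, target_class=0, patch_size=4, **kwargs):
        super().__init__(**kwargs)
        self.target_class, self.patch_size = target_class, patch_size

    def _poison(self, data_to_poison: Datasets) -> Datasets:
        def add_trigger(img):
            # assumes tensor images (e.g. after a ToTensor step); adapt for PIL items
            img = img.clone()
            img[..., -self.patch_size:, -self.patch_size:] = img.max()  # stamp the trigger
            return img
        poisoned = [(add_trigger(x), self.target_class) for x, y in data_to_poison]
        # Wrap the poisoned (input, label) pairs back into a Datasets; the exact
        # construction should mirror the pipeline used by the clean training data.
        return Datasets(poisoned, tfms=[[itemgetter(0)], [itemgetter(1)]])
```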