===Projected Gradient Descent===
Basic idea: Do gradient descent. If you go too far from your example, project it back into your perturbation range.<br>
This was proposed by Madry et al. in their 2017 paper [https://arxiv.org/abs/1706.06083 Towards Deep Learning Models Resistant to Adversarial Attacks].<br>
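A minimal sketch of the idea, assuming a toy logistic-regression model with an analytic input gradient (a hypothetical stand-in; attacks on neural networks compute this gradient via autodiff). Each iteration takes a signed gradient step on the loss with respect to the input, then clips the result back into the L-infinity ball of radius eps around the original example:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """L-infinity PGD against a toy model sigmoid(w.x + b).

    x: input example, y: true label in {0, 1}, w/b: model parameters.
    eps: perturbation radius, alpha: step size, steps: iterations.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # model's predicted probability
        grad = (p - y) * w                          # d(cross-entropy loss)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # step to increase the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into the eps-ball
    return x_adv
```

The projection step is what keeps the adversarial example within the allowed perturbation range, no matter how many gradient steps are taken.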
==Defenses==