===Fast Gradient Sign Method===
The fast gradient sign method (FGSM) perturbs the input by a step of size ε in the direction of the sign of the gradient of the loss with respect to the input: x<sub>adv</sub> = x + ε·sign(∇<sub>x</sub>L(x, y)).<br>
It was proposed by Goodfellow et al. in their 2014 paper [https://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples].<br>
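
A minimal sketch of the attack in PyTorch, assuming a classifier <code>model</code>, cross-entropy loss, and inputs scaled to [0, 1]; the function name and signature are illustrative, not from the paper:
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Hypothetical helper illustrating the one-step update
    # x_adv = x + eps * sign(grad_x loss(x, y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
</syntaxhighlight>
For images, <code>eps</code> is usually chosen small (e.g. 8/255) so the perturbation stays visually imperceptible.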
===Projected Gradient Descent===
Basic idea: take repeated small gradient steps that increase the loss (essentially iterated FGSM); whenever an iterate leaves the allowed perturbation range around the original example (e.g. an ε-ball), project it back in.<br>
This was proposed by Madry et al. in their 2017 paper [https://arxiv.org/abs/1706.06083 Towards Deep Learning Models Resistant to Adversarial Attacks].<br>
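
A sketch of the L<sub>∞</sub> variant in the same assumed PyTorch setup as above; the step size <code>alpha</code>, radius <code>eps</code>, and iteration count are illustrative parameters, and the random start follows the description in Madry et al.:
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha, steps):
    # Hypothetical helper, not from the paper's code release.
    # Random start inside the eps-ball, as described by Madry et al.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            # FGSM-style step, then project back into the eps-ball around x
            # and into the valid pixel range.
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
</syntaxhighlight>
The projection step (the <code>clamp</code> onto the ε-ball) is what distinguishes this from simply running FGSM several times.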


==Defenses==