Adversarial Examples
An adversarial example tries to trick a neural network by applying a small worst-case perturbation to a real example. Adversarial examples were introduced by Szegedy et al. (2013), with Ian Goodfellow among the authors.
==Attacks==
===Fast Gradient Sign Method===
The fast gradient sign method (FGSM) perturbs the input by a step of size <math>\epsilon</math> along the sign of the gradient of the loss with respect to the input: <math>x' = x + \epsilon \cdot \operatorname{sign}(\nabla_x J(\theta, x, y))</math>.
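A minimal PyTorch sketch of a single FGSM step (a hypothetical helper, assuming inputs are scaled to <code>[0, 1]</code> and <code>loss_fn</code> is something like cross-entropy):

<syntaxhighlight lang="python">
import torch

def fgsm(model, loss_fn, x, y, eps):
    # Differentiate the loss with respect to the input, not the weights.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # One step of size eps in the direction of the gradient's sign,
    # clamped back to the valid input range (assumed [0, 1]).
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()
</syntaxhighlight>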
===Projected Gradient Descent===
Basic idea: do iterated gradient ascent on the loss, starting from the example. Whenever an iterate leaves the allowed perturbation range, project it back into that range (for the <math>\ell_\infty</math> case, clip the total perturbation to <math>[-\epsilon, \epsilon]</math>).
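A sketch of the <math>\ell_\infty</math> variant in PyTorch (hypothetical names; <code>alpha</code> is the per-step size, <code>eps</code> the radius of the perturbation ball):

<syntaxhighlight lang="python">
import torch

def pgd(model, loss_fn, x, y, eps, alpha, steps):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # FGSM-style step...
            x_adv = x_adv + alpha * grad.sign()
            # ...then project back into the eps-ball around the original x
            # and into the valid input range (assumed [0, 1]).
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
</syntaxhighlight>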
==Defenses==
===Interval Bound Propagation===
Interval Bound Propagation (IBP) propagates elementwise lower and upper bounds on the activations through the network, giving a bound on the worst-case output over the whole perturbation ball; training against this bound yields verifiably robust models.

[https://arxiv.org/abs/1810.12715 Gowal et al. (2018), "On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models"]
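A sketch of the propagation rule for a stack of linear layers and monotone activations (the core idea only, with hypothetical helper names):

<syntaxhighlight lang="python">
import torch

def linear_interval(layer, lo, hi):
    # Interval rule for an affine layer: positive weights carry lower
    # bounds to lower bounds; negative weights swap the roles.
    w_pos = layer.weight.clamp(min=0.0)
    w_neg = layer.weight.clamp(max=0.0)
    new_lo = lo @ w_pos.t() + hi @ w_neg.t() + layer.bias
    new_hi = hi @ w_pos.t() + lo @ w_neg.t() + layer.bias
    return new_lo, new_hi

def ibp_bounds(layers, x, eps):
    # Bound the network's outputs over the l-infinity ball of radius eps around x.
    lo, hi = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, torch.nn.Linear):
            lo, hi = linear_interval(layer, lo, hi)
        else:
            # Monotone activations (e.g. ReLU) map bound endpoints to endpoints.
            lo, hi = layer(lo), layer(hi)
    return lo, hi
</syntaxhighlight>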