Adversarial Examples
An adversarial example tries to trick a neural network by applying a small worst-case perturbation to a real example. Adversarial examples were introduced by Szegedy et al. (2013), with Ian Goodfellow among the authors.
==Attacks==
===Fast Gradient Sign Method===
The fast gradient sign method (FGSM) uses the sign of the gradient of the loss with respect to the input, scaled by a small constant <math>\epsilon</math>, as the perturbation: <math>x_{adv} = x + \epsilon \cdot \operatorname{sign}(\nabla_x J(\theta, x, y))</math>.
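A minimal PyTorch sketch of a single FGSM step, assuming a classifier <code>model</code> trained with cross-entropy loss; the function and argument names are illustrative, not from the original page.
<syntaxhighlight lang="python">
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One FGSM step: move x by epsilon in the direction of the sign
    of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Worst-case perturbation: each coordinate moves by +/- epsilon
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image in [0, 1]
    return x_adv.clamp(0.0, 1.0).detach()
</syntaxhighlight>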
===Projected Gradient Descent===
Basic idea: take repeated FGSM-style gradient steps on the loss. If a step takes you too far from the original example, project back into the allowed perturbation range (e.g., an <math>\ell_\infty</math> ball of radius <math>\epsilon</math>).
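A minimal sketch of this loop in PyTorch, in the same illustrative style as the FGSM example; <code>alpha</code> is the per-step size and <code>steps</code> the number of iterations, both assumptions for illustration.
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, steps):
    """Iterated gradient steps on the loss, each followed by projection
    back into the l-infinity ball of radius epsilon around x."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            # Step of size alpha in the sign direction of the gradient
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Projection: clip the total perturbation to [-epsilon, epsilon]
            x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)  # stay a valid image
    return x_adv.detach()
</syntaxhighlight>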
==Defenses==
Most defenses focus on generating adversarial examples at training time and training on those adversarial examples (adversarial training), as sketched below.
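A minimal sketch of one adversarial-training step, reusing the hypothetical <code>pgd_attack</code> from above; the optimizer and model are assumptions for illustration.
<syntaxhighlight lang="python">
def adversarial_training_step(model, optimizer, x, y, epsilon, alpha, steps):
    """One training step: craft adversarial examples against the current
    model, then update the model on those examples."""
    x_adv = pgd_attack(model, x, y, epsilon, alpha, steps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # train on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()
</syntaxhighlight>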
Below are some alternatives to this approach.
===Interval Bound Propagation===
Interval Bound Propagation (IBP) propagates an interval (a lower and upper bound) for each activation through the network, layer by layer. If the worst-case output bounds over the whole perturbation range still yield the correct class, the network is provably robust on that example.
[https://arxiv.org/abs/1810.12715 A paper]
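A minimal sketch of the standard interval-arithmetic rule for one affine layer, to show how the bounds propagate; this is not code from the linked paper, and the names are illustrative.
<syntaxhighlight lang="python">
import torch

def interval_bound_linear(W, b, lower, upper):
    """Propagate elementwise input bounds [lower, upper] through the
    affine layer x -> W @ x + b using interval arithmetic:
    center mu = (l + u) / 2, radius r = (u - l) / 2,
    output interval = (W @ mu + b) +/- |W| @ r."""
    mu = (lower + upper) / 2
    r = (upper - lower) / 2
    center = W @ mu + b
    radius = W.abs() @ r
    return center - radius, center + radius

# Starting bounds for an input x under an l-infinity perturbation:
# lower = (x - epsilon).clamp(0, 1), upper = (x + epsilon).clamp(0, 1).
# Monotone activations such as ReLU are applied to both bounds directly.
</syntaxhighlight>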