Adversarial Examples

From David's Wiki
Revision as of 20:14, 15 November 2019

An adversarial example tries to trick a neural network by applying a small worst-case perturbation to a real example. Adversarial examples were introduced by Szegedy et al. in work co-authored by Ian Goodfellow, who later proposed the fast gradient sign method.

==Attacks==

===Fast Gradient Sign Method===

The fast gradient sign method (FGSM) perturbs the input by ε times the sign of the loss gradient with respect to the input: x' = x + ε · sign(∇ₓ L(x, y)).
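As an illustration, here is a minimal NumPy sketch of one FGSM step on a toy logistic-regression model; the weights, input, and ε below are hypothetical:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: move each coordinate by eps in the direction of the sign
    of the loss gradient with respect to the input."""
    return x + eps * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: logistic regression with loss L = -log(sigmoid(y * w.x)),
# whose gradient w.r.t. the input is dL/dx = -y * sigmoid(-y * w.x) * w.
w = np.array([1.0, -2.0, 0.5])   # hypothetical fixed model weights
x = np.array([0.2, 0.1, -0.3])   # clean example
y = 1.0                          # true label in {-1, +1}
grad = -y * sigmoid(-y * (w @ x)) * w
x_adv = fgsm_perturb(x, grad, eps=0.1)
# Every coordinate of x moves by exactly eps in the loss-increasing direction.
```

Note that FGSM is a single step, which is what makes it fast; the iterated version is projected gradient descent below.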

===Projected Gradient Descent===

Basic idea: repeatedly take gradient steps that increase the loss; after each step, if the iterate has left the allowed perturbation set (e.g., an ℓ∞ ball of radius ε around the original example), project it back in.
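A minimal NumPy sketch of this loop for an ℓ∞ ball; the toy gradient function and parameters are hypothetical:

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps, step, n_steps):
    """Gradient ascent on the loss, projecting each iterate back into
    the l-infinity ball of radius eps around the original example x0."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + step * np.sign(grad_fn(x))    # ascent step (signed gradient)
        x = np.clip(x, x0 - eps, x0 + eps)    # projection onto the ball
    return x

# Toy loss gradient: logistic regression with hypothetical fixed weights,
# true label +1, so dL/dx = -sigmoid(-w.x) * w.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x: -(1.0 / (1.0 + np.exp(w @ x))) * w
x0 = np.array([0.2, 0.1, -0.3])
x_adv = pgd_linf(x0, grad_fn, eps=0.1, step=0.03, n_steps=10)
```

For the ℓ∞ ball the projection is just a coordinatewise clip, which is why `np.clip` suffices here; other perturbation sets need their own projection.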

==Defenses==

Most defenses generate adversarial examples at training time and train on them (adversarial training).
Below are some alternatives to this approach.
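As a sketch of the adversarial-training baseline, here is a toy NumPy implementation for logistic regression; the FGSM inner attack, learning rate, label convention ({-1, +1}), and all parameters are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Toy adversarial training for logistic regression: at each epoch,
    perturb the inputs with an FGSM step, then update the weights on
    the perturbed batch instead of the clean one."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=X.shape[1])
    for _ in range(epochs):
        # Input gradient of the logistic loss: dL/dx = -y * sigmoid(-y * w.x) * w
        s = sigmoid(-y * (X @ w))
        X_adv = X + eps * np.sign(-(y * s)[:, None] * w[None, :])
        # Weight gradient on the adversarial batch:
        # dL/dw = -y * sigmoid(-y * w.x_adv) * x_adv
        s_adv = sigmoid(-y * (X_adv @ w))
        grad_w = -((y * s_adv)[:, None] * X_adv).mean(axis=0)
        w -= lr * grad_w
    return w
```

The only change from standard training is the `X_adv` line: the gradient update is computed on attacked inputs rather than the originals.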

===Interval Bound Propagation===

Interval Bound Propagation (IBP) propagates interval (box) bounds on the input through each layer of the network, yielding a certified bound on the worst-case output over the perturbation set.<br>
[https://arxiv.org/abs/1810.12715 A paper]
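A minimal NumPy sketch of interval propagation through one affine layer followed by a ReLU; the layer weights and input box below are hypothetical:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate the elementwise box [lo, hi] through x -> W @ x + b."""
    mid = (lo + hi) / 2.0
    rad = (hi - lo) / 2.0
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad   # worst-case spread of the box through W
    return new_mid - new_rad, new_mid + new_rad

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical layer and input box: x in [x0 - eps, x0 + eps]
W = np.array([[1.0, -1.0], [2.0, 0.0]])
b = np.array([0.0, 1.0])
x0 = np.array([0.5, -0.5])
eps = 0.1
lo, hi = affine_bounds(x0 - eps, x0 + eps, W, b)
lo, hi = relu_bounds(lo, hi)
```

Chaining these two functions layer by layer gives a sound (if loose) outer bound on the network's output, which IBP then uses inside a training loss.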

==NLP==
* [https://arxiv.org/abs/1901.06796 Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey]