Adversarial Examples

An adversarial example tries to trick a neural network by applying a small worst-case perturbation to a real example.<br>
These were introduced by Ian Goodfellow and his collaborators.<br>
The first two papers introducing adversarial examples are:
* [https://arxiv.org/abs/1412.6572 Explaining and Harnessing Adversarial Examples] by Ian Goodfellow et al. in 2014
* [https://arxiv.org/abs/1312.6199 Intriguing properties of neural networks] by Szegedy et al. in 2014


==Attacks==
===L-BFGS===
Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) is a quasi-Newton optimization method.<br>
Szegedy et al. use box-constrained L-BFGS in their paper to find a small perturbation that changes the network's prediction to a chosen target class while keeping the perturbed image in the valid input range.<br>
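A rough sketch of this style of attack, using SciPy's L-BFGS-B optimizer on a flattened single image; the PyTorch classifier <code>model</code>, the target label, and the fixed trade-off constant <code>c</code> are illustrative assumptions (the original paper instead line-searches over <code>c</code>):
<syntaxhighlight lang="python">
import numpy as np
import torch
import torch.nn.functional as F
from scipy.optimize import minimize

def lbfgs_attack(model, x, target, c=1.0):
    """Minimize c * ||r||^2 + loss(x + r, target) while keeping x + r in [0, 1]."""
    x = x.detach()
    x0 = x.numpy().ravel().astype(np.float64)

    def objective(z):
        z_t = torch.tensor(z, dtype=torch.float32, requires_grad=True)
        img = z_t.view_as(x)
        # Small perturbation norm plus a loss pushing the prediction toward `target`.
        loss = c * torch.sum((img - x) ** 2) + F.cross_entropy(
            model(img.unsqueeze(0)), torch.tensor([target]))
        loss.backward()
        return loss.item(), z_t.grad.numpy().astype(np.float64)

    # Box constraints keep every pixel of the adversarial image in [0, 1].
    res = minimize(objective, x0, jac=True, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * x0.size)
    return torch.tensor(res.x, dtype=torch.float32).view_as(x)
</syntaxhighlight>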
===Fast Gradient Sign Method===
The fast gradient sign method (FGSM) uses the sign of the gradient of the loss with respect to the input, scaled by the perturbation budget, as the perturbation: <math>x' = x + \epsilon \cdot \operatorname{sign}(\nabla_x L(\theta, x, y))</math>.<br>
This was proposed by Goodfellow et al. in ''Explaining and Harnessing Adversarial Examples''.<br>
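A minimal FGSM sketch in PyTorch, assuming a classifier <code>model</code>, inputs scaled to [0, 1], and a perturbation budget <code>eps</code> (names are illustrative, not from the paper):
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Return x + eps * sign(grad_x loss), clipped back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step per coordinate in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
</syntaxhighlight>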
===Projected Gradient Descent===
Basic idea: take repeated gradient steps that increase the loss. If the perturbed input moves too far from the original example (outside the allowed perturbation region, e.g. an <math>\ell_\infty</math> ball of radius <math>\epsilon</math>), project it back into that region.<br>
This was proposed by Madry et al. in ''Towards Deep Learning Models Resistant to Adversarial Attacks''.<br>
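A minimal <math>\ell_\infty</math> PGD sketch under the same assumptions (PyTorch classifier <code>model</code>, inputs in [0, 1]); the step size, budget, and number of steps are illustrative defaults:
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterated gradient-sign steps, projected back into the eps-ball around x."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascent step on the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay in the valid pixel range
    return x_adv.detach()
</syntaxhighlight>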


==Defenses==

Most defenses focus on generating adversarial examples at training time and training the network on them (adversarial training); a minimal sketch of this baseline follows.<br>
The sections below describe some alternatives to this approach.<br>
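For reference, a minimal sketch of this adversarial-training baseline, here using a one-step FGSM perturbation inside the training loop; <code>model</code>, <code>loader</code>, <code>optimizer</code>, and <code>eps</code> are assumed:
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8/255):
    """One epoch of training on FGSM-perturbed versions of each batch."""
    model.train()
    for x, y in loader:
        # Craft adversarial examples for the current batch.
        x_req = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
        x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0)

        # Standard supervised update, but on the adversarial batch.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
</syntaxhighlight>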

===Interval Bound Propagation===
Interval Bound Propagation (IBP) propagates an interval (elementwise lower and upper bound) for each activation through the layers of the network, giving certified bounds on the outputs for any input in the allowed perturbation region; training against these worst-case bounds yields verifiably robust models.<br>
This was proposed by Gowal et al.<br>
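A small NumPy sketch of how interval bounds can be propagated through one affine layer followed by a ReLU; the weights and input interval below are made up for illustration:
<syntaxhighlight lang="python">
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through y = W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius          # worst case over the whole input box
    return new_center - new_radius, new_center + new_radius

def interval_relu(lower, upper):
    """ReLU is monotone, so applying it to each bound gives valid output bounds."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Certified output bounds for any input within an eps-ball around x.
x = np.array([0.2, -0.1, 0.5])
eps = 0.1
W = np.array([[1.0, -2.0, 0.5],
              [0.3, 0.8, -1.0]])
b = np.array([0.0, 0.1])
lo, hi = interval_relu(*interval_affine(x - eps, x + eps, W, b))
print(lo, hi)
</syntaxhighlight>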

==NLP==