Adversarial Examples

An adversarial example tries to trick a neural network by applying a small worst-case perturbation to a real example.
Adversarial examples were introduced by Ian Goodfellow and his collaborators.
==Attacks==
===Fast Gradient Sign Method===
The fast gradient sign method (FGSM) uses the sign of the loss gradient with respect to the input, scaled by a small step size, as the perturbation.
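A minimal PyTorch sketch of a single FGSM step. The classifier <code>model</code>, inputs <code>x</code> in [0, 1], labels <code>y</code>, and budget <code>epsilon</code> are illustrative assumptions, not part of this article.
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: move x by epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Worst-case L-infinity perturbation of size epsilon
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial example in the valid input range (assumed [0, 1])
    return x_adv.clamp(0, 1).detach()
</syntaxhighlight>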
===Projected Gradient Descent===
Basic idea: do gradient descent on the example itself. After each step, if the perturbed example has moved too far from the original, project it back into the allowed perturbation range.
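A sketch of an L-infinity PGD attack in PyTorch, under the same illustrative assumptions as the FGSM example above; the step size <code>alpha</code> and iteration count <code>steps</code> are also assumed names.
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, steps):
    """Iterative attack: gradient steps on the loss, each followed by a projection
    back into the epsilon L-infinity ball around the original example."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Step in the direction that increases the loss
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the allowed perturbation range around the original
            x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
            # Keep values in the valid input range (assumed [0, 1])
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
</syntaxhighlight>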
