
Just take the expectation over the randomization.
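To make this concrete, here is a minimal PyTorch sketch (our own illustration, not the course's code; it assumes the randomization is additive Gaussian noise, as in randomized smoothing, and the names <code>expected_output</code>, <code>sigma</code>, and <code>n_samples</code> are ours): the expectation over the randomization is approximated by averaging the model's outputs over sampled noise draws.

<syntaxhighlight lang="python">
import torch

def expected_output(model, x, sigma=0.25, n_samples=100):
    """Monte Carlo estimate of E_delta[ model(x + delta) ] with
    delta ~ N(0, sigma^2 I): average outputs over sampled draws.
    Gradients flow through the average, so the estimate can also
    be attacked or trained through."""
    outs = [model(x + sigma * torch.randn_like(x)) for _ in range(n_samples)]
    return torch.stack(outs).mean(dim=0)
</syntaxhighlight>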


==Are adversarial examples inevitable?==
====Notations====
<math>S^{d-1} = \{x \in \mathbb{R}^d \mid \Vert x \Vert = 1\}</math> is the unit sphere in <math>\mathbb{R}^d</math>.
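The unit sphere matters here because inevitability arguments rest on concentration of measure: in high dimensions, most of <math>S^{d-1}</math> lies close to any equator. A quick numerical illustration (our own, not from the source) samples uniformly from the sphere by normalizing Gaussian vectors and checks that a fixed coordinate shrinks like <math>1/\sqrt{d}</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, d):
    """Uniform samples on S^{d-1}: normalize standard Gaussian
    vectors (valid because the Gaussian is rotationally invariant)."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# A fixed coordinate of a uniform point on S^{d-1} is O(1/sqrt(d)),
# so in high dimensions most of the sphere lies near any equator.
for d in (10, 100, 10_000):
    pts = sample_sphere(1_000, d)
    print(d, float(np.abs(pts[:, 0]).mean()))
</syntaxhighlight>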
Whether adversarial examples are inevitable depends on the data distribution, the threat model, and the hypothesis class.


==Provable Defenses==
There are 3 types of Lp defenses:
* Curvature-based defenses
From this, we get Perceptual Projected Gradient Descent (PPGD) and Lagrangian Perceptual Attacks (LPA).
We also get Perceptual Adversarial Training (PAT).
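As a rough sketch of the Lagrangian perceptual attack idea (our own, not the authors' implementation; <code>lpips_distance</code> stands in for a perceptual metric such as LPIPS, and the bound, penalty weight, step count, and step size are arbitrary choices), the attack ascends the classification loss while penalizing perceptual distance from the clean input:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def perceptual_attack(model, lpips_distance, x, y,
                      bound=0.5, lam=10.0, steps=40, lr=0.01):
    """Sketch of a Lagrangian perceptual attack: ascend the
    classification loss while penalizing perceptual distance
    to the clean input beyond the given bound."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        ce = F.cross_entropy(model(x_adv), y)
        dist = lpips_distance(x_adv, x)
        # Minimize the negative loss plus a hinge penalty on the
        # amount by which the perceptual distance exceeds the bound.
        obj = -ce + lam * torch.clamp(dist - bound, min=0).mean()
        opt.zero_grad()
        obj.backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)  # keep a valid image
    return x_adv.detach()
</syntaxhighlight>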
==Poisoning Attacks and Defenses==
Data poisoning is another facet of adversarial robustness.
So far, we have trained on the training data using SGD and considered adversarial attacks only at inference time.
However, deep learning models require so much training data that manually verifying or trusting every training sample is impractical.
In this case, an adversary can perform ''data poisoning'' by perturbing some of the training samples.
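As a toy illustration of the setting (our own example, not from the source), the simplest poisoning perturbation is label flipping; flipping labels in only one class biases the learned decision boundary toward the poisoned class:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs as a toy binary task.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.repeat([0, 1], 200)

# Poison 20% of class 0 by flipping its labels to class 1.
y_poisoned = y.copy()
y_poisoned[rng.choice(200, size=40, replace=False)] = 1

X_test = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_test = np.repeat([0, 1], 500)

# Compare test accuracy of models trained on clean vs. poisoned labels.
for name, labels in [("clean", y), ("poisoned", y_poisoned)]:
    acc = LogisticRegression().fit(X, labels).score(X_test, y_test)
    print(name, acc)
</syntaxhighlight>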
;Question: What is the goal?


==Misc==