Deep Learning

For <math>\ell_p</math> with <math>p \geq 2</math>, the volume of the <math>\epsilon</math>-expansion of <math>A</math> satisfies <math>vol(A(\epsilon; d_p)) \geq 1 - \frac{\exp(-2 \pi \epsilon^2)}{2 \pi \epsilon}</math>.
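
To get a feel for the bound, here is a minimal NumPy sketch (ours, not from the original notes) that evaluates the right-hand side for a few constant values of <math>\epsilon</math>; bounds of this form are typically stated under the assumption <math>vol(A) \geq 1/2</math>:

<pre>
import numpy as np

# Right-hand side of the expansion bound for p >= 2, assuming (as such
# isoperimetric bounds typically do) that vol(A) >= 1/2.
def expansion_lower_bound(eps):
    return 1.0 - np.exp(-2.0 * np.pi * eps**2) / (2.0 * np.pi * eps)

# A constant eps already pushes the lower bound very close to 1.
for eps in [0.25, 0.5, 1.0, 1.5]:
    print(f"eps = {eps:.2f} -> vol(A(eps; d_p)) >= {expansion_lower_bound(eps):.6f}")
</pre>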


For p = 2, the diameter of the hypercube is <math>O(\sqrt{d})</math>, so we should use <math>\epsilon \approx O(\sqrt{d})</math>.
For <math>p = \infty</math>, the diameter of the cube is 1, so we should pick a constant <math>\epsilon</math>.
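
Both diameter claims can be checked directly: the <math>\ell_p</math> diameter of <math>[0,1]^d</math> is the distance between opposite corners, <math>d^{1/p}</math> for finite <math>p</math>. A small sketch (the function name is ours):

<pre>
import numpy as np

# l_p diameter of the unit hypercube [0,1]^d: the distance between the
# opposite corners (0,...,0) and (1,...,1), i.e. d**(1/p) for finite p.
def cube_diameter(d, p):
    return np.linalg.norm(np.ones(d), ord=p)

for d in [10, 100, 1000]:
    print(f"d = {d:4d}: l2 diameter = {cube_diameter(d, 2):6.2f}, "
          f"l_inf diameter = {cube_diameter(d, np.inf):.1f}")
</pre>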


This shows that if you pick a random sample, with high probability it is either misclassified or has an adversarial example within <math>\epsilon</math> of it.
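
As a concrete illustration, the following Monte Carlo sketch checks the expansion bound for one particular set of measure 1/2; the choice <math>A = \{x : x_1 \leq 1/2\}</math> and all parameter values are illustrative assumptions of ours, not from the original notes:

<pre>
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 100, 100_000, 0.3

# Illustrative set A = {x : x_1 <= 1/2}, which has vol(A) = 1/2. The l_p
# distance from a point x to A is max(x_1 - 1/2, 0) for every p, so x lies
# within eps of A exactly when x_1 <= 1/2 + eps.
x = rng.uniform(0.0, 1.0, size=(n, d))
frac_near_A = np.mean(x[:, 0] <= 0.5 + eps)

bound = 1.0 - np.exp(-2.0 * np.pi * eps**2) / (2.0 * np.pi * eps)
print(f"empirical vol(A(eps)) ~ {frac_near_A:.4f}, lower bound = {bound:.4f}")
</pre>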