Visual Learning and Recognition

* FC/ReLU
* FC/Normalization/Loss
===VGGNet===
ILSVRC 2014 2nd place
VGGNet is a family of progressively deeper networks, each trained by initializing from the previous, shallower one.
Large-receptive-field convolutions are replaced with stacks of <math>3\times 3</math> conv + ReLU layers: three stacked <math>3\times 3</math> convolutions cover the same effective <math>7\times 7</math> receptive field, but with more nonlinearities and fewer parameters.
With a <math>C</math>-channel input and <math>C</math>-channel output, a single <math>7\times 7</math> conv layer needs <math>49 \times C^2</math> weights, while three <math>3\times 3</math> conv layers need only <math>3 \times 9 \times C^2 = 27 \times C^2</math> weights.
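A minimal sketch of this parameter-count comparison, assuming PyTorch is available (the channel count <math>C=64</math> is an arbitrary example, not a value from the lecture):

<syntaxhighlight lang="python">
import torch.nn as nn

C = 64  # example channel count; any value works

# Single 7x7 conv, C -> C channels (padding keeps spatial size)
single = nn.Conv2d(C, C, kernel_size=7, padding=3, bias=False)

# Three stacked 3x3 conv + ReLU layers: same effective 7x7 receptive field
stacked = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False), nn.ReLU(),
)

n_single = sum(p.numel() for p in single.parameters())   # 49 * C^2
n_stacked = sum(p.numel() for p in stacked.parameters()) # 27 * C^2
print(n_single, n_stacked)  # 200704 vs 110592 for C = 64
</syntaxhighlight>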


==Will be on the exam==