StyleGAN
It consists of 9 convolution blocks, one for each resolution from <math>4^2</math> to <math>1024^2</math>.<br>
Each block consists of upsample, 3x3 convolution, AdaIN, 3x3 convolution, AdaIN.
After each convolution layer, Gaussian noise with a learned per-channel scale (block B in the figure) is added to the feature maps.
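The block structure above can be sketched in NumPy. This is an illustrative toy, not the actual StyleGAN implementation: the function names, shapes, and the naive convolution loop are all assumptions for clarity, and the real network uses learned weights, bilinear upsampling, and per-layer noise buffers.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling: (C, H, W) -> (C, 2H, 2W).
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv3x3(x, w):
    # Naive 3x3 "same" convolution; w has shape (C_out, C_in, 3, 3).
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            # Dot each 3x3 patch against every output filter.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + 3, j:j + 3],
                                        axes=([1, 2, 3], [0, 1, 2]))
    return out

def adain(x, y_s, y_b, eps=1e-8):
    # Normalize each channel, then apply the style's scale y_s and bias y_b.
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return y_s[:, None, None] * (x - mu) / (sigma + eps) + y_b[:, None, None]

def synthesis_block(x, w1, w2, noise_scale1, noise_scale2, style1, style2, rng):
    # One block: upsample -> conv -> noise -> AdaIN -> conv -> noise -> AdaIN.
    x = upsample2x(x)
    x = conv3x3(x, w1)
    x = x + noise_scale1[:, None, None] * rng.standard_normal(x.shape[1:])
    x = adain(x, *style1)
    x = conv3x3(x, w2)
    x = x + noise_scale2[:, None, None] * rng.standard_normal(x.shape[1:])
    x = adain(x, *style2)
    return x
```

Each noise map is a single 2D Gaussian image broadcast across channels and multiplied by a learned per-channel factor; here the factor is just passed in as `noise_scale1`/`noise_scale2`.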


====Adaptive Instance Normalization====
Each AdaIN block takes as input the latent style <math>w</math> and the feature map <math>x</math>.<br>
An affine layer (fully connected with no activation, block A in the figure) converts the style to a mean <math>y_{b,i}</math> and standard deviation <math>y_{s,i}</math>.<br>
Then the feature map is shifted and scaled to have this mean and standard deviation.<br>
* <math>\operatorname{AdaIN}(\mathbf{x}_i, \mathbf{y}) = \mathbf{y}_{s,i}\frac{\mathbf{x}_i - \mu(\mathbf{x}_i)}{\sigma(\mathbf{x}_i)} + \mathbf{y}_{b,i}</math>
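The formula translates directly into a few lines of NumPy. This is a minimal sketch under the assumption that the feature map is stored channel-first as <code>(C, H, W)</code> and that <code>y_s</code>, <code>y_b</code> are the per-channel scale and bias produced by the affine layer A; a small <code>eps</code> guards against division by zero.

```python
import numpy as np

def adain(x, y_s, y_b, eps=1e-8):
    """AdaIN: normalize each channel of x to zero mean / unit std,
    then rescale to the style's statistics (y_s, y_b)."""
    mu = x.mean(axis=(1, 2), keepdims=True)      # per-channel mean mu(x_i)
    sigma = x.std(axis=(1, 2), keepdims=True)    # per-channel std sigma(x_i)
    return y_s[:, None, None] * (x - mu) / (sigma + eps) + y_b[:, None, None]
```

After the call, channel <math>i</math> of the output has mean <math>y_{b,i}</math> and standard deviation <math>y_{s,i}</math> regardless of the input's original statistics, which is how the style vector controls the feature maps at that resolution.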


==Results==