StyleGAN
===Mapping Network===
The goal of the mapping network is to generate a latent vector <math>w</math>.<br>
This latent <math>w</math> is used by the synthesis network as the input to each AdaIN block.<br>
Before each AdaIN block, a learned affine transformation converts <math>w</math> into a "style" in the form of a mean and standard deviation.<br>
The mapping network <math>f</math> consists of 8 fully connected layers with leaky ReLU activations at each layer.<br>
Both the input <math>z</math> and the output <math>w</math> are vectors of size 512.<br>
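The mapping network's forward pass can be sketched in NumPy as below. This is a minimal illustration with hypothetical randomly initialized weights; the real network is trained end-to-end with the generator, and details such as weight initialization and equalized learning rate are omitted here.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # Leaky ReLU: pass positives through, scale negatives by alpha
    return np.where(x > 0, x, alpha * x)

def mapping_network(z, weights, biases, alpha=0.2):
    """Forward pass of the 8-layer mapping network f: z -> w.

    z:       input latent vector, shape (512,)
    weights: list of 8 weight matrices, each shape (512, 512)
    biases:  list of 8 bias vectors, each shape (512,)
    """
    h = z
    for W, b in zip(weights, biases):
        h = leaky_relu(h @ W + b, alpha)
    return h  # intermediate latent w, also shape (512,)

# Hypothetical random initialization, for illustration only
rng = np.random.default_rng(0)
weights = [rng.standard_normal((512, 512)) * 0.01 for _ in range(8)]
biases = [np.zeros(512) for _ in range(8)]
w = mapping_network(rng.standard_normal(512), weights, biases)
```

Note that both the input <math>z</math> and the output <math>w</math> are 512-dimensional, as stated above.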
===Synthesis Network===
It consists of 9 convolution blocks, one for each resolution from <math>4^2</math> to <math>1024^2</math>.<br>
Each block consists of upsample, 3x3 convolution, AdaIN, 3x3 convolution, AdaIN. The first <math>4^2</math> block has no upsample and instead starts from a learned constant input.
After each convolution layer, Gaussian noise scaled by a learned per-channel factor (block B in the figure) is added to the feature maps.
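The noise injection step can be sketched as follows. This is a minimal NumPy sketch: a single per-pixel noise image is drawn per sample and scaled by a learned per-channel factor (called `noise_scale` here, a hypothetical name) before being added to the feature maps.

```python
import numpy as np

def add_noise(x, noise_scale, rng=None):
    """Noise injection (block B): add per-pixel Gaussian noise,
    scaled by a learned per-channel factor, to the feature maps.

    x:           feature maps, shape (N, C, H, W)
    noise_scale: learned per-channel scales, shape (C,)
    """
    if rng is None:
        rng = np.random.default_rng()
    n, c, h, w = x.shape
    # One noise image per sample, broadcast across all channels
    noise = rng.standard_normal((n, 1, h, w))
    return x + noise_scale[None, :, None, None] * noise
```

With `noise_scale` set to zeros the injection is a no-op, which is roughly how the learned scales start out before training shapes them.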
====Adaptive Instance Normalization====
Each AdaIN block takes as input the latent style <math>w</math> and the feature map <math>x</math>.<br>
An affine layer (fully connected with no activation, block A in the figure) converts the style to a mean <math>y_{b,i}</math> and standard deviation <math>y_{s,i}</math>.<br>
Then the feature map is shifted and scaled to have this mean and standard deviation.<br>
* <math>\operatorname{AdaIN}(\mathbf{x}_i, \mathbf{y}) = \mathbf{y}_{s,i}\frac{\mathbf{x}_i - \mu(\mathbf{x}_i)}{\sigma(\mathbf{x}_i)} + \mathbf{y}_{b,i}</math>
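The AdaIN formula above translates directly into a few lines of NumPy: each feature map is normalized to zero mean and unit variance over its spatial dimensions, then scaled and shifted by the style's <math>y_{s,i}</math> and <math>y_{b,i}</math>. The small `eps` term is an assumption added here for numerical stability and is not part of the formula.

```python
import numpy as np

def adain(x, y_s, y_b, eps=1e-8):
    """Adaptive Instance Normalization.

    x:   feature maps, shape (N, C, H, W)
    y_s: per-channel style scales y_{s,i}, shape (N, C)
    y_b: per-channel style biases y_{b,i}, shape (N, C)
    """
    # Per-sample, per-channel statistics over the spatial dimensions
    mu = x.mean(axis=(2, 3), keepdims=True)
    sigma = x.std(axis=(2, 3), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)
    # Shift and scale each normalized feature map by its style
    return y_s[:, :, None, None] * x_norm + y_b[:, :, None, None]
```

After this operation each feature map has (approximately) mean <math>y_{b,i}</math> and standard deviation <math>y_{s,i}</math>, which is exactly the "shifted and scaled" behavior described above.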
==Results==
==Related==
* [https://arxiv.org/abs/1904.03189 Image2StyleGAN]
==Resources==
* [https://machinelearningmastery.com/introduction-to-style-generative-adversarial-network-stylegan/ A Gentle Introduction to StyleGAN (Machine Learning Mastery)]
* [https://towardsdatascience.com/explained-a-style-based-generator-architecture-for-gans-generating-and-tuning-realistic-6cb2be0f431 Explained: A Style-Based Generator Architecture for GANs (Towards Data Science)]