StyleGAN
===Mapping Network===
The goal of the mapping network is to generate a latent vector <math>w</math>.<br>
This latent <math>w</math> is used by the synthesis network as input to each AdaIN block.<br>
Before each AdaIN block, a learned affine transformation converts <math>w</math> into a "style" in the form of a mean and standard deviation.<br>
The mapping network <math>f</math> consists of 8 fully connected layers with leaky ReLU activations at each layer.<br>
Both the input and output of this network are vectors of size 512.<br>
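Below is a minimal PyTorch-style sketch of such a mapping network; the class name, leaky ReLU slope, and usage are illustrative assumptions, not the paper's official implementation.<br>
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Sketch of the mapping network f: 8 fully connected layers,
    each followed by a leaky ReLU, mapping a 512-d z to a 512-d w."""
    def __init__(self, dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers.append(nn.Linear(dim, dim))
            layers.append(nn.LeakyReLU(0.2))  # slope 0.2 is an assumption
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)  # w, same shape as z: (batch, 512)

# Usage: map a batch of latent codes z to intermediate latents w.
z = torch.randn(4, 512)
w = MappingNetwork()(z)  # shape (4, 512)
</syntaxhighlight>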
An affine layer (fully connected with no activation, block A in the figure) converts the style to a mean <math>y_{b,i}</math> and standard deviation <math>y_{s,i}</math>.<br>
Then the feature map is shifted and scaled to have this mean and standard deviation.<br>
* <math>\operatorname{AdaIN}(\mathbf{x}_i, \mathbf{y}) = \mathbf{y}_{s,i}\frac{\mathbf{x}_i - \mu(\mathbf{x}_i)}{\sigma(\mathbf{x}_i)} + \mathbf{y}_{b,i}</math>
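The following is a minimal PyTorch-style sketch of this AdaIN operation together with the learned affine layer (block A); the module name, epsilon term, and tensor shapes are illustrative assumptions, not the official implementation.<br>
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Sketch of AdaIN driven by w: the affine layer (block A) maps w to a
    per-channel scale y_s and bias y_b; each feature map channel is
    instance-normalized, then scaled by y_s and shifted by y_b."""
    def __init__(self, w_dim, num_channels):
        super().__init__()
        self.affine = nn.Linear(w_dim, 2 * num_channels)  # w -> (y_s, y_b)

    def forward(self, x, w):
        # x: (batch, channels, height, width), w: (batch, w_dim)
        y = self.affine(w)                              # (batch, 2*channels)
        y_s, y_b = y.chunk(2, dim=1)                    # scale and bias
        y_s = y_s[:, :, None, None]
        y_b = y_b[:, :, None, None]
        mu = x.mean(dim=(2, 3), keepdim=True)           # per-channel mean mu(x_i)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-8  # per-channel std sigma(x_i)
        return y_s * (x - mu) / sigma + y_b             # AdaIN(x, y)

# Usage: restyle a batch of feature maps with a batch of latents w.
x = torch.randn(4, 256, 32, 32)
w = torch.randn(4, 512)
out = AdaIN(512, 256)(x, w)  # same shape as x
</syntaxhighlight>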
==Results==