
StyleGAN CVPR 2019
Paper (arXiv 2018), CVPR 2019 Open Access
StyleGAN Github
StyleGAN2 Paper
StyleGAN2 Github
A generator architecture by Nvidia which allows controlling the "style" of the GAN output by applying adaptive instance normalization (AdaIN) at different layers of the synthesis network.
StyleGAN2 improves upon this by replacing AdaIN with weight demodulation, adding path-length regularization, and replacing progressive growing with a skip/residual generator design, which removes the characteristic blob ("droplet") artifacts.

Architecture

Figure: Architecture of StyleGAN, from the paper.

StyleGAN consists of a mapping network \(\displaystyle f\), which maps a latent code \(\displaystyle \mathbf{z} \in \mathcal{Z}\) to an intermediate latent \(\displaystyle \mathbf{w} \in \mathcal{W}\), and a synthesis network \(\displaystyle g\), which generates the image from \(\displaystyle \mathbf{w}\).

Mapping Network

The mapping network \(\displaystyle f\) consists of 8 fully connected layers with leaky ReLU activations at each layer; in the paper both \(\displaystyle \mathbf{z}\) and \(\displaystyle \mathbf{w}\) are 512-dimensional.
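A minimal sketch of the mapping network in PyTorch (the official implementation is in TensorFlow); the layer widths and the input normalization follow the paper, but the class and argument names are illustrative.

import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """8 fully connected layers mapping z to the intermediate latent w."""
    def __init__(self, latent_dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # The paper normalizes z before the mapping network ("pixel norm").
        z = z / torch.sqrt(torch.mean(z ** 2, dim=1, keepdim=True) + 1e-8)
        return self.net(z)

w = MappingNetwork()(torch.randn(4, 512))  # batch of 4 intermediate latents, shape (4, 512)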

Synthesis Network

The synthesis network is based on the progressive-growing (ProGAN) generator. It consists of 9 convolution blocks, one for each resolution from \(\displaystyle 4^2\) to \(\displaystyle 1024^2\); the first block starts from a learned \(\displaystyle 4 \times 4 \times 512\) constant rather than from the latent code.
Each subsequent block consists of upsample, 3x3 convolution, AdaIN, 3x3 convolution, AdaIN. After each convolution, Gaussian noise scaled by learned per-channel factors is added to the feature maps before the AdaIN operation; the AdaIN scale and bias come from a learned affine transformation of \(\displaystyle \mathbf{w}\).
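A minimal PyTorch sketch of one synthesis block following the description above; the module and argument names are illustrative rather than taken from the official implementation, and the first \(\displaystyle 4^2\) block (which starts from the learned constant and skips the upsample) is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseInjection(nn.Module):
    """Adds per-pixel Gaussian noise scaled by a learned per-channel factor."""
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        noise = torch.randn(x.shape[0], 1, x.shape[2], x.shape[3], device=x.device)
        return x + self.scale * noise

class AdaIN(nn.Module):
    """Instance-normalizes x, then applies a scale and bias derived from w."""
    def __init__(self, channels, w_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.style = nn.Linear(w_dim, 2 * channels)

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)
        return (1 + scale[:, :, None, None]) * self.norm(x) + bias[:, :, None, None]

class SynthesisBlock(nn.Module):
    """Upsample, 3x3 conv, noise, AdaIN, 3x3 conv, noise, AdaIN."""
    def __init__(self, in_ch, out_ch, w_dim=512):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.noise1, self.noise2 = NoiseInjection(out_ch), NoiseInjection(out_ch)
        self.adain1, self.adain2 = AdaIN(out_ch, w_dim), AdaIN(out_ch, w_dim)

    def forward(self, x, w):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = self.adain1(self.noise1(self.conv1(x)), w)
        x = self.adain2(self.noise2(self.conv2(x)), w)
        return x

x = SynthesisBlock(512, 512)(torch.randn(1, 512, 4, 4), torch.randn(1, 512))  # -> (1, 512, 8, 8)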

Adaptive Instance Normalization
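As defined in the paper, AdaIN first normalizes each feature map \(\displaystyle \mathbf{x}_i\) to zero mean and unit variance, then applies a per-feature-map scale and bias given by the style \(\displaystyle \mathbf{y} = (\mathbf{y}_s, \mathbf{y}_b)\), which is computed from \(\displaystyle \mathbf{w}\) by a learned affine transformation:

\(\displaystyle \operatorname{AdaIN}(\mathbf{x}_i, \mathbf{y}) = \mathbf{y}_{s,i} \frac{\mathbf{x}_i - \mu(\mathbf{x}_i)}{\sigma(\mathbf{x}_i)} + \mathbf{y}_{b,i}\)

Unlike batch normalization, the statistics \(\displaystyle \mu(\mathbf{x}_i)\) and \(\displaystyle \sigma(\mathbf{x}_i)\) are computed per feature map and per sample, and there are no learned normalization parameters; all scaling comes from the style.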

Results

StyleGAN2

Related

Resources