SinGAN: Learning a Generative Model from a Single Natural Image

Each generator consists of 5 convolutional blocks:<br>
Conv(<math>3 \times 3</math>)-BatchNorm-LeakyReLU.<br>
Note: this generator architecture is similar to that of pix2pix.<br>
They use 32 kernels per block at the coarsest scale and increase <math>2 \times</math> every 4 scales.<br>
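For concreteness, here is a minimal PyTorch sketch of one such generator. It assumes 3-channel images and that each scale's generator adds a residual on top of the upsampled output of the previous (coarser) scale, as in the paper; the class name, the 0.2 LeakyReLU slope, and the final tanh are illustrative choices, not taken from the description above.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One of the 5 blocks: Conv(3x3)-BatchNorm-LeakyReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),  # slope is an assumed value
    )

class ScaleGenerator(nn.Module):
    """Generator for a single scale of the pyramid: 5 conv blocks."""
    def __init__(self, num_kernels=32):  # 32 kernels at the coarsest scale
        super().__init__()
        self.head = conv_block(3, num_kernels)
        self.body = nn.Sequential(*[conv_block(num_kernels, num_kernels)
                                    for _ in range(3)])
        # Last block maps back to 3 image channels; tanh bounds the output
        self.tail = nn.Sequential(
            nn.Conv2d(num_kernels, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, noise, prev):
        # prev is the upsampled output of the previous scale;
        # the generator learns a residual image on top of it
        x = self.tail(self.body(self.head(noise + prev)))
        return x + prev

g = ScaleGenerator(num_kernels=32)
z = torch.randn(1, 3, 32, 32)
prev = torch.zeros(1, 3, 32, 32)  # coarsest scale starts from nothing
out = g(z, prev)                  # shape (1, 3, 32, 32)
</syntaxhighlight>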


* Internal Covariate Shift - the change in distribution of network activations as network parameters change.
* Whitening - transforming activations to be zero-mean, unit-variance, and decorrelated; batch norm approximates this per feature, without the decorrelation step.
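As a concrete illustration of what batch norm computes, here is a NumPy sketch of the forward pass over a mini-batch; the function name is hypothetical, and <code>gamma</code>, <code>beta</code>, and <code>eps</code> are the standard learnable scale, learnable shift, and numerical-stability constant.
<syntaxhighlight lang="python">
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension: a cheap, per-feature
    # stand-in for full whitening (no decorrelation across features).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Learnable scale and shift restore representational capacity
    return gamma * x_hat + beta

x = 5.0 * np.random.randn(64, 32) + 3.0       # batch of 64, 32 features
y = batch_norm_forward(x, np.ones(32), np.zeros(32))
print(y.mean(axis=0)[:3], y.std(axis=0)[:3])  # ~0 means, ~1 stds
</syntaxhighlight>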
====Leaky ReLU====
ReLU is
<math>\begin{cases}
x & \text{if }x > 0\\
0 & \text{if }x \leq 0
\end{cases}
</math>. <br>
If the input is <math>\leq 0</math>, then any gradient through that neuron will always be 0.<br>
This leads to dead neurons, which remain dead if the pre-activation input never becomes positive again.<br>
That is, you get neurons which output <math>0</math> throughout the rest of the training process.<br>
Leaky ReLU: <math>
\begin{cases}
x & \text{if }x > 0\\
0.01x & \text{if }x \leq 0
\end{cases}
</math>
always has a nonzero gradient, so the neurons below will always be updated.
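The difference is easiest to see in the gradients. A small NumPy sketch (hypothetical function names; the gradient at exactly 0 follows the <math>x \leq 0</math> branch above):
<syntaxhighlight lang="python">
import numpy as np

def relu_grad(x):
    # Exactly 0 wherever x <= 0: no gradient flows, the neuron can die
    return (x > 0).astype(float)

def leaky_relu_grad(x, slope=0.01):
    # Never 0: a small gradient always flows for non-positive inputs
    return np.where(x > 0, 1.0, slope)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu_grad(x))        # [0. 0. 0. 1.]
print(leaky_relu_grad(x))  # [0.01 0.01 0.01 1.  ]
</syntaxhighlight>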


===Discriminator===