SinGAN: Learning a Generative Model from a Single Natural Image
Authors: Tamar Rott Shaham (Technion), Tali Dekel (Google Research), Tomer Michaeli (Technion)

Links:
  • Paper
  • Author's Website
  • Supplementary Material Mirror (CVF Host)
  • ICCV
  • GitHub Official PyTorch Implementation


Basic Idea

Train GANs to fill in details at different scales of the image

  • Start by building a GAN to generate low-resolution versions of the original image
  • Then upscale the image and build a GAN to add details to your upscaled image
  • Fix the parameters of the previous GAN. Upscale the outputs and repeat.
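
A minimal sketch of this schedule, with hypothetical helper names such as build_gan, train_single_scale, and reals (the official code organizes this differently in detail):

# Train scales one at a time, coarsest first; earlier scales stay frozen.
generators = []
for n in range(num_scales):                  # num_scales is assumed given
    G_n, D_n = build_gan(n)                  # fresh GAN for this scale
    train_single_scale(G_n, D_n, reals[n])   # reals[n]: image at scale n
    for p in G_n.parameters():               # freeze before moving up a scale
        p.requires_grad_(False)
    generators.append(G_n)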

Architecture

They build \(\displaystyle N\) GANs, where \(\displaystyle N\) is usually 7-8.
Each GAN \(\displaystyle G_n\) adds details to the image produced by GAN \(\displaystyle G_{n+1}\) below it.
The final GAN \(\displaystyle G_0\) adds only fine details.
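
A sketch of the sampling pass, assuming generators is the list of trained generators ordered coarsest to finest and noise_amps holds each scale's noise standard deviation (names and the upsampling factor are illustrative, not from the official repo):

import torch
import torch.nn.functional as F

def sample(generators, noise_amps, coarsest_shape, r=4/3):
    """Coarse-to-fine sampling: each scale refines the upsampled
    output of the coarser scale below it."""
    x = torch.zeros(coarsest_shape)        # no image below the coarsest scale
    for i, (G, sigma) in enumerate(zip(generators, noise_amps)):
        if i > 0:                          # upsample the coarser scale's output
            x = F.interpolate(x, scale_factor=r, mode='bilinear',
                              align_corners=False)
        z = sigma * torch.randn_like(x)    # fresh per-scale noise
        x = G(z, x)                        # G adds this scale's detail to x
    return x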

Generator

They use \(\displaystyle N\) generators, which they call a hierarchy of patch-GANs.
Each generator consists of 5 convolutional blocks:
Conv(\(\displaystyle 3 \times 3\))-BatchNorm-LeakyReLU.
Note: This generator is similar to pix2pix.
They use 32 kernels per block at the coarsest scale and increase the count \(\displaystyle 2 \times\) every 4 scales.
This means that at the coarsest scales, their convolutional layers have an input and output of 32 channels.
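
A minimal PyTorch sketch of one such block (the class name, channel counts, and LeakyReLU slope here are illustrative; the official module may differ in detail):

import torch.nn as nn

class ConvBlock(nn.Sequential):
    """One generator block: Conv(3x3) -> BatchNorm -> LeakyReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),  # slope 0.2 is an assumption
        )

# Five blocks with 32 channels, as at the coarsest scale
# (the real model maps 3 image channels in and out at the ends):
body = nn.Sequential(*[ConvBlock(32, 32) for _ in range(5)])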

Batch Normalization

Definitions
  • Internal Covariate Shift - the change in distribution of network activations as network parameters change.
  • Whitening - linearly transforming inputs so they have zero mean, unit variance, and decorrelated components.
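
For reference, batch normalization normalizes each activation over the mini-batch and then applies a learned affine transform:

\(\displaystyle \hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y = \gamma \hat{x} + \beta\)

where \(\displaystyle \mu_B\) and \(\displaystyle \sigma_B^2\) are the mini-batch mean and variance and \(\displaystyle \gamma, \beta\) are learned parameters.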

Leaky ReLU

ReLU is \(\displaystyle \begin{cases} x & \text{if }x > 0\\ 0 & \text{if }x \leq 0 \end{cases} \).
If the input is \(\displaystyle \leq 0\), then any gradient through that neuron will always be 0.
This leads to dead neurons, which remain dead if the neurons below never output a positive number.
That is, you get neurons which always output \(\displaystyle 0\) throughout the training process.
Leaky ReLU, \(\displaystyle \begin{cases} x & \text{if }x > 0\\ 0.01x & \text{if }x \leq 0 \end{cases} \), always has a nonzero gradient, so neurons below will always be updated.
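
A quick PyTorch illustration of the difference (not from the paper): a negative input gets zero gradient through ReLU but a small nonzero gradient through leaky ReLU.

import torch
import torch.nn.functional as F

x = torch.tensor(-2.0, requires_grad=True)
F.relu(x).backward()
print(x.grad)     # tensor(0.) -- no gradient flows; the neuron is "dead"

x.grad = None
F.leaky_relu(x, negative_slope=0.01).backward()
print(x.grad)     # tensor(0.0100) -- the neuron still receives updates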

Discriminator

The architecture is the same as that of the generator.
The patch size is \(\displaystyle 11 \times 11\).

Training and Loss Function

\(\displaystyle \min_{G_n} \max_{D_n} \mathcal{L}_{adv}(G_n, D_n) + \alpha \mathcal{L}_{rec}(G_n)\)
They use a combination of the standard GAN adversarial loss and a reconstruction loss.

Adversarial Loss

They use the WGAN-GP loss for \(\displaystyle \mathcal{L}_{adv}\).
This drops the log from the traditional cross-entropy loss: the discriminator acts as a critic that outputs raw scores, and a gradient penalty is added on the discriminator.

# When training the discriminator:
# maximize D(real) - D(fake), i.e. minimize -D(real) + D(fake)
netD.zero_grad()
output = netD(real).to(opt.device)
errD_real = -output.mean()              # critic score on the real image
errD_real.backward(retain_graph=True)
# ... make noise and prev (upsampled output of the coarser scale) ...
fake = netG(noise.detach(), prev)
output = netD(fake.detach())            # detach so G is not updated here
errD_fake = output.mean()               # critic score on the generated image
errD_fake.backward(retain_graph=True)

# When training the generator: maximize D(fake)
output = netD(fake)
errG = -output.mean()
errG.backward(retain_graph=True)
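
The code above omits the gradient-penalty term that makes this WGAN-GP rather than plain WGAN. Here is a sketch following the standard WGAN-GP formulation (the penalty weight and interpolation details are the usual choices, not copied from the official repo):

import torch

def gradient_penalty(netD, real, fake, lambda_grad=0.1):
    # Evaluate the critic at a random point between a real and a fake sample
    alpha = torch.rand(1, device=real.device)
    interp = (alpha * real.detach()
              + (1 - alpha) * fake.detach()).requires_grad_(True)
    d_interp = netD(interp)
    grads = torch.autograd.grad(outputs=d_interp, inputs=interp,
                                grad_outputs=torch.ones_like(d_interp),
                                create_graph=True)[0]
    # Penalize deviation of the critic's gradient norm from 1
    g = grads.view(grads.size(0), -1)
    return lambda_grad * ((g.norm(2, dim=1) - 1) ** 2).mean()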

Reconstruction Loss

\(\displaystyle \mathcal{L}_{rec} = \Vert G_n(0, (\bar{x}^{rec}_{n+1}) \uparrow^r) - x_n \Vert ^2\)
The reconstruction loss ensures that the original image can be built by the GAN.
Rather than inputting noise to the generators, they input \(\displaystyle \{z_N^{rec}, z_{N-1}^{rec}, ..., z_0^{rec}\} = \{z^*, 0, ..., 0\}\) where the initial noise \(\displaystyle z^*\) is drawn once and then fixed during the rest of the training.
The standard deviation \(\displaystyle \sigma_n\) of the noise \(\displaystyle z_n\) is proportional to the root mean squared error (RMSE) between the reconstructed patch and the original patch.

loss = nn.MSELoss()                     # torch.nn imported as nn
# z_opt is the fixed noise z*, zero at every scale except the coarsest
Z_opt = opt.noise_amp * z_opt + z_prev
rec_loss = alpha * loss(netG(Z_opt.detach(), z_prev), real)
rec_loss.backward(retain_graph=True)
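
The proportionality to the RMSE can be sketched as follows (assuming z_prev is the reconstruction carried up from the coarser scale and noise_amp_init is the proportionality constant; names are illustrative):

import torch

# This scale's noise std is proportional to the RMSE between the real
# image and the reconstruction from the previous scale.
rmse = torch.sqrt(torch.mean((real - z_prev) ** 2))
opt.noise_amp = opt.noise_amp_init * rmse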

Evaluation

They evaluate their method using an Amazon Mechanical Turk (AMT) user study and using Single Image Fréchet Inception Distance (SIFID).

Amazon Mechanical Turk Study

Fréchet Inception Distance

Results

Below are images of their results from their paper and website.


Applications

The following are applications they identify.
The basic idea for each of these applications is to start your input at an intermediate GAN rather than the bottom GAN.
While the bottom layer is a purely unconditional GAN, the intermediate generators are more akin to conditional GANs.
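
A sketch of this injection, reusing the sampling loop from above (hypothetical names; fine_generators would be the generators from the chosen intermediate scale upward):

import torch
import torch.nn.functional as F

def inject(fine_generators, noise_amps, y, r=4/3):
    """Start generation from image y at an intermediate scale;
    only the remaining finer generators add detail."""
    x = y                                        # downsampled input image
    for G, sigma in zip(fine_generators, noise_amps):
        x = F.interpolate(x, scale_factor=r, mode='bilinear',
                          align_corners=False)
        z = sigma * torch.randn_like(x)
        x = G(z, x)
    return x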

Super-Resolution

Upscale the input image by repeatedly upsampling it and injecting it into the finest-scale generator.

Paint-to-Image

Convert a drawing to an image.

Harmonization

Harmonize, or blend in the style of, an object that has been cut and pasted into the image.

Editing

Single Image Animation

Generate a video from a single image.
In SinGAN, they perform a random walk in the noise passed as input to the upper levels of the GAN hierarchy.
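
A sketch of such a walk (the pull-back toward the fixed reconstruction noise \(\displaystyle z^{rec}\) keeps each frame close to the original image; alpha and sigma are illustrative parameters, not the official implementation):

import torch

def noise_walk(z_rec, steps, alpha=0.1, sigma=0.05):
    """Random walk around the reconstruction noise z_rec."""
    z, frames = z_rec.clone(), []
    for _ in range(steps):
        step = z + sigma * torch.randn_like(z)     # drift from previous frame
        z = alpha * z_rec + (1 - alpha) * step     # pull back toward z_rec
        frames.append(z)
    return frames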

Repo

The official implementation of SinGAN can be found in their GitHub repo.

Citation

Here is the bibtex:

@InProceedings{Shaham_2019_ICCV,
author = {Shaham, Tamar Rott and Dekel, Tali and Michaeli, Tomer},
title = {SinGAN: Learning a Generative Model From a Single Natural Image},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}