SinGAN: Learning a Generative Model from a Single Natural Image

[https://github.com/tamarott/SinGAN Github Official PyTorch Implementation]<br>
SinGAN: Learning a Generative Model from a Single Natural Image<br>
==Basic Idea==
Bootstrap patches of the original image and build a pyramid of GANs, each of which adds fine details to blurry patches at a different patch size (a sketch of the resulting training schedule follows the list).
* Start by building a GAN to generate low-resolution versions of the original image.
* Then upscale the image and build a GAN to add details to patches of the upscaled image.
* Fix the parameters of the previous GANs, upscale their outputs, and repeat at the next scale.
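The list above amounts to the following training schedule. This is only an illustrative outline; every name in it (<code>train_pyramid</code>, <code>make_generator</code>, <code>train_single_scale</code>) is a placeholder, not part of the official implementation.
<syntaxhighlight lang="python">
def train_pyramid(real_pyramid, make_generator, train_single_scale):
    """Coarse-to-fine training outline (placeholder names, not the official API).

    real_pyramid       -- [x_N, ..., x_0]: the single training image downsampled
                          to each scale, coarsest first.
    make_generator     -- factory returning a fresh per-scale generator.
    train_single_scale -- routine that trains one GAN against one real image,
                          using the already-trained (frozen) coarser generators
                          to produce and upscale the image it adds details to.
    """
    generators = []
    for x_n in real_pyramid:
        G_n = make_generator()
        train_single_scale(G_n, x_n, frozen_generators=generators)
        # Fix the parameters of the finished GAN before moving to the next scale.
        for p in G_n.parameters():
            p.requires_grad_(False)
        generators.append(G_n)
    return generators
</syntaxhighlight>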
==Architecture==
They build a pyramid of GANs <math>G_N, G_{N-1}, \dots, G_0</math>, one per scale.<br>
Each GAN <math>G_n</math> adds details to patches of the image produced by the coarser GAN <math>G_{n+1}</math> below it.<br>
The final GAN <math>G_0</math> operates at the full resolution and adds only the finest details.
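Concretely, the coarsest generator maps pure noise to an image, and every finer generator adds detail to an upsampled version of the previous output:
<math>\bar{x}_N = G_N(z_N), \qquad \bar{x}_n = G_n\left(z_n, (\bar{x}_{n+1})\uparrow^r\right) \text{ for } n < N,</math>
where <math>\uparrow^r</math> denotes upsampling by the scale factor <math>r</math> and <math>z_n</math> is a spatial noise map.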
===Generator===
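Based on the paper and the official implementation, each per-scale generator is a small fully convolutional network of conv-BN-LeakyReLU blocks whose output is added back to the upsampled image from the coarser scale, so it only has to model the residual fine details. A minimal sketch; the channel count, block count, and exact noise handling are assumptions, not a copy of the official code:
<syntaxhighlight lang="python">
import torch.nn as nn

class ConvBlock(nn.Sequential):
    """Conv -> BatchNorm -> LeakyReLU building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )

class SingleScaleGenerator(nn.Module):
    """One generator of the pyramid: adds residual detail to the coarser image."""
    def __init__(self, channels=32, num_blocks=5, img_ch=3):
        super().__init__()
        layers = [ConvBlock(img_ch, channels)]
        layers += [ConvBlock(channels, channels) for _ in range(num_blocks - 2)]
        layers += [nn.Conv2d(channels, img_ch, kernel_size=3, padding=1), nn.Tanh()]
        self.body = nn.Sequential(*layers)

    def forward(self, noise, prev_upsampled):
        # The noise map is added to the upsampled coarser image before the conv stack.
        out = self.body(noise + prev_upsampled)
        # Residual connection: the network learns only the missing fine details.
        return out + prev_upsampled
</syntaxhighlight>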
===Discriminator===
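In the paper, the discriminator at each scale is a Markovian (PatchGAN-style) discriminator with the same convolutional structure as the generator, so it scores overlapping patches rather than the whole image. A minimal sketch under the same assumptions as the generator above:
<syntaxhighlight lang="python">
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Per-scale Markovian discriminator: outputs a map of per-patch scores."""
    def __init__(self, channels=32, num_blocks=5, img_ch=3):
        super().__init__()
        def block(in_ch, out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.LeakyReLU(0.2, inplace=True),
            )
        layers = [block(img_ch, channels)]
        layers += [block(channels, channels) for _ in range(num_blocks - 2)]
        layers += [nn.Conv2d(channels, 1, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Each entry of the output scores one receptive-field-sized patch.
        return self.body(x)
</syntaxhighlight>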
==Training and Loss Function==
Each scale is trained with a weighted combination of an adversarial loss (WGAN-GP in the paper) and a reconstruction loss.
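Written out, the per-scale training objective from the paper is the adversarial term plus an <math>\alpha</math>-weighted reconstruction term:
<math>\min_{G_n} \max_{D_n} L_{adv}(G_n, D_n) + \alpha L_{rec}(G_n)</math>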
===Reconstruction Loss===
<math>L_{rec} = \Vert G_n(0,(\bar{x}^{rec}_{n+1})\uparrow^r) - x_n \Vert ^2</math><br>
The reconstruction loss ensures that the original image can be reproduced exactly by the pyramid of generators from one specific set of input noise maps.
For this loss, rather than feeding random noise to the generators, they input the fixed set
<math>\{z_N^{rec}, z_{N-1}^{rec}, ..., z_0^{rec}\} = \{z^*, 0, ..., 0\}</math>
where the initial noise <math>z^*</math> is drawn once and then kept fixed for the rest of training. At the coarsest scale, where there is no coarser image to upsample, the reconstruction loss is simply <math>\Vert G_N(z^*) - x_N \Vert ^2</math>.
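A minimal PyTorch sketch of this per-scale reconstruction term for <math>n < N</math>; the function and argument names (and the assumed generator signature <code>G_n(noise, prev_image)</code>) are illustrative, not the official API:
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def reconstruction_loss(G_n, x_rec_prev, x_n):
    """Reconstruction loss at one scale (illustrative names).

    G_n        -- generator at scale n, assumed to take (noise, prev_image).
    x_rec_prev -- reconstruction produced at the coarser scale n + 1.
    x_n        -- the real image downsampled to scale n.
    """
    # Upsample the coarser reconstruction to this scale (the "uparrow r" operator).
    upsampled = F.interpolate(x_rec_prev, size=x_n.shape[-2:],
                              mode='bilinear', align_corners=False)
    # Zero noise at every scale below N: z^* is only injected at the coarsest scale.
    zero_noise = torch.zeros_like(upsampled)
    x_rec = G_n(zero_noise, upsampled)
    # Mean squared error: the squared L2 distance up to a normalization factor.
    return F.mse_loss(x_rec, x_n)
</syntaxhighlight>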