SinGAN: Learning a Generative Model from a Single Natural Image

===Adversarial Loss===
They use the [https://arxiv.org/abs/1704.00028 WGAN-GP loss].<br>
Compared with the traditional cross-entropy GAN loss, this drops the log, so the discriminator (critic) outputs unbounded scores rather than probabilities.<br>
<math>\min_{G_n}\max_{D_n}L_{adv}(G_n, D_n)+\alpha L_{rec}(G_n)</math><br>
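Here <math>L_{adv}</math> follows the WGAN-GP formulation of the linked paper. In its standard form, and in practice (as in the code below), the critic <math>D_n</math> is trained to minimize<br>
<math>\mathbb{E}_{\tilde{x}}\left[D_n(\tilde{x})\right] - \mathbb{E}_{x}\left[D_n(x)\right] + \lambda\,\mathbb{E}_{\hat{x}}\left[\left(\lVert \nabla_{\hat{x}} D_n(\hat{x}) \rVert_2 - 1\right)^2\right]</math><br>
where <math>\tilde{x}</math> is a generated sample, <math>\hat{x}</math> is sampled on straight lines between real and generated samples, and <math>\lambda</math> weights the gradient penalty (its specific value is not given here).<br>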
The discriminator is fully convolutional and outputs a map of per-patch scores, so the final loss is the average over all the patches.<br>
<syntaxhighlight lang="python">
# When training the discriminator
netD.zero_grad()
# Score the real image; the fully convolutional discriminator returns a map of per-patch scores.
output = netD(real).to(opt.device)
# Maximize D(real) by minimizing the negative mean over all patches.
errD_real = -output.mean()
errD_real.backward(retain_graph=True)
# ... Make noise and prev ...
fake = netG(noise.detach(), prev)
# Minimize the mean patch score of the detached fake, so only netD receives gradients.
output = netD(fake.detach())
errD_fake = output.mean()
errD_fake.backward(retain_graph=True)
# The WGAN-GP gradient penalty is also computed and backpropagated here (omitted from this excerpt).

# When training the generator
# Maximize D(fake) by minimizing its negative mean; gradients now flow back into netG.
output = netD(fake)
errG = -output.mean()
errG.backward(retain_graph=True)
</syntaxhighlight>
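The gradient-penalty term of WGAN-GP does not appear in the snippet above. Below is a minimal sketch of how such a penalty is commonly computed in PyTorch; the function name, <code>lambda_grad</code>, and the interpolation details are illustrative assumptions, not taken from the SinGAN code.
<syntaxhighlight lang="python">
import torch

def gradient_penalty(netD, real, fake, lambda_grad, device):
    # Hypothetical helper following the standard WGAN-GP recipe:
    # interpolate between real and fake samples with a random factor per sample.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    interpolated = alpha * real.detach() + (1 - alpha) * fake.detach()
    interpolated.requires_grad_(True)
    d_out = netD(interpolated)
    # Gradients of the critic's output with respect to the interpolated input.
    grads = torch.autograd.grad(outputs=d_out, inputs=interpolated,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True, retain_graph=True)[0]
    # Penalize deviation of the per-sample gradient norm from 1.
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_grad * ((grad_norm - 1) ** 2).mean()
</syntaxhighlight>
The resulting penalty would be backpropagated together with <code>errD_real</code> and <code>errD_fake</code> before the discriminator's optimizer step.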


===Reconstruction Loss===