Generative adversarial network: Difference between revisions
Revision as of 13:18, 19 December 2019
Generative adversarial networks (GANs) are a class of generative models introduced by Ian Goodfellow et al. in 2014.
Goal: Learn to generate examples from the same distribution as your training set.
Basic Structure
GANs consist of a generator and a discriminator.
Training alternates between the two networks: in each iteration, the discriminator is first updated to distinguish real training examples from generated ones, and then the generator is updated to produce samples that the discriminator classifies as real.
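The alternating update scheme above can be sketched on a toy problem. The setup here is hypothetical and chosen for simplicity: real data is drawn from N(3, 1), the generator is a single shift parameter g(z) = z + theta, and the discriminator is a scalar logistic classifier, with gradients computed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration): real data ~ N(3, 1),
# generator g(z) = z + theta with latent z ~ N(0, 1),
# discriminator d(x) = sigmoid(w*x + b); all parameters are scalars.
theta, w, b = 0.0, 0.1, 0.0
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for it in range(500):
    # --- update discriminator: push d(real) -> 1, d(fake) -> 0 ---
    real = rng.normal(3.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + b)
        # gradient of binary cross-entropy w.r.t. w and b: dL/dlogit = p - y
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    # --- update generator: push d(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    p = sigmoid(w * (z + theta) + b)
    # dL/dtheta = (p - 1) * w, averaged over the batch
    theta -= lr * np.mean((p - 1.0) * w)

print(theta)  # theta should move toward the real mean of 3
```

Because the two objectives pull against each other, theta does not converge monotonically; it drifts toward the real mean and then hovers near it, which is the equilibrium where the discriminator outputs about 0.5 on generated samples.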
Variations
Wasserstein GAN
Paper
Medium post
The WGAN and WGAN-GP loss functions improve the stability of training.
Normally, the discriminator is trained with a sigmoid cross-entropy loss.
WGAN instead approximates the Wasserstein distance between the real and generated distributions: the sigmoid and cross-entropy are removed, so the discriminator (now called a critic) outputs an unbounded score, and the critic's weights are clipped (clamped) to a range \(\displaystyle [-c, c]\) to keep it approximately 1-Lipschitz.
However, weight clipping limits the capacity of the critic and can cause vanishing or exploding gradients.
Instead of clipping, WGAN-GP adds a gradient penalty term \(\displaystyle \lambda\, \mathbb{E}_{\hat{x}}\left[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\right]\), where \(\displaystyle \hat{x}\) is sampled along lines between real and generated points, to enforce the 1-Lipschitz constraint.
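The contrast between the standard discriminator loss and the WGAN critic loss can be shown numerically. This is a minimal NumPy sketch, not a training loop: the critic scores below are made-up values standing in for network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw (pre-sigmoid) critic outputs for a batch of
# real and generated samples.
d_real = rng.normal(1.0, 0.5, 32)
d_fake = rng.normal(-1.0, 0.5, 32)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Standard GAN discriminator loss: sigmoid + binary cross-entropy.
bce_loss = (-np.mean(np.log(sigmoid(d_real)))
            - np.mean(np.log(1.0 - sigmoid(d_fake))))

# WGAN critic loss: no sigmoid, no log -- just a difference of mean scores.
# The critic maximizes E[D(real)] - E[D(fake)], i.e. minimizes its negation.
wgan_loss = -(np.mean(d_real) - np.mean(d_fake))

# WGAN's Lipschitz constraint via weight clipping: after each optimizer
# step, clamp every weight to [-c, c] (the paper uses c = 0.01).
weights = rng.normal(0.0, 0.05, (4, 4))
c = 0.01
clipped = np.clip(weights, -c, c)

print(bce_loss, wgan_loss, np.abs(clipped).max())
```

Note that `wgan_loss` is negative whenever the critic already scores real samples above fakes; unlike cross-entropy, it is unbounded below, which is why the Lipschitz constraint (clipping here, gradient penalty in WGAN-GP) is needed to keep the objective meaningful.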
Applications
CycleGAN
InfoGAN
SinGAN
Paper
Website
GitHub: Official PyTorch Implementation
SinGAN: Learning a Generative Model from a Single Natural Image
MoCoGAN
Paper
MoCoGAN: Decomposing Motion and Content for Video Generation