[https://arxiv.org/abs/1707.04993 Paper]<br>
MoCoGAN: Decomposing Motion and Content for Video Generation<br>
===Video Prediction===
* [http://openaccess.thecvf.com/content_iccv_2017/html/Liang_Dual_Motion_GAN_ICCV_2017_paper.html Dual Motion GAN]
** Uses a frame generator and a motion generator
** Combines the outputs of both generators with a fusing layer
** Trained with a frame discriminator and a motion discriminator; each generator is trained against both discriminators (see the sketch below)
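Below is a minimal PyTorch sketch of this dual-generator / dual-discriminator layout. Module names such as <code>FrameGenerator</code>, <code>MotionGenerator</code>, and <code>FusingLayer</code> are hypothetical illustrations of the idea above, not the authors' implementation; the networks are toy-sized and the flow warping step is simplified.
<syntaxhighlight lang="python">
# Hypothetical sketch of the dual-generator / dual-discriminator idea
# (not the Dual Motion GAN authors' code). Requires PyTorch >= 1.10
# for torch.meshgrid(..., indexing="ij").
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    """Predicts next-frame pixels directly from the current frame."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh())
    def forward(self, frame):
        return self.net(frame)

class MotionGenerator(nn.Module):
    """Predicts a dense 2-channel flow field from the current frame."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1))
    def forward(self, frame):
        return self.net(frame)

def warp(frame, flow):
    """Warp a frame with a flow field via grid_sample (simplified)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return nn.functional.grid_sample(frame, grid, align_corners=True)

class FusingLayer(nn.Module):
    """Fuses the pixel-path and flow-path predictions into one frame."""
    def __init__(self, channels=3):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, 1)
    def forward(self, pixel_pred, warped_pred):
        return self.mix(torch.cat([pixel_pred, warped_pred], dim=1))

class Discriminator(nn.Module):
    """Scores real vs. generated inputs (in_ch=3 for frames, 2 for flow)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one logit per sample

# One greatly simplified generator step on a toy batch.
frame_g, motion_g, fuse = FrameGenerator(), MotionGenerator(), FusingLayer()
frame_d, motion_d = Discriminator(in_ch=3), Discriminator(in_ch=2)
bce = nn.BCEWithLogitsLoss()

cur = torch.rand(4, 3, 64, 64)          # current frames (toy data)
pixel_pred = frame_g(cur)               # pixel-path prediction
flow_pred = motion_g(cur)               # motion-path prediction
fused = fuse(pixel_pred, warp(cur, flow_pred))

real = torch.ones(4)
g_loss = bce(frame_d(fused), real) + bce(motion_d(flow_pred), real)
g_loss.backward()  # gradients reach both generators through the fusing layer
</syntaxhighlight>
The point of the last two lines is that the fused frame depends on both generators, so the adversarial feedback from both discriminators back-propagates into each of them.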


==Resources==
* [https://github.com/soumith/ganhacks Tricks for Training GANs]