Generative adversarial network


Generative adversarial networks (GANs) were developed by Ian Goodfellow et al.
Goal: learn to generate examples from the same distribution as your training set.

Basic Structure

GANs consist of a generator and a discriminator.

For each training iteration i
  For j = 1 to k   (several discriminator updates per generator update)
    Update the discriminator
  Update the generator
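
A minimal PyTorch-style sketch of this loop; the network sizes, learning rates, and the choice of k discriminator steps per generator step are illustrative assumptions, not part of these notes.

import torch
import torch.nn as nn

# Toy generator and discriminator (sizes are arbitrary illustrative choices)
latent_dim, data_dim, k = 64, 784, 1
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))  # outputs a logit
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.shape[0]
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Inner loop: update the discriminator k times per generator update
    for _ in range(k):
        fake = generator(torch.randn(n, latent_dim)).detach()  # no gradient to G here
        d_loss = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
    # Generator update: try to make the discriminator label fakes as real
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Call train_step once per minibatch of real training data.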

Variations

Conditional GAN

Paper
Feed the conditioning data y to both the generator and the discriminator.
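
A rough sketch of that conditioning, assuming y is a class label that is embedded and concatenated to both inputs; this embed-and-concatenate scheme is one common choice, not necessarily the paper's exact formulation.

import torch
import torch.nn as nn

latent_dim, data_dim, num_classes, embed_dim = 64, 784, 10, 32
label_embed = nn.Embedding(num_classes, embed_dim)  # embeds the condition y

generator = nn.Sequential(nn.Linear(latent_dim + embed_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim + embed_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

def generate(z, y):
    # y: integer class labels, shape (batch,)
    return generator(torch.cat([z, label_embed(y)], dim=1))

def discriminate(x, y):
    # The discriminator also sees the condition y
    return discriminator(torch.cat([x, label_embed(y)], dim=1))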

Wasserstein GAN

Paper
Medium post
The Wasserstein GAN (WGAN) loss, and in particular its WGAN-GP variant, improves the stability of training.
Normally, the discriminator is trained with a sigmoid cross-entropy loss.
WGAN instead minimizes an approximation of the Wasserstein distance, implemented by removing the sigmoid cross-entropy and clipping (clamping) the discriminator's weights to a range \(\displaystyle [-c, c]\).
However, weight clipping leads to other issues that limit the capacity of the critic.
Instead of clipping, WGAN-GP adds a gradient penalty to the critic's loss to enforce the 1-Lipschitz constraint.
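
A sketch of that gradient penalty term; the penalty weight of 10 is the commonly used value, and the critic network is assumed to be defined elsewhere.

import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sample points on straight lines between real and generated samples
    n = real.shape[0]
    eps = torch.rand(n, *([1] * (real.dim() - 1)), device=real.device)
    # detach so this works even if fake still carries the generator's graph
    interp = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    # Penalize deviation of the gradient norm from 1 (the 1-Lipschitz target)
    grad_norm = grads.reshape(n, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

# Critic loss sketch: critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)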

Applications

CycleGAN

InfoGAN

SinGAN

Paper
Website
GitHub official PyTorch implementation
SinGAN: Learning a Generative Model from a Single Natural Image

MoCoGAN

Paper
MoCoGAN: Decomposing Motion and Content for Video Generation

Video Prediction

  • Dual Motion GAN (Liang et al. 2017)
    • Have a frame generator and a motion generator
    • Combine the outputs of both generators using a fusing layer
    • Trained using a frame discriminator and a motion discriminator (each generator is trained with both discriminators; see the sketch below)
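
A very rough sketch of the dual-generator idea with a fusing layer; the single-convolution "generators" below are toy stand-ins, and all architectural details are assumptions for illustration rather than the paper's actual networks.

import torch
import torch.nn as nn

class DualMotionSketch(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # Frame generator: predicts the next frame directly
        self.frame_gen = nn.Conv2d(channels, channels, 3, padding=1)
        # Motion generator: predicts a per-pixel change applied to the last frame
        self.motion_gen = nn.Conv2d(channels, channels, 3, padding=1)
        # Fusing layer: combines the two candidate predictions
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, last_frame):
        frame_pred = self.frame_gen(last_frame)
        motion_pred = last_frame + self.motion_gen(last_frame)
        return self.fuse(torch.cat([frame_pred, motion_pred], dim=1))

# Training (not shown): a frame discriminator scores the fused frame, a motion
# discriminator scores the predicted motion, and each generator receives
# gradients from both discriminators.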

Video Compression

  • Compression via colorization (https://arxiv.org/pdf/1912.10653.pdf)
    • Colorize with a GAN; only transmit the luminance (Y of YUV). See the sketch below.
    • The paper claims a 72% BDBR reduction.
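
A toy sketch of the idea: transmit only the Y (luma) channel and let a GAN generator reconstruct the chroma. The BT.601 RGB-to-luma conversion is standard; the colorizer network here is a hypothetical stand-in for the paper's colorization model.

import torch
import torch.nn as nn

def rgb_to_luma(rgb):
    # BT.601 luma from an RGB image tensor of shape (batch, 3, H, W)
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Hypothetical colorization generator: predicts the two chroma channels from Y
colorizer = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))

def decode(y):
    # Receiver side: only Y was transmitted; chroma (U, V) is hallucinated by the GAN
    uv = colorizer(y)
    return torch.cat([y, uv], dim=1)  # reconstructed YUV image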

Resources

  • Tricks for Training GANs (https://github.com/soumith/ganhacks)