==Variational Autoencoders (VAE)==
Deep generative models: Given training data <math>\{x_i\}_{i=1}^{n}</math>, the goal is to ''generate'' realistic fake samples. Given a good generative model, we can do denoising, inpainting, domain transfer, etc.

Probabilistic model: Suppose our dataset is <math>\{x_i\}_{i=1}^{n}</math> with <math>x_i \in \mathbb{R}^d</math>.
# Generate latent variables <math>z_1,...,z_n \in \mathbb{R}^r</math> where <math>r \ll d</math>, typically drawn i.i.d. from a standard Gaussian prior <math>z_i \sim N(0, I_r)</math>.
# Assume <math>X \mid Z = z_i \sim N \left( g_\theta(z_i), \sigma^2 I \right)</math>, where <math>g_\theta</math> is a decoder network with parameters <math>\theta</math>. A minimal sampling sketch is given below.
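The generative story above can be turned into a short sampling sketch. This is a minimal illustration, not code from the article: it assumes the decoder <math>g_\theta</math> is a small PyTorch MLP, and the values <math>d = 784</math>, <math>r = 20</math> and <math>\sigma = 0.1</math> are arbitrary example choices.

<pre>
# Minimal sketch of the VAE generative (decoder) model described above.
# Assumed for illustration only: g_theta is a small MLP, d = 784, r = 20, sigma = 0.1.
import torch
import torch.nn as nn

d, r, sigma = 784, 20, 0.1   # data dim, latent dim, observation noise level

# g_theta: maps a latent code z in R^r to the mean of the Gaussian over x in R^d
g_theta = nn.Sequential(
    nn.Linear(r, 256),
    nn.ReLU(),
    nn.Linear(256, d),
)

def generate(n_samples):
    """Draw fake samples: z ~ N(0, I_r), then x | z ~ N(g_theta(z), sigma^2 I_d)."""
    with torch.no_grad():
        z = torch.randn(n_samples, r)              # latent variables z_i
        mean = g_theta(z)                          # g_theta(z_i)
        x = mean + sigma * torch.randn_like(mean)  # add isotropic Gaussian noise
    return x

fake = generate(16)   # 16 generated samples, each in R^784
print(fake.shape)     # torch.Size([16, 784])
</pre>

Training <math>\theta</math> is not shown here; the sketch only illustrates how new samples are drawn once the decoder is fixed.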


==Misc==