Basics of Unsupervised Learning
Given points $x_1, \ldots, x_n$ we wish to group them into clusters $1, \ldots, k$.
Usually we are given the number of clusters, k.
The problem is to assign labels $c_1, \ldots, c_n$ where $c_i \in \{1, \ldots, k\}$.
Here we wish to assign the points to clusters so as to minimize the distance from each point to the mean of its cluster.
In our formulation we have k clusters.
The mean of each cluster is called the centroid.
Let $\mu_1, \ldots, \mu_k$ denote the centroids and let $c_1, \ldots, c_n$ denote the cluster labels for our data.
Our loss function is $L(\mu, c) = \sum_{i=1}^{n} \| x_i - \mu_{c_i} \|^2$.
- We minimize this loss using block coordinate descent, alternating between the labels and the centroids.
If we fix the centroids $\mu$ then we only need to minimize $\sum_{i=1}^{n} \| x_i - \mu_{c_i} \|^2$ with respect to the labels $c$.
Since each term of the sum is independent, we simply choose the closest centroid for each point: $c_i = \arg\min_j \| x_i - \mu_j \|^2$.
If we fix the labels $c$ then we need to minimize the loss with respect to the centroids $\mu$.
Taking the gradient with respect to $\mu_j$ and setting it to 0 we get $\sum_{i : c_i = j} 2(\mu_j - x_i) = 0$, so $\mu_j = \frac{1}{|\{i : c_i = j\}|} \sum_{i : c_i = j} x_i$, the mean of the points currently assigned to cluster $j$.
This procedure yields a sequence of parameters $(\mu^{(t)}, c^{(t)})$ and losses $L^{(t)} = L(\mu^{(t)}, c^{(t)})$.
- Since the loss is monotone non-increasing and bounded below, it converges by the monotone convergence theorem.
- However, this does not imply that the parameters $\mu$ and $c$ will converge.
- Randomly initialize the labels $c_1, \ldots, c_n$.
- Then calculate the centroids $\mu_1, \ldots, \mu_k$.
- Then update the labels for each example to the closest centroid.
- Update the centroids by taking the mean of every point in the cluster.
- Repeat the label update and centroid update until the assignments stop changing (a sketch of this loop follows below).
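The procedure above translates directly into a few lines of NumPy. This is a minimal sketch; the function name, iteration cap, and empty-cluster handling are my own choices, not from the notes:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Alternate the label update and the centroid update until nothing changes."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labels = rng.integers(0, k, size=n)          # random initial labels
    for _ in range(n_iters):
        # Centroid update: mean of the points assigned to each cluster
        # (re-seed a centroid from a random point if its cluster is empty).
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else X[rng.integers(n)]
            for j in range(k)
        ])
        # Label update: assign each point to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # the loss can no longer decrease
            break
        labels = new_labels
    return centroids, labels
```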
We will develop a model for how our data is generated.
We will then find, for each point, the probability that it came from each cluster (à la the Bayesian paradigm).
Given $k$ clusters, the prior probability of a point coming from cluster $j$ is $p(z = j) = \phi_j$, with $\sum_{j=1}^{k} \phi_j = 1$.
We will assume the points in each cluster are drawn from a normal distribution: $x \mid z = j \sim \mathcal{N}(\mu_j, \Sigma_j)$.
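To make the generative story concrete, here is a small sketch that samples from such a mixture; the specific weights, means, and covariances are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-component Gaussian mixture in R^2.
phi = np.array([0.3, 0.7])                      # mixing weights, sum to 1
mus = np.array([[0.0, 0.0], [3.0, 3.0]])        # component means
covs = np.array([np.eye(2), 0.5 * np.eye(2)])   # component covariances

m = 500                                          # number of samples
z = rng.choice(len(phi), size=m, p=phi)          # latent cluster labels drawn from phi
X = np.array([rng.multivariate_normal(mus[j], covs[j]) for j in z])  # x | z = j
```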
The key idea is to introduce intermediate coefficients $Q_i(z_i)$ so that we can apply Jensen's inequality.
Let $\theta$ denote our parameters (for the mixture model, $\theta = (\phi, \mu, \Sigma)$).
The likelihood is $L(\theta) = \prod_{i=1}^{n} p(x_i; \theta)$, where $p$ is our pdf/pmf.
Then the log-likelihood is $\ell(\theta) = \sum_{i=1}^{n} \log p(x_i; \theta) = \sum_{i=1}^{n} \log \sum_{z_i} p(x_i, z_i; \theta)$, where $z_i$ is the latent cluster of $x_i$.
By introducing an extra variable $Q_i(z_i)$ inside the sum, we'll be able to apply Jensen's inequality to the concave log function.
Assume $Q_i$ is a probability mass function over the latent variable $z_i$. Then

$$\ell(\theta) = \sum_{i} \log \sum_{z_i} Q_i(z_i) \frac{p(x_i, z_i; \theta)}{Q_i(z_i)} \ge \sum_{i} \sum_{z_i} Q_i(z_i) \log \frac{p(x_i, z_i; \theta)}{Q_i(z_i)} =: J(Q, \theta).$$
The EM algorithm is an iterative algorithm which alternates between an E-Step and an M-Step.
In the E-step, we fix $\theta$ and maximize $J$ with respect to $Q$.
Jensen's inequality holds with equality iff the function is linear or the random variable is degenerate (constant).
We will assume the ratio $\frac{p(x_i, z_i; \theta)}{Q_i(z_i)}$ is a constant (with respect to $z_i$).
This implies $Q_i(z_i) \propto p(x_i, z_i; \theta)$.
Since $Q_i$ is a pmf, we have $\sum_{z_i} Q_i(z_i) = 1$, which fixes the constant of proportionality.
$Q_i$ is updated with $Q_i(z_i) = \frac{p(x_i, z_i; \theta)}{\sum_{z} p(x_i, z; \theta)} = p(z_i \mid x_i; \theta)$.
Maximization w.r.t. $Q$
We are assuming that $Q_i$ is a pmf, so $\sum_{z_i} Q_i(z_i) = 1$.
Our Lagrangian is:

$$\mathcal{L}(Q, \beta) = \sum_{i} \sum_{z_i} Q_i(z_i) \log \frac{p(x_i, z_i; \theta)}{Q_i(z_i)} + \sum_{i} \beta_i \left( \sum_{z_i} Q_i(z_i) - 1 \right).$$
We can maximize each $Q_i$ independently.
Taking the derivative with respect to $Q_i(z_i)$ and setting it to 0 we get $\log \frac{p(x_i, z_i; \theta)}{Q_i(z_i)} - 1 + \beta_i = 0$, so $Q_i(z_i) = p(x_i, z_i; \theta)\, e^{\beta_i - 1}$.
Since $Q_i$ is a pmf, we know it sums to 1, so we get the same result as before: the constant $e^{\beta_i - 1}$ is replaced with $\frac{1}{\sum_{z} p(x_i, z; \theta)}$, i.e. $Q_i(z_i) = p(z_i \mid x_i; \theta)$.
In the M-step, we fix $Q$ and maximize $J$ with respect to $\theta$.
Assume the Gaussian mixture model above for simplicity, so $\theta = (\phi, \{\mu_j\}, \{\Sigma_j\})$.
Maximizing with respect to $\mu_j$, we get $\mu_j = \frac{\sum_{i} Q_i(j)\, x_i}{\sum_{i} Q_i(j)}$, where $Q_i(j)$ denotes $Q_i(z_i = j)$.
Maximizing with respect to $\Sigma_j$ we get $\Sigma_j = \frac{\sum_{i} Q_i(j)\, (x_i - \mu_j)(x_i - \mu_j)^\top}{\sum_{i} Q_i(j)}$.
Maximization w.r.t. $\phi$
We want to maximize $\sum_{i} \sum_{j} Q_i(j) \log \phi_j$ subject to the condition $\sum_{j=1}^{k} \phi_j = 1$, since $\phi$ is a pmf.
The Lagrangian for this is $\mathcal{L}(\phi, \beta) = \sum_{i} \sum_{j} Q_i(j) \log \phi_j + \beta \left( \sum_{j} \phi_j - 1 \right)$.
This can be rewritten as $\mathcal{L}(\phi, \beta) = \sum_{j} \left( \sum_{i} Q_i(j) \right) \log \phi_j + \beta \left( \sum_{j} \phi_j - 1 \right)$.
The dual of this problem is $\min_{\beta} g(\beta)$, where $g(\beta) = \max_{\phi} \mathcal{L}(\phi, \beta)$.
Taking the gradient w.r.t. $\phi_j$, we get $\frac{\sum_{i} Q_i(j)}{\phi_j} + \beta = 0$, so $\phi_j = -\frac{\sum_{i} Q_i(j)}{\beta}$.
Plugging this into our dual problem we get: $g(\beta) = \sum_{j} \left( \sum_{i} Q_i(j) \right) \log \left( -\frac{\sum_{i} Q_i(j)}{\beta} \right) - n - \beta$ (using $\sum_{j} Q_i(j) = 1$, so $\sum_{i} \sum_{j} Q_i(j) = n$).
Taking the derivative with respect to $\beta$, we get: $-\frac{n}{\beta} - 1 = 0$, so $\beta = -n$.
Plugging $\beta = -n$ into our equation for $\phi_j$ we get $\phi_j = \frac{1}{n} \sum_{i} Q_i(j)$.
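Collecting the E-step responsibilities and the closed-form M-step updates above, EM for a Gaussian mixture can be sketched as follows. The function name, initialization, and fixed iteration count are my own choices; SciPy is used only for the Gaussian density:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_em(X, k, n_iters=100, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # Initialize theta = (phi, mu, Sigma).
    phi = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, size=k, replace=False)]
    Sigma = np.array([np.cov(X, rowvar=False) + 1e-6 * np.eye(d) for _ in range(k)])

    for _ in range(n_iters):
        # E-step: Q_i(j) = p(z_i = j | x_i; theta), the responsibilities.
        Q = np.column_stack([
            phi[j] * multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j])
            for j in range(k)
        ])
        Q /= Q.sum(axis=1, keepdims=True)

        # M-step: the closed-form maximizers derived above.
        Nj = Q.sum(axis=0)                    # effective cluster sizes, sum_i Q_i(j)
        phi = Nj / n                          # phi_j = (1/n) sum_i Q_i(j)
        mu = (Q.T @ X) / Nj[:, None]          # weighted means
        for j in range(k):
            diff = X - mu[j]
            Sigma[j] = (Q[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)
    return phi, mu, Sigma, Q
```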
Goal: Generate realistic but fake samples.
Applications: Denoising, inpainting
- The KL divergence $\mathrm{KL}(P \,\|\, Q) = \mathbb{E}_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right]$ is always $\geq 0$
- KL is not symmetric: in general $\mathrm{KL}(P \,\|\, Q) \neq \mathrm{KL}(Q \,\|\, P)$
- Jensen-Shannon Divergence: $\mathrm{JS}(P, Q) = \frac{1}{2}\mathrm{KL}(P \,\|\, M) + \frac{1}{2}\mathrm{KL}(Q \,\|\, M)$ with $M = \frac{1}{2}(P + Q)$
- This is symmetric (a small numerical check is sketched below)
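A small numerical check of these properties on discrete distributions; the example distributions are arbitrary:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions with matching support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """Jensen-Shannon divergence: average KL of p and q against their mixture."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.4, 0.1])
q = np.array([0.1, 0.6, 0.3])
print(kl(p, q), kl(q, p))   # both >= 0, but not equal (KL is not symmetric)
print(js(p, q), js(q, p))   # equal (JS is symmetric)
```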
Our model for how the data is generated is as follows:
- Generate latent variables $z_i$ i.i.d., where the latent dimension $r$ is less than the data dimension $n$.
- We assume $z_i \sim \mathcal{N}(0, I)$.
- Generate $x_i \sim \mathcal{N}(g_\theta(z_i), \sigma^2 I)$,
- for some function $g_\theta$ parameterized by $\theta$ (e.g. a neural network).
The variational bound is: $\log p(x; \theta) \ge \mathbb{E}_{z \sim Q}[\log p(x \mid z; \theta)] - \mathrm{KL}(Q(z) \,\|\, p(z))$.
We know from Bayes' rule that $p(z \mid x) = \frac{p(x \mid z)\, p(z)}{p(x)}$.
Plugging this into the expression for $\mathrm{KL}(Q(z) \,\|\, p(z \mid x))$ yields our inequality.
Rearranging terms we get: $\log p(x) - \mathrm{KL}(Q(z) \,\|\, p(z \mid x)) = \mathbb{E}_{z \sim Q}[\log p(x \mid z)] - \mathrm{KL}(Q(z) \,\|\, p(z))$.
Since the KL divergence is greater than or equal to 0, our variational bound follows.
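As a sanity check of the bound, here is a toy 1-D example where everything is Gaussian, so $\log p(x)$ can be computed exactly; the model, the observed point, and the variational parameters are all made up for illustration:

```python
import numpy as np

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1), so marginally x ~ N(0, 2).
x = 1.3                      # an observed point
m, s = 0.4, 0.8              # an arbitrary variational distribution Q(z) = N(m, s^2)

log_px = -0.5 * np.log(2 * np.pi * 2.0) - x**2 / (2 * 2.0)       # exact log p(x)
recon = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s**2)   # E_Q[log p(x | z)]
kl_term = 0.5 * (s**2 + m**2 - 1.0 - np.log(s**2))               # KL(Q(z) || p(z))

print(recon - kl_term, "<=", log_px)   # the variational bound holds
```

Choosing $Q(z) = p(z \mid x)$ (here $\mathcal{N}(x/2, 1/2)$) makes the KL term in the rearranged identity vanish, so the bound becomes tight.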
The main idea is to ensure that the discriminator is Lipschitz continuous, i.e. to limit the Lipschitz constant (the size of its derivative).
If the correct answer is 1.0 and the generator produces 1.0001, we don't want the discriminator to give us a very high loss.
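One common way to enforce this, used in the original WGAN, is to clip the critic's weights into a small box after every optimizer step (a gradient penalty is a popular alternative). A minimal PyTorch sketch; the critic architecture and clipping range are illustrative:

```python
import torch
import torch.nn as nn

# Illustrative critic: in a WGAN it outputs an unbounded score, not a probability.
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
clip_value = 0.01  # keeping every weight in [-c, c] bounds the Lipschitz constant

def clip_critic_weights():
    # Called after each optimizer step on the critic.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip_value, clip_value)

clip_critic_weights()
```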
Earth mover's distance
The minimum cost of converting one pile of dirt to another.
where the cost of a move is the amount of dirt moved times the distance it is moved.
Given a set $P$ with $m$ weighted clusters $\{(p_i, w_{p_i})\}$ and a set $Q$ with $n$ weighted clusters $\{(q_j, w_{q_j})\}$, the EMD is the minimum cost of a flow $f_{ij} \ge 0$ from the clusters of one set to the clusters of the other: $\mathrm{EMD}(P, Q) = \min_{f} \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}\, d(p_i, q_j)}{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}}$, subject to the flow respecting the cluster weights.
- Also known as Wasserstein metric
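In the special case of two 1-D samples of equal size with uniform weights, the optimal flow simply matches sorted values, which gives a tiny sketch (this does not handle the general weighted-cluster case above):

```python
import numpy as np

def emd_1d(xs, ys):
    """EMD between two equal-size 1-D samples with uniform weights: the optimal
    transport matches the i-th smallest of xs to the i-th smallest of ys."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys)))   # amount moved (1/n each) * distance

print(emd_1d([0.0, 1.0, 2.0], [0.5, 1.5, 4.0]))  # 1.0
```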
Goal: Reduce the dimension of a dataset.
If each example $x_i \in \mathbb{R}^n$, we want to reduce each example to lie in $\mathbb{R}^r$ where $r < n$.
Principal Component Analysis
Preprocessing: Subtract the sample mean from each example so that the new sample mean is 0.
Goal: Find a unit vector $u$ such that the projection $u^\top x$ has maximum variance.
Result: These principal components are the eigenvectors of the sample covariance matrix $\Sigma = \frac{1}{m} \sum_{i=1}^{m} x_i x_i^\top$ (for $m$ centered examples), taken in order of decreasing eigenvalue.
Idea: Maximize the variance of the projections.
Note that $\mathrm{Var}(u^\top x) = \frac{1}{m} \sum_{i=1}^{m} (u^\top x_i)^2 = u^\top \Sigma u$, which is maximized over unit vectors $u$ by the top eigenvector of $\Sigma$.
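A direct NumPy sketch of this recipe: center the data, form the sample covariance, and project onto the top-$r$ eigenvectors (the function name and the choice of $r$ are illustrative):

```python
import numpy as np

def pca(X, r):
    """Project m examples in R^n down to R^r using the top-r principal components."""
    Xc = X - X.mean(axis=0)                  # preprocessing: make the sample mean 0
    cov = Xc.T @ Xc / Xc.shape[0]            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: symmetric matrix, eigenvalues ascending
    U = eigvecs[:, ::-1][:, :r]              # top-r eigenvectors = principal components
    return Xc @ U                            # projections with maximum variance
```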
You have an encoder and a decoder, which are both neural networks; they are trained so that the decoder's output is a close reconstruction of the encoder's input.
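A minimal sketch of that encoder/decoder pair in PyTorch; the layer sizes, input dimension, and bottleneck size are illustrative:

```python
import torch
import torch.nn as nn

n, r = 784, 32  # illustrative: data dimension n, bottleneck dimension r < n

encoder = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, r))
decoder = nn.Sequential(nn.Linear(r, 128), nn.ReLU(), nn.Linear(128, n))

def reconstruction_loss(x):
    # Both networks are trained to make decoder(encoder(x)) close to x.
    return ((decoder(encoder(x)) - x) ** 2).mean()
```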