Unsupervised Learning

;Notes
* Also known as Wasserstein metric
==Dimension Reduction==
Goal: Reduce the dimension of a dataset.<br>
If each example <math>x \in \mathbb{R}^n</math>, we want to reduce each example to be in <math>\mathbb{R}^r</math> where <math>r < n</math>.
===PCA===
Principal Component Analysis<br>
Preprocessing: Subtract the sample mean from each example so that the new sample mean is 0.<br>
Goal: Find a unit vector <math>v_1</math> such that the projection <math>v_1 \cdot x</math> has maximum variance over the dataset.<br>
The principal components are the eigenvectors of <math>X^T X</math> (proportional to the sample covariance matrix after centering), ordered by decreasing eigenvalue.<br>
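The steps above can be sketched as follows; this is a minimal illustration with NumPy, assuming the rows of <math>X</math> are examples (the function name <code>pca</code> and the random test data are illustrative, not from the original):

```python
import numpy as np

def pca(X, r):
    # Preprocessing: subtract the sample mean so the new sample mean is 0.
    Xc = X - X.mean(axis=0)
    # Principal components are eigenvectors of X^T X
    # (proportional to the sample covariance of the centered data).
    eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc)
    # eigh returns eigenvalues in ascending order; keep the top-r eigenvectors.
    order = np.argsort(eigvals)[::-1][:r]
    V = eigvecs[:, order]   # n x r matrix of principal components
    return Xc @ V           # each example reduced to R^r

# Illustrative usage on random data: reduce from R^5 to R^2.
X = np.random.default_rng(0).normal(size=(100, 5))
Z = pca(X, 2)
```

The columns of <code>Z</code> are ordered so that the first coordinate has the largest variance, matching the goal stated above.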