;Notes
* If our model is <math>g(\theta^T x^{(i)})</math> where <math>g(x)</math> is the sigmoid function <math>\frac{e^x}{1+e^x}</math>, then this is convex
{{hidden | Proof |
We show that the Hessian is positive semi-definite.<br>
<math>
\nabla_\theta J(\theta) = -\nabla_\theta \sum [(y^{(i)})\log(g(\theta^T x^{(i)})) + (1-y^{(i)})\log(1-g(\theta^T x^{(i)}))]
</math><br>
<math>
= -\sum [(y^{(i)})\frac{g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))}{g(\theta^T x^{(i)})}x^{(i)} + (1-y^{(i)})\frac{-g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))}{1-g(\theta^T x^{(i)})}x^{(i)}]
</math><br>
<math>
= -\sum [(y^{(i)})(1-g(\theta^T x^{(i)}))x^{(i)} - (1-y^{(i)})g(\theta^T x^{(i)})x^{(i)}]
</math><br>
<math>
= -\sum [(y^{(i)})x^{(i)} - (y^{(i)})g(\theta^T x^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)} + y^{(i)}g(\theta^T x^{(i)})x^{(i)}]
</math><br>
<math>
= -\sum [(y^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)}]
</math><br>
<math>
\implies \nabla^2_\theta J(\theta) = -\nabla_\theta \sum [(y^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)}]
</math><br>
<math>
= \sum_i g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)})) x^{(i)} (x^{(i)})^T
</math><br>
which is a PSD matrix, since each weight <math>g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))</math> is non-negative and each outer product <math>x^{(i)} (x^{(i)})^T</math> is PSD.
}}
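The conclusion above can be sanity-checked numerically. The following is a minimal NumPy sketch, not part of the original notes: the toy data sizes, seed, and tolerance are illustrative assumptions. It builds the Hessian <math>\sum_i g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)})) x^{(i)} (x^{(i)})^T</math> directly and confirms its eigenvalues are non-negative.
<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy problem (illustrative assumption): 100 examples, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # row i is x^(i)
theta = rng.normal(size=3)

# Hessian from the proof: sum_i g(1 - g) x^(i) (x^(i))^T,
# where g = g(theta^T x^(i)).
p = sigmoid(X @ theta)                   # g(theta^T x^(i)) for every i
H = (X * (p * (1 - p))[:, None]).T @ X   # weighted sum of outer products

# PSD check: smallest eigenvalue of the symmetric Hessian is >= 0
# (up to floating-point error).
print(np.linalg.eigvalsh(H).min() >= -1e-10)   # True
</syntaxhighlight>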
Positive semi-definite:<br>
Let <math>\mathbf{v} \in \mathbb{R}^n</math>.<br>
Then <br>
<math>
\mathbf{v}^T \mathbf{K} \mathbf{v} = \mathbf{v}^T [\sum_j K_{ij}v_j]
= \sum_i \sum_j v_{i}K_{ij}v_{j}
= \sum_i \sum_j v_{i}\phi(\mathbf{x}^{(i)})^T\phi(\mathbf{x}^{(j)})v_{j}
= \sum_i \sum_j v_{i} \sum_k \phi_k(\mathbf{x}^{(i)}) \phi_k(\mathbf{x}^{(j)})v_{j}
= \sum_k \sum_i \sum_j v_{i} \phi_k(\mathbf{x}^{(i)}) \phi_k(\mathbf{x}^{(j)})v_{j}
= \sum_k \sum_i v_{i} \phi_k(\mathbf{x}^{(i)}) \sum_j \phi_k(\mathbf{x}^{(j)})v_{j}
= \sum_k \left(\sum_i v_{i} \phi_k(\mathbf{x}^{(i)})\right)^2
\geq 0
</math>
}}
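The same argument can be checked numerically. A minimal sketch, assuming an arbitrary illustrative feature map <math>\phi</math> (here a vector concatenated with its element-wise square, chosen only for demonstration): build <math>K_{ij} = \phi(\mathbf{x}^{(i)})^T\phi(\mathbf{x}^{(j)})</math> and verify <math>\mathbf{v}^T \mathbf{K} \mathbf{v} \geq 0</math> for random <math>\mathbf{v}</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))    # 20 points x^(i) in R^4 (illustrative)

def phi(x):
    # Illustrative feature map (an assumption; any phi works):
    # x concatenated with its element-wise square.
    return np.concatenate([x, x ** 2])

Phi = np.array([phi(x) for x in X])   # row i is phi(x^(i))
K = Phi @ Phi.T                       # K_ij = phi(x^(i))^T phi(x^(j))

# v^T K v >= 0 for any v, up to floating-point error.
for _ in range(1000):
    v = rng.normal(size=20)
    assert v @ K @ v >= -1e-9
print("all quadratic forms non-negative")
</syntaxhighlight>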
====Common Kernels====
; Polynomial Kernel
* See [[wikipedia:Polynomial kernel]]
* <math>K(x,z) = (c+x^Tz)^d</math>
* For <math>d=2</math> and <math>c=1</math> (verified numerically in the sketch after this list)
** we have <math>(1+x^Tz)^2 = 1 + 2(x^Tz) + (x^Tz)^2</math>
** <math>= 1 + 2 \sum x_i z_i + (\sum x_i z_i)(\sum x_j z_j)</math>
** <math>= 1 + 2 \sum x_i z_i + 2\sum_{i < j} (x_i x_j)(z_i z_j) + \sum x_i^2 z_i^2</math>
** <math>= 1 + \sum (\sqrt{2}x_i)(\sqrt{2}z_i) + \sum_{i < j} (\sqrt{2} x_i x_j)(\sqrt{2} z_i z_j) + \sum x_i^2 z_i^2</math>
* The dimension of the feature map associated with this kernel grows exponentially in <math>d</math>
** There are <math>1+n+\binom{n}{2} + ... + \binom{n}{d} = O(n^d)</math> terms
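A minimal sketch of the <math>d=2</math> case above (the dimension <math>n=5</math> and the random inputs are illustrative assumptions): it compares <math>(1+x^Tz)^2</math> against the inner product of the explicit feature maps <math>\phi(x) = (1,\ \sqrt{2}x_i,\ \sqrt{2}x_i x_j\ (i<j),\ x_i^2)</math> read off from the expansion.
<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

def poly_kernel(x, z, c=1.0, d=2):
    return (c + x @ z) ** d

def phi(x):
    # Explicit feature map for (1 + x^T z)^2 from the expansion above:
    # (1, sqrt(2) x_i, sqrt(2) x_i x_j for i < j, x_i^2)
    cross = [np.sqrt(2) * x[i] * x[j]
             for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([[1.0], np.sqrt(2) * x, cross, x ** 2])

rng = np.random.default_rng(0)
x, z = rng.normal(size=5), rng.normal(size=5)
print(np.isclose(poly_kernel(x, z), phi(x) @ phi(z)))   # True
</syntaxhighlight>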
==Learning Theory==
</math>
: for some constants <math>K_1, K_2</math>
====Growth Function====
The growth function is the maximum number of ways <math>m</math> examples can be labelled using hypotheses from <math>\mathcal{H}</math>
* <math>\tau_{\mathcal{H}}(m) = \max_{|C| = m} |\mathcal{H}_C|</math>
;Notes
* If <math>m \leq VCdim(\mathcal{H})</math>, then <math>\tau_{\mathcal{H}}(m) = 2^m</math>
====Sauer's Lemma====
[https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf Reference]<br>
Beyond the VC dimension, the growth function grows only polynomially (a numeric check of the bounds follows the list below):
* If <math>VCdim(\mathcal{H}) \leq d < \infty</math> then <math>\tau_{\mathcal{H}}(m) \leq \sum_{i=0}^{d} \binom{m}{i}</math>
* Also, if <math>m > d+1</math> then <math>\tau_{\mathcal{H}}(m) \leq \left(\frac{em}{d}\right)^d</math>.
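A small numeric illustration of these bounds, as a minimal sketch (the VC dimension <math>d=5</math> and the sample sizes <math>m</math> are arbitrary choices for demonstration): for each <math>m</math> it compares <math>2^m</math>, the Sauer bound <math>\sum_{i=0}^{d}\binom{m}{i}</math>, and the polynomial bound <math>(em/d)^d</math>.
<syntaxhighlight lang="python">
from math import comb, e

d = 5   # assumed VC dimension (illustrative)
for m in [10, 20, 50, 100]:
    sauer = sum(comb(m, i) for i in range(d + 1))   # sum_{i<=d} C(m, i)
    poly = (e * m / d) ** d                         # (em/d)^d, for m > d+1
    print(f"m={m}: 2^m={2**m}, Sauer bound={sauer}, (em/d)^d={poly:.0f}")
    assert sauer <= 2 ** m and sauer <= poly
</syntaxhighlight>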
===Bias-Variance Tradeoff===