so the hessian is positive semi-definite
}}

===Cross Entropy===

The cross entropy loss is
* <math>J(\theta) = -\sum [(y^{(i)})\log(h_\theta(x^{(i)})) + (1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>

;Notes
* If our model is <math>g(\theta^T x^{(i)})</math>, where <math>g(z)</math> is the sigmoid function <math>\frac{e^z}{1+e^z}</math>, then this loss is convex in <math>\theta</math>, as the sketch and proof below show.
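
A minimal NumPy sketch of this loss under the sigmoid model (the names <code>sigmoid</code> and <code>cross_entropy</code>, the array shapes, and the <code>eps</code> guard are illustrative assumptions, not from this page):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    # g(z) = e^z / (1 + e^z), computed as 1 / (1 + e^{-z}) for numerical stability
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(theta, X, y, eps=1e-12):
    # J(theta) = -sum_i [ y_i log g(theta^T x_i) + (1 - y_i) log(1 - g(theta^T x_i)) ]
    # X: (n, d) design matrix, y: (n,) labels in {0, 1}, theta: (d,) parameters
    h = sigmoid(X @ theta)
    return -np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
</syntaxhighlight>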

{{hidden | Proof |
<math>
\begin{aligned}
\nabla_\theta J(\theta) &= -\nabla_\theta \sum [(y^{(i)})\log(g(\theta^T x^{(i)})) + (1-y^{(i)})\log(1-g(\theta^T x^{(i)}))]\\
&= -\sum [(y^{(i)})\frac{g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))}{g(\theta^T x^{(i)})}x^{(i)} + (1-y^{(i)})\frac{-g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))}{1-g(\theta^T x^{(i)})}x^{(i)}]\\
&= -\sum [(y^{(i)})(1-g(\theta^T x^{(i)}))x^{(i)} - (1-y^{(i)})g(\theta^T x^{(i)})x^{(i)}]\\
&= -\sum [(y^{(i)})x^{(i)} - (y^{(i)})g(\theta^T x^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)} + y^{(i)}g(\theta^T x^{(i)})x^{(i)}]\\
&= -\sum [(y^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)}]\\
\implies \nabla^2_\theta J(\theta) &= \nabla_{\theta}\left(-\sum [(y^{(i)})x^{(i)} - g(\theta^T x^{(i)})x^{(i)}]\right)\\
&= \sum [g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)})) x^{(i)} (x^{(i)})^T]
\end{aligned}
</math><br>
Each term is a nonnegative scalar <math>g(\theta^T x^{(i)})(1-g(\theta^T x^{(i)}))</math> times the outer product <math>x^{(i)} (x^{(i)})^T</math>, so the Hessian is a PSD matrix.
}}
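
As a sanity check on this derivation (an illustrative sketch, not part of the original proof), the closed-form gradient and Hessian can be compared against finite differences, and the Hessian's smallest eigenvalue checked for nonnegativity:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))             # rows are x^{(i)}
y = rng.integers(0, 2, size=n)          # labels in {0, 1}
theta = rng.normal(size=d)

g = 1.0 / (1.0 + np.exp(-(X @ theta)))  # g(theta^T x^{(i)}) for each i

grad = -X.T @ (y - g)                       # -sum[(y^{(i)} - g) x^{(i)}]
hess = (X * (g * (1 - g))[:, None]).T @ X   # sum[g (1 - g) x^{(i)} (x^{(i)})^T]

def J(t):
    p = 1.0 / (1.0 + np.exp(-(X @ t)))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Central finite differences should match the closed-form gradient
eps = 1e-6
fd_grad = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                    for e in np.eye(d)])
print(np.allclose(grad, fd_grad, atol=1e-4))     # True: gradient matches
print(np.linalg.eigvalsh(hess).min() >= -1e-10)  # True: Hessian is PSD
</syntaxhighlight>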

===Hinge Loss===

==Optimization==