Machine Learning

Then for any sample <math>S</math>, the corresponding matrix <math>\mathbf{K}</math> where <math>K_{ij} = K(\mathbf{x}^{(i)},\mathbf{x}^{(j)})</math> is symmetric positive semi-definite.
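This claim is easy to check numerically. Below is a minimal sketch: the quadratic feature map <math>\phi(x) = (1, x, x^2)^T</math> and the sample points are illustrative assumptions, not part of the article. It builds <math>\mathbf{K}</math> from the explicit feature map and verifies symmetry and non-negative eigenvalues.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative feature map (assumption): phi(x) = (1, x, x^2),
# so K(x, z) = phi(x)^T phi(z) is a valid kernel.
def phi(x):
    return np.array([1.0, x, x**2])

# An arbitrary sample S = {x^(1), ..., x^(n)} (also illustrative)
S = np.array([-1.5, 0.0, 0.7, 2.0])

# Gram matrix: K_ij = phi(x^(i))^T phi(x^(j))
K = np.array([[phi(xi) @ phi(xj) for xj in S] for xi in S])

print(np.allclose(K, K.T))                    # symmetric: True
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # all eigenvalues >= 0 (PSD): True
</syntaxhighlight>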
{{hidden | Proof |
Symmetry: <math>K_{ij} = K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) = \phi(\mathbf{x}^{(i)})^T\phi(\mathbf{x}^{(j)}) = \phi(\mathbf{x}^{(j)})^T\phi(\mathbf{x}^{(i)}) = K_{ji}</math>, since the inner product is a scalar and a scalar equals its own transpose.<br>
Positive Semi-definite:<br>
Let <math>\mathbf{v} \in \mathbb{R}^n</math>.<br>
<math>
\begin{aligned}
\mathbf{v}^T \mathbf{K} \mathbf{v}
&= \sum_i v_i [\mathbf{K}\mathbf{v}]_i\\
&= \sum_i \sum_j v_i K_{ij} v_j\\
&= \sum_i \sum_j v_i \phi(\mathbf{x}^{(i)})^T \phi(\mathbf{x}^{(j)}) v_j\\
&= \sum_i \sum_j v_i \sum_k \phi_k(\mathbf{x}^{(i)}) \phi_k(\mathbf{x}^{(j)}) v_j\\
&= \sum_k \sum_i \sum_j v_i \phi_k(\mathbf{x}^{(i)}) \phi_k(\mathbf{x}^{(j)}) v_j\\
&= \sum_k \sum_i v_i \phi_k(\mathbf{x}^{(i)}) \sum_j \phi_k(\mathbf{x}^{(j)}) v_j\\
&= \sum_k \left( \sum_i v_i \phi_k(\mathbf{x}^{(i)}) \right)^2\\
&\geq 0
\end{aligned}