==Convergence Rates==
[https://en.wikipedia.org/wiki/Rate_of_convergence Wikipedia]<br>
A sequence <math>\{x_k\}</math> converges to <math>x^*</math> with order <math>q</math> and rate <math>L</math> if <math>\lim_{k \rightarrow \infty} \frac{|x_{k+1}-x^*|}{|x_{k}-x^*|^q}=L</math><br>
Iterative methods have the following convergence rates:
* If <math>q=1</math> and <math>L=1</math> we have sublinear convergence.
* If <math>q=1</math> and <math>L\in(0,1)</math> we have linear convergence.
* If <math>q=1</math> and <math>L=0</math> we have superlinear convergence.
* If <math>q=2</math> and <math>L < \infty</math> we have quadratic convergence.
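These rates can be estimated numerically by tracking the error ratios. A small illustrative sketch (the target <math>x^* = \sqrt{2}</math> and the two iterations below are hypothetical examples chosen for this demo, not from the article):

```python
import math

# Both sequences converge to x* = sqrt(2); we estimate L empirically
# from the error ratios |x_{k+1} - x*| / |x_k - x*|^q.
x_star = math.sqrt(2)

# Linear convergence (q = 1): fixed-point iteration x_{k+1} = x_k - 0.1 (x_k^2 - 2).
# Near x* the error contracts by a factor L = |1 - 0.2 x*| each step.
x, lin_errs = 3.0, []
for _ in range(40):
    x = x - 0.1 * (x * x - 2.0)
    lin_errs.append(abs(x - x_star))
lin_ratio = lin_errs[-1] / lin_errs[-2]        # estimates L for q = 1

# Quadratic convergence (q = 2): Newton's method on f(x) = x^2 - 2.
x, newt_errs = 3.0, []
for _ in range(4):
    x = 0.5 * (x + 2.0 / x)
    newt_errs.append(abs(x - x_star))
quad_ratio = newt_errs[3] / newt_errs[2] ** 2  # estimates L for q = 2

print(lin_ratio)   # ~0.717 = |1 - 0.2 sqrt(2)|
print(quad_ratio)  # ~0.354 = 1 / (2 sqrt(2))
```

Note that for the quadratic case the ratio is taken against the ''squared'' previous error, matching <math>q=2</math> in the definition.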
==Line Search Methods==
==Conjugate Gradient Methods==
The goal is to solve <math>Ax=b</math> or equivalently <math>\min \phi(x)</math> where <math>\phi(x)=(1/2)x^T A x - b^Tx</math>.<br>
The practical CG algorithm converges in at most <math>n</math> steps (in exact arithmetic).
===Definitions===
Vectors <math>\{p_i\}</math> are conjugate w.r.t. an SPD matrix <math>A</math> if <math>p_i^T A p_j = 0</math> for <math>i \neq j</math>.