Numerical Optimization

* Find a conjugate direction <math>p_k</math>
* Take the step size <math>\alpha_k</math> which minimizes <math>\phi(x)</math> along <math>p_k</math>
Practical CG method:<br>
<pre>
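# Sketch of the standard practical CG iteration (cf. Algorithm 5.2 in Nocedal & Wright);
# assumes A is symmetric positive definite and the residual is r_k = A x_k - b.
Given x_0:
  r_0 = A x_0 - b,  p_0 = -r_0,  k = 0
  while r_k != 0:
      alpha_k    = (r_k^T r_k) / (p_k^T A p_k)
      x_{k+1}    = x_k + alpha_k p_k
      r_{k+1}    = r_k + alpha_k A p_k
      beta_{k+1} = (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
      p_{k+1}    = -r_{k+1} + beta_{k+1} p_k
      k          = k + 1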
</pre>
===Theorems===
Given a set of conjugate directions <math>\{p_0,\dots,p_{n-1}\}</math> we can generate a sequence <math>\{x_k\}</math> with<br>
<math> x_{k+1}=x_k + \alpha_k p_k</math>, where <math>\alpha_k = -\frac{r_k^T p_k}{p_k^T A p_k}</math> is the step length that minimizes <math>\phi</math> along <math>p_k</math> and <math>r_k = A x_k - b</math> is the residual.<br>
; Theorem: For any starting point <math>x_0</math>, the sequence <math>\{x_k\}</math> converges to the solution <math>x^*</math> in at most <math>n</math> steps.
; Theorem: At step <math>k</math>, the residual <math>r_k</math> is orthogonal to <math>\{p_0,\dots,p_{k-1}\}</math>, and the current iterate <math>x_{k}</math>
: minimizes <math>\phi(x)</math> over the set <math>\{x_0 + \operatorname{span}(p_0,\dots,p_{k-1})\}</math>
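A minimal NumPy sketch of the practical CG iteration above, illustrating the finite-termination theorem on a small system (the test matrix, right-hand side, and the helper name <code>conjugate_gradient</code> are illustrative only; <math>A</math> is assumed symmetric positive definite):
<pre>
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Practical CG for A x = b with A symmetric positive definite."""
    x = x0.astype(float)
    r = A @ x - b          # residual r_k = A x_k - b
    p = -r                 # first direction is steepest descent
    k = 0
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # exact minimizer of phi along p
        x = x + alpha * p
        r_new = r + alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
        k += 1
    return x, k

# Small SPD test system: CG should terminate in at most n = 3 steps.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x, steps = conjugate_gradient(A, b, np.zeros(3))
print(steps, np.allclose(A @ x, b))   # expect steps <= 3 and True
</pre>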


==Resources==
* [https://link.springer.com/book/10.1007%2F978-0-387-40065-5 Numerical Optimization by Nocedal and Wright (2006)]