Preconditioned conjugate gradient method

{{Merge|Preconditioner|date=May 2007}}
 
The [[conjugate gradient method]] is a [[numerical analysis|numerical algorithm]] that solves a [[system of linear equations]]
 
:<math>Ax = b,\,</math>
 
where <math>A</math> is symmetric [[positive definite]]. If the [[matrix]] <math>A</math> is [[ill-conditioned]], i.e. it has a large [[condition number]] <math>\kappa(A)</math>, it is often useful to choose a [[preconditioner|preconditioning matrix]] <math>P^{-1}</math> such that <math>P^{-1} \approx A^{-1}</math>, and to solve the preconditioned system
 
:<math> P^{-1}Ax = P^{-1}b,\,</math>
 
instead.
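In practice the product <math>P^{-1}A</math> is not formed explicitly (it is generally not symmetric even when <math>A</math> and <math>P</math> are); instead, the preconditioner is applied to the residual inside each iteration. A minimal sketch of the resulting iteration, here in Python with NumPy (the names <code>preconditioned_cg</code> and <code>apply_Pinv</code> are illustrative and do not refer to any particular library), is:

<syntaxhighlight lang="python">
import numpy as np

def preconditioned_cg(A, b, apply_Pinv, x0=None, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.

    apply_Pinv(r) should return P^{-1} r, i.e. a cheap approximation of A^{-1} r.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                 # initial residual
    z = apply_Pinv(r)             # preconditioned residual
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Pinv(r)
        rz_new = r @ z
        beta = rz_new / rz        # keeps search directions A-conjugate
        p = z + beta * p
        rz = rz_new
    return x
</syntaxhighlight>

With <code>apply_Pinv = lambda r: r</code> (no preconditioning) this reduces to the ordinary conjugate gradient iteration.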
 
The simplest preconditioner is a diagonal matrix containing just the diagonal elements of <math>A</math>. This is known as Jacobi preconditioning or diagonal scaling. Since diagonal matrices are trivial to invert and cheap to store, a diagonal preconditioner is a good starting point. More sophisticated choices must trade off the reduction in <math>\kappa(A)</math>, and hence faster convergence, against the time spent computing <math>P^{-1}</math>.
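As a rough illustration of the effect of diagonal scaling on the condition number, the following sketch builds a synthetic ill-conditioned symmetric positive-definite matrix (the construction is purely illustrative) and compares <math>\kappa(A)</math> with the condition number of the symmetrically scaled matrix <math>D^{-1/2} A D^{-1/2}</math>, which has the same eigenvalues as <math>P^{-1}A</math> when <math>P = D = \operatorname{diag}(A)</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SPD test matrix with a widely varying diagonal (illustrative only).
Q = rng.standard_normal((100, 100))
A = Q @ Q.T + np.diag(np.logspace(0, 6, 100))

# Jacobi (diagonal) preconditioner: P = D = diag(A).
d = np.diag(A)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_scaled = D_inv_sqrt @ A @ D_inv_sqrt  # same eigenvalues as P^{-1} A

print("condition number of A:            ", np.linalg.cond(A))
print("condition number after Jacobi scaling:", np.linalg.cond(A_scaled))
</syntaxhighlight>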
 
==External links==
* [http://www.math-linux.com/spip.php?article55 Preconditioned Conjugate Gradient] – math-linux.com
* [http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf An Introduction to the Conjugate Gradient Method Without the Agonizing Pain] by Jonathan Richard Shewchuk
 
[[Category:Numerical linear algebra]]
 
{{math-stub}}