{{Short description|Algorithm for solving matrix-vector equations}}
== Background ==
A [[system of linear equations]] <math>A{\bold x} = {\bold b}</math> consists of a known [[Matrix (mathematics)|matrix]] <math>A</math> and a known [[Vector (mathematics)|vector]] <math>{\bold b}</math>. To solve the system is to find the value of the unknown vector <math>{\bold x}</math>.<ref name="vorst03" /><ref>{{Citation |title=Matrix Analysis and Applied Linear Algebra |pages=1–40 |access-date=2023-12-18 |archive-url=https://web.archive.org/web/20040610221137/http://www.matrixanalysis.com/Chapter1.pdf |chapter=Linear equations |date=2000 |chapter-url=http://www.matrixanalysis.com/Chapter1.pdf |place= Philadelphia, PA |publisher=SIAM |doi=10.1137/1.9780898719512.ch1 |doi-broken-date=11 July 2025|isbn=978-0-89871-454-8 |archive-date=2004-06-10 }}</ref> A direct method for solving a system of linear equations is to take the inverse of the matrix <math>A</math>, then calculate <math>\bold x = A^{-1}\bold b</math>. However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guess <math>\bold x^{(0)}</math>, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution.<ref>{{cite web|title=Iterative Methods for Linear Systems|publisher=[[Mathworks]]|url=https://au.mathworks.com/help/matlab/math/iterative-methods-for-linear-systems.html}}</ref><ref>{{cite web|title=Iterative Methods for Solving Linear Systems|author=Jean Gallier|publisher=[[UPenn]]|url=https://www.cis.upenn.edu/~cis5150/cis515-12-sl5.pdf}}</ref>
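A minimal sketch of the two approaches, assuming NumPy and SciPy are available (SciPy's <code>scipy.sparse.linalg.cgs</code> provides an implementation of the conjugate gradient squared method; the small matrix here is purely illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse.linalg import cgs  # SciPy's conjugate gradient squared solver

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Direct approach: equivalent to forming x = A^{-1} b, which becomes
# expensive as the matrix grows.
x_direct = np.linalg.solve(A, b)

# Iterative approach: start from an initial guess and refine it until the
# residual is sufficiently small. info == 0 indicates convergence.
x_iterative, info = cgs(A, b, x0=np.zeros(2))
</syntaxhighlight>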
As with the [[conjugate gradient method]], [[biconjugate gradient method]], and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable [[optimisation problems]], such as [[power-flow study|power-flow analysis]], [[hyperparameter optimisation]], and [[facial recognition system|facial recognition]].<ref>{{cite web|title=Conjugate gradient methods|author1=Alexandra Roberts|author2=Anye Shi|author3=Yue Sun|access-date=2023-12-26|publisher=[[Cornell University]]|url=https://optimization.cbe.cornell.edu/index.php?title=Conjugate_gradient_methods}}</ref>
== Algorithm ==
The algorithm is as follows:<ref>{{cite book|author1=R. Barrett|author2=M. Berry|author3=T. F. Chan|author4=J. Demmel|author5=J. Donato|author6=J. Dongarra|author7=V. Eijkhout|author8=R. Pozo|author9=C. Romine|author10=H. Van der Vorst|title=Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition|publisher=SIAM|year=1994|url=https://netlib.org/linalg/html_templates/Templates.html}}</ref>
# Choose an initial guess <math>{\bold x}^{(0)}</math>
# Compute the residual <math>{\bold r}^{(0)} = {\bold b} - A{\bold x}^{(0)}</math>
# Choose <math>\tilde {\bold r}^{(0)} = {\bold r}^{(0)}</math>
# For <math>i = 1, 2, 3, \dots</math> do:
## <math>\rho^{(i-1)} = \tilde {\bold r}^T{\bold r}^{(i-1)}</math>
## If <math>\rho^{(i-1)} = 0</math>, the method fails.
## If <math>i=1</math>:
### <math>{\bold p}^{(1)} = {\bold u}^{(1)} = {\bold r}^{(0)}</math>
## Else:
### <math>\beta^{(i-1)} = \rho^{(i-1)}/\rho^{(i-2)}</math>
### <math>{\bold u}^{(i)} = {\bold r}^{(i-1)} + \beta^{(i-1)}{\bold q}^{(i-1)}</math>
### <math>{\bold p}^{(i)} = {\bold u}^{(i)} + \beta^{(i-1)}({\bold q}^{(i-1)} + \beta^{(i-1)}{\bold p}^{(i-1)})</math>
## Solve <math>M\hat {\bold p}={\bold p}^{(i)}</math>, where <math>M</math> is a pre-conditioner.
## <math>\hat {\bold v} = A\hat {\bold p}</math>
## <math>\alpha^{(i)} = \rho^{(i-1)} / \tilde {\bold r}^T \hat {\bold v}</math>
## <math>{\bold q}^{(i)} = {\bold u}^{(i)} - \alpha^{(i)}\hat {\bold v}</math>
## Solve <math>M\hat {\bold u} = {\bold u}^{(i)} + {\bold q}^{(i)}</math>
## <math>{\bold x}^{(i)} = {\bold x}^{(i-1)} + \alpha^{(i)} \hat {\bold u}</math>
## <math>\hat {\bold q} = A\hat {\bold u}</math>
## <math>{\bold r}^{(i)} = {\bold r}^{(i-1)} - \alpha^{(i)}\hat {\bold q}</math>
## Check for convergence; if the method has converged, end the loop and return <math>{\bold x}^{(i)}</math>
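The following is a minimal NumPy sketch of the steps above. The function name <code>cgs_solve</code>, the <code>M_solve</code> argument used to apply the pre-conditioner, and the stopping test on the relative residual are illustrative choices rather than part of the cited pseudocode:

<syntaxhighlight lang="python">
import numpy as np

def cgs_solve(A, b, x0, M_solve=None, tol=1e-8, max_iter=1000):
    """Pre-conditioned conjugate gradient squared, following the steps above.

    M_solve(v) should return the solution of M z = v; leaving it as None
    corresponds to no pre-conditioning (M = I).
    """
    if M_solve is None:
        M_solve = lambda v: v
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x                       # r^(0) = b - A x^(0)
    r_tilde = r.copy()                  # choose r~ = r^(0)
    rho_prev = 1.0
    p = q = np.zeros_like(r)
    for i in range(1, max_iter + 1):
        rho = r_tilde @ r               # rho^(i-1) = r~^T r^(i-1)
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_prev
            u = r + beta * q
            p = u + beta * (q + beta * p)
        p_hat = M_solve(p)              # solve M p_hat = p^(i)
        v_hat = A @ p_hat
        alpha = rho / (r_tilde @ v_hat)
        q = u - alpha * v_hat
        u_hat = M_solve(u + q)          # solve M u_hat = u^(i) + q^(i)
        x = x + alpha * u_hat
        r = r - alpha * (A @ u_hat)     # q_hat = A u_hat; r^(i) = r^(i-1) - alpha q_hat
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, i                 # converged
    return x, max_iter

# Example usage: solve a small system (CGS also handles non-symmetric A).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iterations = cgs_solve(A, b, x0=np.zeros(2))
</syntaxhighlight>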
== See also ==
* [[Biconjugate gradient method]]
* [[Biconjugate gradient stabilized method]]
* [[Generalized minimal residual method]]
== References ==
{{Reflist}}
[[Category:Numerical linear algebra]]
[[Category:Gradient methods]]