Conjugate gradient squared method

In [[numerical linear algebra]], the '''conjugate gradient squared method (CGS)''' is an [[iterative method|iterative]] algorithm for solving [[systems of linear equations]] of the form <math>A{\bold x} = {\bold b}</math>, particularly in cases where computing the [[transpose]] <math>A^T</math> is impractical.<ref>{{cite web|title=Conjugate Gradient Squared Method|author1=Noel Black|author2=Shirley Moore|publisher=[[MathWorld|Wolfram MathWorld]]|url=https://mathworld.wolfram.com/ConjugateGradientSquaredMethod.html}}</ref> The CGS method was developed as an improvement to the [[biconjugate gradient method]].<ref>{{cite web|title=cgs|author=Mathworks|url=https://au.mathworks.com/help/matlab/ref/cgs.html}}</ref><ref>{{cite book|author=[[Henk van der Vorst]]|title=Iterative Krylov Methods for Large Linear Systems|chapter=Bi-Conjugate Gradients|year=2003|publisher=Cambridge University Press|isbn=0-521-81828-1}}</ref><ref>{{cite journal|title=CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems|author=Peter Sonneveld|journal=SIAM Journal on Scientific and Statistical Computing|volume=10|issue=1|pages=36–52|date=1989|url=https://www.proquest.com/docview/921988114|url-access=limited|doi=10.1137/0910004|id={{ProQuest|921988114}} }}</ref>
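As a minimal illustration (a sketch assuming SciPy and NumPy are available, not drawn from the cited sources), CGS is implemented in several numerical libraries; SciPy, for example, provides it as <code>scipy.sparse.linalg.cgs</code>. The example below builds a small, hypothetical nonsymmetric sparse system and solves it, relying only on products with <math>A</math>, not with <math>A^T</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cgs

# A small, hypothetical nonsymmetric system A x = b.
A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 1.0, 3.0]]))
b = np.array([1.0, 2.0, 3.0])

# cgs() returns the approximate solution and a status flag
# (info == 0 means the iteration converged to the requested tolerance).
x, info = cgs(A, b)

print(info, np.linalg.norm(b - A @ x))  # status flag and residual norm
</syntaxhighlight>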
 
== Background ==
A system of linear equations <math>A{\bold x} = {\bold b}</math> consists of a known [[Matrix (mathematics)|matrix]] <math>A</math> and a known [[Vector (mathematics)|vector]] <math>{\bold b}</math>; to solve the system is to find the value of the unknown vector <math>{\bold x}</math>. A direct method of solution is to compute the inverse of <math>A</math> and then calculate <math>\bold x = A^{-1}\bold b</math>. However, computing the inverse is computationally expensive, particularly for the large, sparse matrices that arise in many applications, so iterative methods are commonly used instead. An iterative method begins with a guess <math>\bold x^{(0)}</math> and improves the guess on each iteration. Once the difference between successive guesses, or the residual <math>{\bold b} - A{\bold x}^{(k)}</math>, is sufficiently small, the method is considered to have converged to a solution.
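The contrast between the two approaches can be shown with a short sketch (not part of the original article). It solves a small, hypothetical, diagonally dominant system directly and then with Jacobi iteration, a simple iterative method used here only to illustrate the guess-and-refine pattern; CGS follows the same pattern with a more sophisticated update and does not require diagonal dominance.

<syntaxhighlight lang="python">
import numpy as np

# Small illustrative system A x = b (diagonally dominant so that the
# simple Jacobi iteration below converges).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

# Direct approach: mathematically x = A^{-1} b, though solve() avoids
# forming the inverse explicitly.
x_direct = np.linalg.solve(A, b)

# Iterative approach: start from a guess x^(0) and refine it until
# successive guesses stop changing appreciably.
x = np.zeros_like(b)          # initial guess x^(0)
D = np.diag(A)                # diagonal entries of A
R = A - np.diagflat(D)        # off-diagonal part of A
for k in range(200):
    x_new = (b - R @ x) / D   # Jacobi update
    if np.linalg.norm(x_new - x) < 1e-10:   # successive guesses close enough
        x = x_new
        break
    x = x_new

print(x_direct, x)            # both approximate the same solution
</syntaxhighlight>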
 
As with other methods for solving matrix-vector equations, the CGS method can be used to solve optimisation problems, such as power-flow analysis.
 
== The algorithm ==