In [[numerical linear algebra]], the '''Bartels-Stewart''' algorithm is used to numerically solve the [[Sylvester equation|Sylvester matrix equation]] <math> AX - XB = C</math>. Developed by R.H. Bartels and G.W. Stewart in 1972<ref name=":0">{{Cite journal|last=Bartels|first=R. H.|last2=Stewart|first2=G. W.|date=1972|title=Solution of the matrix equation AX + XB = C [F4]|url=http://dl.acm.org/citation.cfm?id=361573.361582|journal=Communications of the ACM|volume=15|issue=9|pages=820–826|doi=10.1145/361573.361582|issn=0001-0782}}</ref>, it was the first [[numerical stability|numerically stable]] method that could be systematically applied to solve such equations. The algorithm works by using the [[Schur decomposition|real Schur decompositions]] of <math>A</math> and <math>B</math> to transform <math> AX - XB = C</math> into a triangular system that can then be solved using forward or backward substitution. In 1979, [[Gene H. Golub|G. Golub]], [[Charles F. Van Loan|C. Van Loan]] and S. Nash introduced an improved version of the algorithm<ref name=":1">{{Cite journal|last=Golub|first=G.|last2=Nash|first2=S.|last3=Loan|first3=C. Van|date=1979|title=A Hessenberg-Schur method for the problem AX + XB= C|url=https://ieeexplore.ieee.org/document/1102170/|journal=IEEE Transactions on Automatic Control|volume=24|issue=6|pages=909–913|doi=10.1109/TAC.1979.1102170|issn=0018-9286}}</ref>, known as the Hessenberg-Schur algorithm. It remains a standard approach for solving [[Sylvester equation|Sylvester equations]] when <math>X</math> is of small to moderate size.
== The algorithm ==
Let <math>A \in \mathbb{R}^{m \times m}</math>, <math>B \in \mathbb{R}^{n \times n}</math>, and <math>X, C \in \mathbb{R}^{m \times n}</math>, and assume that the eigenvalues of <math>A</math> are distinct from the eigenvalues of <math>B</math>. Then, the matrix equation <math> AX - XB = C</math> has a unique solution. The Bartels-Stewart algorithm computes <math>X</math> by applying the following steps<ref name=":1" />:
1. Compute the [[Schur decomposition|real Schur decompositions]]

:<math>R = U^TAU,</math>
:<math>S = V^TB^TV.</math>

The matrices <math>R</math> and <math>S</math> are block upper triangular, with diagonal blocks of size <math>1 \times 1</math> or <math>2 \times 2</math>.

2. Set <math>F = U^TCV.</math>

3. Solve the simplified system <math>RY - YS^T = F</math>, where <math>Y = U^TXV</math>. This can be done using forward substitution on the blocks of <math>R</math> and <math>S</math>.

4. Set <math>X = UYV^T.</math>
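A minimal NumPy/SciPy sketch of these steps is shown below. For simplicity it uses the complex Schur form, so that <math>R</math> and <math>S</math> are genuinely triangular and the substitution in step 3 proceeds one column at a time without <math>2 \times 2</math> block handling; the function name and loop structure are illustrative, not taken from any particular library.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import schur, solve_triangular

def bartels_stewart(A, B, C):
    """Solve AX - XB = C, assuming A and B have no common eigenvalues."""
    R, U = schur(A, output='complex')            # R = U^H A U, upper triangular
    S, V = schur(B.conj().T, output='complex')   # S = V^H B^H V, upper triangular
    F = U.conj().T @ C @ V                       # transformed right-hand side
    m, n = F.shape
    Y = np.zeros((m, n), dtype=complex)
    for k in range(n - 1, -1, -1):               # substitute column by column
        rhs = F[:, k] + Y[:, k + 1:] @ S[k, k + 1:].conj()
        Y[:, k] = solve_triangular(R - S[k, k].conj() * np.eye(m), rhs)
    X = U @ Y @ V.conj().T                       # transform back: X = U Y V^H
    return X.real if all(np.isrealobj(M) for M in (A, B, C)) else X
</syntaxhighlight>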
=== Computational cost ===
Using the [[QR algorithm]], the [[Schur decomposition| real Schur decompositions]] in step 1 require approximately <math>10(m^3 + n^3)</math> flops, so that the overall computational cost is <math>10(m^3 + n^3) + 2.5(mn^2 + nm^2)</math><ref name=":1" />.
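For example, when <math>m = n</math>, this comes to roughly <math>20n^3 + 5n^3 = 25n^3</math> flops.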
=== Simplifications and special cases ===
In the special case where <math>B=-A^T</math> and <math>C</math> is symmetric, the solution <math>X</math> will also be symmetric. This symmetry can be exploited so that <math>Y</math> is found more efficiently in step 3 of the algorithm<ref name=":0" />.
If <math>A</math> and <math>B</math> are both symmetric matrices, then <math>R</math> and <math>S</math> from step 1 are diagonal<ref>{{Cite book|last=Golub|first=Gene H.|last2=Van Loan|first2=Charles F.|date=1996|title=Matrix Computations|url=https://www.worldcat.org/oclc/34515797|publisher=Johns Hopkins University Press|edition=3rd|___location=Baltimore|isbn=080185413X|oclc=34515797}}</ref>. The solution matrix <math>Y</math> in step 3 can therefore be expressed explicitly as <math>Y_{jk} = F_{jk}/(R_{jj} - S_{kk})</math>.
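As an illustration of this special case, the following short sketch assumes symmetric real <math>A</math> and <math>B</math> with no shared eigenvalues (the function name is hypothetical):

<syntaxhighlight lang="python">
import numpy as np

def solve_sylvester_symmetric(A, B, C):
    """Solve AX - XB = C for symmetric A, B with disjoint eigenvalues."""
    r, U = np.linalg.eigh(A)            # A = U diag(r) U^T, so R = diag(r)
    s, V = np.linalg.eigh(B)            # B = V diag(s) V^T, so S = diag(s)
    F = U.T @ C @ V
    Y = F / (r[:, None] - s[None, :])   # Y_jk = F_jk / (R_jj - S_kk)
    return U @ Y @ V.T
</syntaxhighlight>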
== The Hessenberg-Schur algorithm ==
The Hessenberg-Schur algorithm<ref name=":1" /> replaces the decomposition <math>R = U^TAU</math> in step 1 with the decomposition <math>H = Q^TAQ</math>, where <math>H</math> is an [[Hessenberg matrix| upper-Hessenberg matrix]]. This leads to a system of the form <math> HY - YS^T = F</math> that can be solved using forward substitution. The advantage of this approach is that <math>H = Q^TAQ</math> can be found using [[Householder transformation| Householder reflections]] at a cost of <math>(5/3)m^3</math> flops, compared to the <math>10m^3</math> flops required to compute the real Schur decomposition of <math>A</math>.
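A sketch of this variant, under the same simplifying assumptions as the earlier sketch (complex Schur form for <math>B</math>, and a general dense solver standing in for a specialized Hessenberg solver), might look as follows:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import hessenberg, schur, solve

def hessenberg_schur(A, B, C):
    """Solve AX - XB = C; only B is reduced to Schur form."""
    H, Q = hessenberg(A, calc_q=True)            # H = Q^T A Q, upper Hessenberg
    S, V = schur(B.conj().T, output='complex')   # S = V^H B^H V, upper triangular
    F = Q.conj().T @ C @ V
    m, n = F.shape
    Y = np.zeros((m, n), dtype=complex)
    for k in range(n - 1, -1, -1):
        rhs = F[:, k] + Y[:, k + 1:] @ S[k, k + 1:].conj()
        # A dense solve stands in for a specialized O(m^2) Hessenberg solve.
        Y[:, k] = solve(H - S[k, k].conj() * np.eye(m), rhs)
    X = Q @ Y @ V.conj().T
    return X.real if all(np.isrealobj(M) for M in (A, B, C)) else X
</syntaxhighlight>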
== Software and implementation ==
The subroutines required for the Hessenberg-Schur variant of the Bartels-Stewart algorithm are implemented in the [http://slicot.org/ SLICOT library]. These are used in the [https://www.mathworks.com/help/control/ref/lyap.html MATLAB control system toolbox].
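In SciPy, the routine scipy.linalg.solve_sylvester also provides a direct, Schur-decomposition-based Sylvester solver. Note that it uses the sign convention <math>AX + XB = Q</math>, so the equation <math>AX - XB = C</math> considered here is solved by negating <math>B</math>; a brief usage sketch:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((5, 3))

# solve_sylvester solves AX + XB = Q, so pass -B to solve AX - XB = C.
X = solve_sylvester(A, -B, C)
print(np.allclose(A @ X - X @ B, C))   # True up to rounding error
</syntaxhighlight>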
== Alternative approaches ==
For large systems, the <math>\mathcal{O}(m^3 + n^3)</math> cost of the Bartels-Stewart algorithm can be prohibitive. When <math>A</math> and <math>B</math> are sparse or structured, so that linear solves and matrix-vector products involving them are cheap, iterative algorithms can potentially perform better. These include projection-based methods, which use Krylov subspace iterations, methods based on the alternating-direction implicit (ADI) iteration, and hybridizations that involve both projection and ADI<ref>{{Cite journal|last=Simoncini|first=V.|date=2016|title=Computational Methods for Linear Matrix Equations|url=https://epubs.siam.org/doi/10.1137/130912839|journal=SIAM Review|language=en-US|volume=58|issue=3|pages=377–441|doi=10.1137/130912839|issn=0036-1445}}</ref>. Iterative methods can also be used to directly construct low-rank approximations to <math>X</math> when solving <math>AX-XB = C</math>. This is important when, for instance, <math>X</math> is too large to be stored in memory explicitly.
== References ==
{{Reflist}}
<!--- Categories --->