Tridiagonal matrix

{{Short description|Matrix with nonzero elements on the main diagonal and the diagonals above and below it}}
In [[linear algebra]], a '''tridiagonal matrix''' is a [[band matrix]] that has nonzero elements only on the [[main diagonal]], the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following [[matrix (mathematics)|matrix]] is [[Tridiagonal matrix algorithm|tridiagonal]]:
 
:<math>\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3 \\
\end{pmatrix}.</math>
 
The [[determinant]] of a tridiagonal matrix is given by the '''[[Continuant (mathematics)|continuant]]''' of its elements.<ref>{{cite book | author=Thomas Muir | author-link=Thomas Muir (mathematician) | title=A treatise on the theory of determinants | url=https://archive.org/details/treatiseontheory0000muir | url-access=registration | year=1960 | publisher=[[Dover Publications]] | pages=[https://archive.org/details/treatiseontheory0000muir/page/516 516–525] }}</ref>
 
An [[orthogonal transformation]] of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the [[Lanczos algorithm]].
 
==Properties==
A tridiagonal matrix is a matrix that is both an upper and a lower [[Hessenberg matrix]].<ref>{{cite book|first1=Roger A.| last1=Horn | first2=Charles R. | last2=Johnson | title = Matrix Analysis | publisher= Cambridge University Press |year= 1985 |isbn = 0521386322 | page=28}}</ref> In particular, a tridiagonal matrix is a [[direct sum]] of ''p'' 1-by-1 and ''q'' 2-by-2 matrices such that {{nowrap|1=''p'' + 2''q'' = ''n''}}, the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily [[Symmetric matrix|symmetric]] or [[Hermitian matrix|Hermitian]], many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix ''A'' satisfies ''a''<sub>''k'',''k''+1</sub> ''a''<sub>''k''+1,''k''</sub> > 0 for all ''k'', so that the signs of its entries are symmetric, then it is [[similar (linear algebra)|similar]] to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its [[eigenvalue]]s are real. If we replace the strict inequality by ''a''<sub>''k'',''k''+1</sub> ''a''<sub>''k''+1,''k''</sub> &ge; 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.<ref>Horn & Johnson, page 174</ref>
 
The [[Set (mathematics)|set]] of all ''n'' &times; ''n'' tridiagonal matrices forms a ''3n-2'' dimensional [[vector space]].
 
===Determinant===
{{Main|Continuant (mathematics)}}
The [[determinant]] of a tridiagonal matrix ''A'' of order ''n'' can be computed from a three-term [[recurrence relation]].<ref>{{Cite journal | last1 = El-Mikkawy | first1 = M. E. A. | title = On the inverse of a general tridiagonal matrix | doi = 10.1016/S0096-3003(03)00298-4 | journal = Applied Mathematics and Computation | volume = 150 | issue = 3 | pages = 669–679 | year = 2004 }}</ref> Write ''f''<sub>1</sub>&nbsp;=&nbsp;|''a''<sub>1</sub>|&nbsp;=&nbsp;''a''<sub>1</sub> (i.e., ''f''<sub>1</sub> is the determinant of the 1 by 1 matrix consisting only of ''a''<sub>1</sub>), and let
 
:<math>f_n = \begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{vmatrix}.</math>

The sequence (''f''<sub>''i''</sub>) satisfies the recurrence relation

:<math>f_n = a_n f_{n-1} - c_{n-1}b_{n-1}f_{n-2}</math>

with initial values ''f''<sub>0</sub>&nbsp;=&nbsp;1 and ''f''<sub>&minus;1</sub>&nbsp;=&nbsp;0. The cost of computing the determinant this way is linear in ''n'', whereas it is cubic for a general matrix.
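For illustration, the recurrence above can be evaluated directly in linear time. The following Python sketch (the function name and argument layout are illustrative, not taken from the cited sources) computes the determinant from the diagonal ''a'', superdiagonal ''b'' and subdiagonal ''c'':

<syntaxhighlight lang="python">
def tridiagonal_determinant(a, b, c):
    """Determinant of a tridiagonal matrix via the three-term recurrence.

    a: diagonal entries a_1..a_n, b: superdiagonal b_1..b_{n-1},
    c: subdiagonal c_1..c_{n-1} (sequences of numbers).
    """
    f_prev, f_curr = 1.0, a[0]  # f_0 = 1, f_1 = a_1
    for k in range(1, len(a)):  # f_k = a_k f_{k-1} - c_{k-1} b_{k-1} f_{k-2}
        f_prev, f_curr = f_curr, a[k] * f_curr - c[k - 1] * b[k - 1] * f_prev
    return f_curr
</syntaxhighlight>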
 
===Inversion===
The inverse of a non-singular tridiagonal matrix ''T''

:<math>T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}</math>

is given by

:<math>(T^{-1})_{ij} = \begin{cases}
(-1)^{j-i}b_i \cdots b_{j-1} \theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i < j\\
\theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i = j\\
(-1)^{i-j}c_j \cdots c_{i-1} \theta_{j-1} \phi_{i+1}/\theta_n & \text{ if } i > j\\
\end{cases}</math>

where the ''θ''<sub>''i''</sub> satisfy the recurrence relation

:<math>\theta_i = a_i \theta_{i-1} - b_{i-1}c_{i-1}\theta_{i-2} \qquad i=2,3,\ldots,n</math>

with initial conditions ''θ''<sub>0</sub>&nbsp;=&nbsp;1, ''θ''<sub>1</sub>&nbsp;=&nbsp;''a''<sub>1</sub>, and the ''ϕ''<sub>''i''</sub> satisfy

:<math>\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2} \qquad i=n-1,\ldots,1</math>
 
with initial conditions ''ϕ''<sub>''n''+1</sub>&nbsp;=&nbsp;1 and ''ϕ''<sub>''n''</sub>&nbsp;=&nbsp;''a<sub>n</sub>''.<ref>{{Cite journal | last1 = Da Fonseca | first1 = C. M. | doi = 10.1016/j.cam.2005.08.047 | title = On the eigenvalues of some tridiagonal matrices | journal = Journal of Computational and Applied Mathematics | volume = 200 | pages = 283–286 | year = 2007 | doi-access = free }}</ref><ref>{{Cite journal | last1 = Usmani | first1 = R. A. | doi = 10.1016/0024-3795(94)90414-6 | title = Inversion of a tridiagonal Jacobi matrix | journal = Linear Algebra and Its Applications | volume = 212–213 | pages = 413–414 | year = 1994 | doi-access = free }}</ref>
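As an illustration, the formula above can be turned directly into code. The following Python sketch (function and variable names are illustrative; it assumes ''θ''<sub>''n''</sub>&nbsp;&ne;&nbsp;0, i.e. a non-singular matrix) builds the full inverse from the two recurrences:

<syntaxhighlight lang="python">
from math import prod

def tridiagonal_inverse(a, b, c):
    """Inverse of a non-singular tridiagonal matrix from the theta/phi recurrences.

    a: diagonal a_1..a_n, b: superdiagonal b_1..b_{n-1}, c: subdiagonal c_1..c_{n-1}.
    Returns the inverse as a list of rows; assumes theta_n != 0.
    """
    n = len(a)
    theta = [1.0, a[0]] + [0.0] * (n - 1)   # theta_0 .. theta_n
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = [0.0] * (n + 2)                   # phi_1 .. phi_{n+1} (phi[0] unused)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i < j:
                entry = (-1) ** (j - i) * prod(b[i - 1:j - 1]) * theta[i - 1] * phi[j + 1]
            elif i == j:
                entry = theta[i - 1] * phi[j + 1]
            else:
                entry = (-1) ** (i - j) * prod(c[j - 1:i - 1]) * theta[j - 1] * phi[i + 1]
            inv[i - 1][j - 1] = entry / theta[n]
    return inv
</syntaxhighlight>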
 
Closed form solutions can be computed for special cases such as [[symmetric matrix|symmetric matrices]] with all diagonal and off-diagonal elements equal<ref>{{Cite journal | last1 = Hu | first1 = G. Y. | last2 = O'Connell | first2 = R. F. | doi = 10.1088/0305-4470/29/7/020 | title = Analytical inversion of symmetric tridiagonal matrices | journal = Journal of Physics A: Mathematical and General | volume = 29 | issue = 7 | pages = 1511 | year = 1996 | bibcode = 1996JPhA...29.1511H }}</ref> or [[Toeplitz matrices]]<ref>{{Cite journal | last1 = Huang | first1 = Y. | last2 = McColl | first2 = W. F. | doi = 10.1088/0305-4470/30/22/026 | title = Analytical inversion of general tridiagonal matrices | journal = Journal of Physics A: Mathematical and General | volume = 30 | issue = 22 | pages = 7919 | year = 1997 | bibcode = 1997JPhA...30.7919H }}</ref> and for the general case as well.<ref>{{Cite journal | last1 = Mallik | first1 = R. K. | doi = 10.1016/S0024-3795(00)00262-7 | title = The inverse of a tridiagonal matrix | journal = Linear Algebra and Its Applications | volume = 325 | issue = 1–3 | pages = 109–139 | year = 2001 | doi-access = free }}</ref><ref>{{Cite journal | last1 = Kılıç | first1 = E. | doi = 10.1016/j.amc.2007.07.046 | title = Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions | journal = Applied Mathematics and Computation | volume = 197 | pages = 345–357 | year = 2008 }}</ref>
 
In general, the inverse of a tridiagonal matrix is a [[semiseparable matrix]] and vice versa.<ref name="VandebrilBarel2008">{{cite book|author1=Raf Vandebril|author2=Marc Van Barel|author3=Nicola Mastronardi|title=Matrix Computations and Semiseparable Matrices. Volume I: Linear Systems|year=2008|publisher=JHU Press|isbn=978-0-8018-8714-7|at=Theorem 1.38, p. 41}}</ref> The inverse of a symmetric tridiagonal matrix can be written as a [[single-pair matrix]] (a.k.a. ''generator-representable semiseparable matrix'') of the form<ref name="Meurant1992">{{cite journal |last1=Meurant |first1=Gerard |title=A review on the inverse of symmetric tridiagonal and block tridiagonal matrices |journal=SIAM Journal on Matrix Analysis and Applications |date=1992 |volume=13 |issue=3 |pages=707–728 |doi=10.1137/0613045 |url=https://doi.org/10.1137/0613045|url-access=subscription }}</ref><ref>{{cite journal |last1=Bossu |first1=Sebastien |title=Tridiagonal and single-pair matrices and the inverse sum of two single-pair matrices |journal=Linear Algebra and Its Applications |date=2024 |volume=699 |pages=129–158 |doi=10.1016/j.laa.2024.06.018 |url=https://authors.elsevier.com/a/1jOTP5YnCtZEc|arxiv=2304.06100 }}</ref>
 
<math>\begin{pmatrix}
\alpha_1 & -\beta_1 \\
-\beta_1 & \alpha_2 & -\beta_2 \\
& \ddots & \ddots & \ddots & \\
& & \ddots & \ddots & -\beta_{n-1} \\
& & & -\beta_{n-1} & \alpha_n
\end{pmatrix}^{-1} =
\begin{pmatrix}
a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\
a_1 b_2 & a_2 b_2 & \cdots & a_2 b_n \\
\vdots & \vdots & \ddots & \vdots \\
a_1 b_n & a_2 b_n & \cdots & a_n b_n
\end{pmatrix}
= \left( a_{\min(i,j)} b_{\max(i,j)} \right)
</math>
 
where <math>\begin{cases} \displaystyle a_i = \frac{\beta_{i} \cdots \beta_{n-1}}{\delta_i \cdots \delta_n\,b_n}
\\ \displaystyle
b_i = \frac{\beta_1 \cdots \beta_{i-1}}{d_1 \cdots d_i}\end{cases}</math>
with <math>\begin{cases}
d_n = \alpha_n,\quad d_{i-1} = \alpha_{i-1} - \frac{\beta_{i-1}^2}{d_{i}}, & i = n, n-1, \cdots, 2,
\\
\delta_1 = \alpha_1, \quad
\delta_{i+1} = \alpha_{i+1} - \frac{\beta_{i}^2}{\delta_{i}}, & i = 1, 2, \cdots, n-1.
\end{cases}
</math>
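
The following NumPy sketch implements this construction as stated (variable names follow the formula; it assumes all ''β''<sub>''i''</sub>&nbsp;&ne;&nbsp;0 and that none of the ''d''<sub>''i''</sub> or ''δ''<sub>''i''</sub> vanish):

<syntaxhighlight lang="python">
import numpy as np

def single_pair_inverse(alpha, beta):
    """Inverse of the symmetric tridiagonal matrix with diagonal alpha and
    off-diagonal entries -beta, returned as the single-pair matrix
    (a_min(i,j) * b_max(i,j)).  Assumes beta_i != 0 and no division by zero."""
    n = len(alpha)
    d = np.empty(n)                   # backward recurrence: d_n, ..., d_1
    d[n - 1] = alpha[n - 1]
    for i in range(n - 1, 0, -1):     # d_{i-1} = alpha_{i-1} - beta_{i-1}^2 / d_i
        d[i - 1] = alpha[i - 1] - beta[i - 1] ** 2 / d[i]
    delta = np.empty(n)               # forward recurrence: delta_1, ..., delta_n
    delta[0] = alpha[0]
    for i in range(1, n):             # delta_{i+1} = alpha_{i+1} - beta_i^2 / delta_i
        delta[i] = alpha[i] - beta[i - 1] ** 2 / delta[i - 1]
    b = np.empty(n)                   # b_i = beta_1...beta_{i-1} / (d_1...d_i)
    for i in range(n):
        b[i] = np.prod(beta[:i]) / np.prod(d[:i + 1])
    a = np.empty(n)                   # a_i = beta_i...beta_{n-1} / (delta_i...delta_n * b_n)
    for i in range(n):
        a[i] = np.prod(beta[i:]) / (np.prod(delta[i:]) * b[n - 1])
    idx = np.arange(n)
    return a[np.minimum.outer(idx, idx)] * b[np.maximum.outer(idx, idx)]
</syntaxhighlight>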
 
===Solution of linear system===
{{Main|tridiagonal matrix algorithm}}
A system of equations ''Ax''&nbsp;=&nbsp;''b'' for&nbsp;<math>b\in \R^n</math> can be solved by an efficient form of Gaussian elimination when ''A'' is tridiagonal, called the [[tridiagonal matrix algorithm]], requiring ''O''(''n'') operations.<ref>{{cite book|first1=Gene H.|last1=Golub|author-link1=Gene H. Golub |first2=Charles F. |last2=Van Loan|author-link2=Charles F. Van Loan| title =Matrix Computations| edition=3rd|publisher=The Johns Hopkins University Press|year=1996|isbn=0-8018-5414-8}}</ref>
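
A minimal Python sketch of this elimination (commonly called the Thomas algorithm; the function name is illustrative, and no pivoting is performed, so the matrix must not require it, e.g. it should be diagonally dominant) is:

<syntaxhighlight lang="python">
def thomas_solve(a, b, c, d):
    """Solve T x = d for a tridiagonal T with diagonal a (length n),
    superdiagonal b (length n-1) and subdiagonal c (length n-1),
    by forward elimination and back substitution in O(n) operations."""
    n = len(a)
    cp = [0.0] * n                      # modified superdiagonal
    dp = [0.0] * n                      # modified right-hand side
    cp[0] = b[0] / a[0] if n > 1 else 0.0
    dp[0] = d[0] / a[0]
    for i in range(1, n):               # forward elimination
        m = a[i] - c[i - 1] * cp[i - 1]
        cp[i] = b[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / m
    x = [0.0] * n                       # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
</syntaxhighlight>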
 
===Eigenvalues===
 
When a tridiagonal matrix is also [[Toeplitz matrix|Toeplitz]], there is a simple closed-form solution for its eigenvalues, namely:<ref>{{Cite journal | doi = 10.1002/nla.1811| title = Tridiagonal Toeplitz matrices: Properties and novel applications| journal = Numerical Linear Algebra with Applications| volume = 20| issue = 2| pages = 302| year = 2013| last1 = Noschese | first1 = S. | last2 = Pasquini | first2 = L. | last3 = Reichel | first3 = L. }}</ref><ref>This can also be written as <math> a + 2 \sqrt{bc} \cos(k \pi / {(n+1)}) </math> because <math> \cos(x) = -\cos(\pi-x) </math>, as is done in: {{Cite journal | last1 = Kulkarni | first1 = D. | last2 = Schmidt | first2 = D. | last3 = Tsui | first3 = S. K. | title = Eigenvalues of tridiagonal pseudo-Toeplitz matrices | doi = 10.1016/S0024-3795(99)00114-7 | journal = Linear Algebra and Its Applications | volume = 297 | issue = 1–3 | pages = 63–80 | year = 1999 | url = https://hal.archives-ouvertes.fr/hal-01461924/file/KST.pdf }}</ref>
 
:<math> a - 2 \sqrt{bc} \cos \left (\frac{k\pi}{n+1} \right ), \qquad k=1, \ldots, n. </math>
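
For illustration, the closed form can be checked against a general-purpose eigenvalue routine. The following NumPy sketch uses example values for ''a'', ''b'', ''c'' and ''n'' (assuming ''bc''&nbsp;&ge;&nbsp;0 so that the square root is real):

<syntaxhighlight lang="python">
import numpy as np

n, a, b, c = 6, 2.0, -1.0, -1.0     # illustrative tridiagonal Toeplitz matrix
ks = np.arange(1, n + 1)
closed_form = a - 2.0 * np.sqrt(b * c) * np.cos(ks * np.pi / (n + 1))

T = a * np.eye(n) + b * np.eye(n, k=1) + c * np.eye(n, k=-1)
numeric = np.sort(np.linalg.eigvals(T).real)

print(np.allclose(np.sort(closed_form), numeric))   # expected: True
</syntaxhighlight>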
 
A real [[symmetric matrix|symmetric]] tridiagonal matrix has real eigenvalues, and all the eigenvalues are [[Eigenvalues and eigenvectors#Algebraic multiplicity|distinct (simple)]] if all off-diagonal elements are nonzero.<ref>{{Cite book | last1 = Parlett | first1 = B. N. | title = The Symmetric Eigenvalue Problem | orig-year = 1980 | publisher = SIAM | date = 1997 | oclc = 228147822 | series = Classics in applied mathematics | volume = 20 | isbn = 0-89871-402-8 }}</ref> Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring <math>O(n^2)</math> operations for a matrix of size <math>n\times n</math>, although fast algorithms exist which (without parallel computation) require only <math>O(n\log n)</math>.<ref>{{Cite journal |last1 = Coakley |first1= E.S. |last2=Rokhlin | first2=V. | title =A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices | doi = 10.1016/j.acha.2012.06.003 |journal = [[Applied and Computational Harmonic Analysis]] |volume = 34 |issue = 3 |pages = 379–414 |year =2012 |doi-access = free }}</ref>
 
As a side note, an '''unreduced''' symmetric tridiagonal matrix is one whose off-diagonal elements are all non-zero; its eigenvalues are distinct, and its eigenvectors are unique up to a scale factor and mutually orthogonal.<ref>{{cite thesis |last1=Dhillon |first1=Inderjit Singh |title=A New O(n<sup>2</sup>) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem |page=8 |type=PhD |publisher=University of California, Berkeley |id=CSD-97-971, ADA637073 |date=1997 |url=http://www.cs.utexas.edu/~inderjit/public_papers/thesis.pdf}}</ref>
For ''unsymmetric'' or ''nonsymmetric'' tridiagonal matrices one can compute the eigendecomposition using a [[Tridiagonal matrix#Similarity to symmetric tridiagonal matrix|similarity transformation]].
 
=== Similarity to symmetric tridiagonal matrix ===
Given a real tridiagonal, nonsymmetric matrix
:<math>
T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}
</math>
where <math>b_i \neq c_i </math>.
Assume that each product of off-diagonal entries is {{em|strictly}} positive <math>b_i c_i > 0 </math> and define a transformation matrix <math>D</math> by<ref name=kreer_operator>{{Cite journal | last1 = Kreer | first1 = M. | title = Analytic birth-death processes: a Hilbert space approach | doi = 10.1016/0304-4149(94)90112-0 | journal = Stochastic Processes and Their Applications | volume = 49 | issue = 1 | pages = 65–74 | year = 1994 }}</ref>
 
:<math>
D := \operatorname{diag}(\delta_1 , \dots, \delta_n) \quad \text{for} \quad \delta_i :=
\begin{cases}
1 & , \, i = 1
\\
\sqrt{\dfrac{c_{i-1} \cdots c_1}{b_{i-1} \cdots b_1}} & , \, i = 2, \dots, n \,.
\end{cases}
</math>
 
The [[Matrix similarity|similarity transformation]] <math>D^{-1} T D </math> yields a ''symmetric'' tridiagonal matrix <math>J</math>:<ref>{{cite web |first=Gérard |last=Meurant |title=Tridiagonal matrices |date=October 2008 |url=http://www.math.hkbu.edu.hk/ICM/LecturesAndSeminars/08OctMaterials/1/Slide3.pdf |via=Institute for Computational Mathematics, Hong Kong Baptist University}}</ref><ref name=kreer_operator/>
:<math>
J:=D^{-1} T D
= \begin{pmatrix}
a_1 & \sgn b_1 \, \sqrt{b_1 c_1} \\
\sgn b_1 \, \sqrt{b_1 c_1} & a_2 & \sgn b_2 \, \sqrt{b_2 c_2} \\
& \sgn b_2 \, \sqrt{b_2 c_2} & \ddots & \ddots \\
& & \ddots & \ddots & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} \\
& & & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} & a_n
\end{pmatrix} \,.
</math>
 
Note that <math>T</math> and <math>J</math> have the same eigenvalues.
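
A small NumPy sketch of this transformation (the function name is illustrative; it assumes ''b''<sub>''i''</sub>''c''<sub>''i''</sub>&nbsp;>&nbsp;0 as above) is:

<syntaxhighlight lang="python">
import numpy as np

def symmetrize_tridiagonal(a, b, c):
    """Return D and J = D^{-1} T D for a real tridiagonal matrix T with
    diagonal a, superdiagonal b and subdiagonal c, assuming b_i * c_i > 0."""
    n = len(a)
    delta = np.ones(n)
    for i in range(1, n):          # delta_i = sqrt(c_{i-1}...c_1 / (b_{i-1}...b_1))
        delta[i] = delta[i - 1] * np.sqrt(c[i - 1] / b[i - 1])
    D = np.diag(delta)
    T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
    J = np.linalg.inv(D) @ T @ D   # symmetric tridiagonal, same eigenvalues as T
    return D, J
</syntaxhighlight>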
 
==Computer programming==
A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to ''tridiagonal'' form. So, many [[eigenvalue algorithm]]s, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.<ref>{{Cite journal |last1=Eidelman |first1=Yuli |last2=Gohberg |first2=Israel |last3=Gemignani |first3=Luca |date=2007-01-01 |title=On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms |journal=Linear Algebra and Its Applications |language=en |volume=420 |issue=1 |pages=86–101 |doi=10.1016/j.laa.2006.06.028 |issn=0024-3795|doi-access=free }}</ref>
 
A ''tridiagonal matrix'' can also be stored more efficiently than a general matrix by using a special [[matrix representation|storage scheme]]. For instance, the [[LAPACK]] [[Fortran]] package stores an unsymmetric tridiagonal matrix of order ''n'' in three one-dimensional arrays, one of length ''n'' containing the diagonal elements, and two of length ''n'' &minus; 1 containing the [[subdiagonal]] and [[superdiagonal]] elements.
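
For illustration, a matrix–vector product in this compact three-array storage can be written without ever forming the full matrix; the following NumPy sketch (the function name is illustrative) mirrors that layout:

<syntaxhighlight lang="python">
import numpy as np

def tridiagonal_matvec(dl, d, du, x):
    """y = T x for a tridiagonal T stored as three 1-D NumPy arrays:
    d (diagonal, length n), du (superdiagonal, length n-1), dl (subdiagonal, length n-1)."""
    y = d * x
    y[:-1] += du * x[1:]    # contribution of the superdiagonal
    y[1:] += dl * x[:-1]    # contribution of the subdiagonal
    return y
</syntaxhighlight>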
 
==Applications==
The discretization in space of the one-dimensional diffusion or [[heat equation]]
:<math>\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}</math>
using second order central [[finite differences]] results in
 
:<math>
\begin{pmatrix}
\frac{\partial u_{1}(t)}{\partial t} \\
\frac{\partial u_{2}(t)}{\partial t} \\
\vdots \\
\frac{\partial u_{N}(t)}{\partial t}
\end{pmatrix}
= \frac{\alpha}{\Delta x^2} \begin{pmatrix}
-2 & 1 & 0 & \ldots & 0 \\
1 & -2 & 1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & & 1 & -2 & 1 \\
0 & \ldots & 0 & 1 & -2
\end{pmatrix}
\begin{pmatrix}
u_{1}(t) \\
u_{2}(t) \\
\vdots \\
u_{N}(t) \\
\end{pmatrix}
</math>
with discretization constant <math>\Delta x</math>. The matrix is tridiagonal with <math>a_{i}=-2</math> and <math>b_{i}=c_{i}=1</math>. Note: no boundary conditions are used here.
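
A brief NumPy sketch assembling this matrix and taking a single explicit Euler time step follows (the grid size, initial profile and step size are illustrative choices; boundary handling is omitted, as in the text):

<syntaxhighlight lang="python">
import numpy as np

# Assemble the tridiagonal second-difference matrix from the discretization above.
N, alpha, dx = 50, 1.0, 0.01
L = (alpha / dx**2) * (np.diag(-2.0 * np.ones(N))
                       + np.diag(np.ones(N - 1), 1)
                       + np.diag(np.ones(N - 1), -1))

u = np.sin(np.pi * np.arange(1, N + 1) * dx)   # sample initial profile u_i(0)
dt = 0.4 * dx**2 / alpha                       # within the explicit Euler stability limit
u_next = u + dt * (L @ u)                      # one step of du/dt = L u
</syntaxhighlight>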
 
==See also==
* [[Pentadiagonal matrix]]
* [[Jacobi matrix (operator)]]
 
==Notes==
{{Reflist}}
==External links==
* [http://www.netlib.org/lapack/lug/node125.html Tridiagonal and Bidiagonal Matrices] in the LAPACK manual.
* {{cite journal |author=Moawwad El-Mikkawy, Abdelrahman Karawia |title=Inversion of general tridiagonal matrices |journal=Applied Mathematics Letters |volume=19 |issue=8 |year=2006 |pages=712–720 |doi=10.1016/j.aml.2005.11.012 |doi-access=free |url=http://app2.mans.edu.eg/eulc/Libraries/itemsAttach/IssueArticle/1037592/papermik.pdf |url-status=dead |archive-url=https://web.archive.org/web/20110720131242/http://app2.mans.edu.eg/eulc/Libraries/itemsAttach/IssueArticle/1037592/papermik.pdf |archive-date=2011-07-20 }}
* [http://www.cs.utexas.edu/users/flame/pubs/flawn53.pdf High performance algorithms] for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
* [https://web.archive.org/web/20140310122757/http://michal.is/projects/tridiagonal-system-solver-sor-c/ Tridiagonal linear system solver] in C++
 
{{Matrix classes}}
 
[[Category:Sparse matrices]]