Talk:Lanczos algorithm: Difference between revisions

Line 41:
=== Rewrite of algorithm ===
Thinking some more about this, I find it desirable to modify the way the algorithm is stated — partly to address the case <math>\beta_{j+1}=0</math>, and partly to do something about the "changes to" remark at the end, which is confusing in that no variable assigned in the algorithm is ever changed. It turns out the remark is boilerplate text in the {{tl|algorithm-end}} template, and there is no option to omit it. Since [[Wikipedia:WikiProject_Computer_science/Manual_of_style#Algorithms_and_data_structures]] does not recommend using that template, and instead recommends using (numbered) lists with explicit '''input''' and '''output''' headings, a rewrite from scratch seems in order.
 
:'''Input''' a [[Hermitian matrix]] <math>A</math> of size <math>n \times n</math>, and optionally a number of iterations <math>m</math> (as default let <math>m=n</math>)
:'''Output''' an <math>n \times m</math> matrix <math>V</math> with orthonormal columns and a tridiagonal real symmetric matrix <math>T = V^* A V</math> of size <math>m \times m</math>. If <math>m=n</math> then <math>V</math> is [[unitary matrix|unitary]] and <math>V T V^* = A</math>.
:'''Warning''' The Lanczos iteration is prone to numerical instability. When executed in non-exact arithmetic, additional measures (as outlined further down) should be taken to ensure validity of the results.
:# Let <math>v_1 \in \mathbb{C}^n</math> be an arbitrary vector with norm <math>1</math>.
:# Abbreviated initial iteration step:
Line 63 ⟶ 65:
0 & & & & \beta_m & \alpha_m \\
\end{pmatrix}</math>.
:'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 1 < j < m </math>.
 
The vector <math> w_j' </math> is not used after <math> w_j </math> is computed, and the vector <math> w_j </math> is not used after <math> v_{j+1} </math> is computed. Hence one may use the same storage for all three. Likewise, if only the tridiagonal matrix <math>T</math> is sought, then the raw iteration does not need <math> v_{j-1} </math> after having computed <math> w_j </math>, although some schemes for improving the numerical stability would need it later on.
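For concreteness, here is a rough NumPy sketch of the iteration as stated above (a sketch only, not proposed article text; the names are placeholders, one work vector is reused for <math>w_j'</math> and <math>w_j</math> as noted, the reorthogonalisation measures from the warning are omitted, and a breakdown <math>\beta_{j+1}=0</math> simply stops the loop early instead of restarting with a fresh orthonormal vector):

<syntaxhighlight lang="python">
import numpy as np

def lanczos(A, m=None):
    """Sketch of Lanczos tridiagonalization of a Hermitian matrix A."""
    n = A.shape[0]
    if m is None:
        m = n
    V = np.zeros((n, m), dtype=complex)   # columns v_1 .. v_m
    alpha = np.zeros(m)                   # diagonal entries alpha_j of T
    beta = np.zeros(m)                    # beta[j] couples v_{j+1} and v_j; beta[0] unused
    v1 = np.random.default_rng(0).standard_normal(n).astype(complex)
    V[:, 0] = v1 / np.linalg.norm(v1)     # step 1: arbitrary starting vector of norm 1
    for j in range(m):
        w = A @ V[:, j]                        # w_j'; the same storage then holds w_j
        alpha[j] = np.vdot(V[:, j], w).real    # alpha_j = v_j^* w_j' (real since A is Hermitian)
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j] * V[:, j - 1]         # subtract beta_j v_{j-1}
        if j + 1 < m:
            b = np.linalg.norm(w)
            if b == 0:                         # beta_{j+1} = 0: stop early (no restart in this sketch)
                V = V[:, : j + 1]
                alpha, beta = alpha[: j + 1], beta[: j + 1]
                break
            beta[j + 1] = b
            V[:, j + 1] = w / b                # v_{j+1} = w_j / beta_{j+1}
    # T = V^* A V is real symmetric tridiagonal
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T
</syntaxhighlight>

In exact arithmetic the output satisfies <math>T = V^* A V</math> by construction; numerically the identity degrades as orthogonality among the <math>v_j</math> is lost, which is exactly the instability the warning above refers to.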
 
=== Application for eigendecomposition ===
The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.
First observe that when <math>T</math> is <math>n \times n</math>, it is [[Matrix similarity|similar]] to <math>A</math>: if <math>\lambda</math> is an eigenvalue of <math>T</math> then it is also an eigenvalue of <math>A</math>, and if <math> T x = \lambda x </math> (<math>x</math> is an eigenvector of <math>T</math>) then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> (since <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math>). Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.
# For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms.
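To make the similarity argument above concrete, here is a matching sketch (assuming SciPy is available; <code>eigh_tridiagonal</code> is its solver for real symmetric tridiagonal matrices, and the function name is just a placeholder) that computes the eigenpairs of <math>T</math> with such a specialised algorithm and maps them back through <math>V</math>:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh_tridiagonal   # specialised solver for symmetric tridiagonal matrices

def eigenpairs_from_lanczos(V, T):
    """Eigenpairs of T mapped back to (approximate) eigenpairs of A."""
    theta, X = eigh_tridiagonal(np.diag(T), np.diag(T, 1))   # pass diagonal and off-diagonal of T
    Y = V @ X                                                # y = V x for every eigenvector x of T
    return theta, Y
</syntaxhighlight>

For <math>m = n</math> the columns of <math>Y</math> are eigenvectors of <math>A</math> as argued above; for <math>m < n</math> they are only Ritz approximations to them.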
 
Hmm… Better check that the orthogonalisation is stated correctly even for complex elements. [[Special:Contributions/130.243.68.122|130.243.68.122]] ([[User talk:130.243.68.122|talk]]) 12:02, 24 May 2017 (UTC)
 
== Define variables ==