Talk:Lanczos algorithm: Difference between revisions

 
:'''Input''' a [[Hermitian matrix]] <math>A</math> of size <math>n \times n</math>, and optionally a number of iterations <math>m</math> (as default let <math>m=n</math>)
:* Strictly speaking, the algorithm does not need access to the explicit matrix, but only a function <math>v \mapsto A v</math> that computes the product of the matrix with an arbitrary vector. This function is called at most <math>m</math> times.
:'''Output''' an <math>n \times m</math> matrix <math>V</math> with orthonormal columns and a tridiagonal real symmetric matrix <math>T = V^* A V</math> of size <math>m \times m</math>. If <math>m=n</math> then <math>V</math> is [[unitary matrix|unitary]] and <math> A = V T V^* </math>.
:'''Warning''' The Lanczos iteration is prone to numerical instability. When executed in non-exact arithmetic, additional measures (as outlined further down) should be taken to ensure validity of the results.
:# Let <math>v_1 \in \mathbb{C}^n</math> be an arbitrary vector with [[Euclidean norm]] <math>1</math>.
:# Abbreviated initial iteration step:
:## Let <math> w_1' = A v_1 </math>.
:## Let <math> \alpha_1 = w_1'^* v_1 </math>.
:## Let <math> w_1 = w_1' - \alpha_1 v_1 </math>.
:# For <math> j=2,\dots,m </math> do:
:## Let <math> \beta_j = \left\| w_{j-1} \right\| </math> (also [[Euclidean norm]]).
:## If <math> \beta_j \neq 0 </math> then let <math> v_j = w_{j-1} / \beta_j </math>,
:##: else pick as <math>v_j</math> an arbitrary vector with Euclidean norm <math>1</math> that is orthogonal to all of <math> v_1,\dots,v_{j-1} </math>.
:## Let <math> w_j' = A v_j </math>.
:## Let <math> \alpha_j = w_j'^* v_j </math>.
:## Let <math> w_j = w_j' - \alpha_j v_j - \beta_j v_{j-1} </math>.
:# Let <math>V</math> be the matrix with columns <math> v_1,\dots,v_m </math>.
:# Let <math>T = \begin{pmatrix}
\alpha_1 & \beta_2 & & & & \\
\beta_2 & \alpha_2 & \beta_3 & & & \\
& \beta_3 & \alpha_3 & \ddots & & \\
& & \ddots & \ddots & \beta_{m-1} & \\
& & & \beta_{m-1} & \alpha_{m-1} & \beta_m \\
& & & & \beta_m & \alpha_m \\
\end{pmatrix}</math>.
:'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 1 < j < m </math>.
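The iteration above can be sketched in NumPy. This is an illustrative sketch, not a reference implementation from the article: the function name <code>lanczos</code> and the random restart used at breakdown are assumptions, and no reorthogonalization is done, so the stability warning above applies.

```python
import numpy as np

def lanczos(A, m=None, v1=None):
    """Sketch of the raw Lanczos iteration for Hermitian A.

    Returns V (n x m, orthonormal columns) and the tridiagonal
    T = V* A V (m x m). No reorthogonalization is performed."""
    n = A.shape[0]
    m = n if m is None else m
    rng = np.random.default_rng(0)  # assumption: fixed seed for the sketch
    if v1 is None:
        v1 = rng.standard_normal(n)
    v = v1 / np.linalg.norm(v1)          # step 1: unit starting vector
    V = np.zeros((n, m), dtype=A.dtype)
    alpha = np.zeros(m)
    beta = np.zeros(m)                   # beta[j] holds beta_{j+1} of the text
    V[:, 0] = v
    w = A @ v                            # abbreviated initial step
    alpha[0] = np.vdot(w, v).real        # alpha_1 = w_1'^* v_1 (real for Hermitian A)
    w = w - alpha[0] * v
    for j in range(1, m):
        b = np.linalg.norm(w)            # beta_j = ||w_{j-1}||
        beta[j] = b
        if b != 0:
            v = w / b
        else:
            # breakdown: pick a unit vector orthogonal to v_1, ..., v_{j-1}
            v = rng.standard_normal(n)
            v -= V[:, :j] @ (V[:, :j].conj().T @ v)
            v /= np.linalg.norm(v)
        V[:, j] = v
        w = A @ v                        # w_j' = A v_j
        alpha[j] = np.vdot(w, v).real    # alpha_j = w_j'^* v_j
        w = w - alpha[j] * v - beta[j] * V[:, j - 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T
```

For small well-separated problems this reproduces <math>T = V^* A V</math> to working precision; for larger runs the columns of <math>V</math> drift from orthogonality, which is exactly the instability the warning refers to.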
 
Not counting the matrix–vector multiplication, each iteration does <math>O(n)</math> arithmetical operations. If <math>d</math> is the average number of nonzero elements in a row of <math>A</math>, then the matrix–vector multiplication can be done in <math>O(dn)</math> arithmetical operations. The total complexity is thus <math>O(dmn)</math>, or <math>O(dn^2)</math> if <math>m=n</math>; the Lanczos algorithm can therefore be very fast for sparse matrices. Schemes for improving numerical stability are typically judged against this high performance.
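The <math>O(dn)</math> cost of the product is easiest to see in a compressed-sparse-row (CSR) product, which spends one multiply–add per stored nonzero. The helper <code>csr_matvec</code> below is illustrative, not from the article:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A in CSR form (data, indices, indptr).

    The double loop touches each stored nonzero exactly once,
    so the cost is O(nnz) = O(dn) multiply-adds."""
    n = len(indptr) - 1
    y = np.zeros(n, dtype=np.result_type(data, x))
    for i in range(n):                          # for each row i
        for k in range(indptr[i], indptr[i + 1]):  # for each nonzero in row i
            y[i] += data[k] * x[indices[k]]
    return y
```

Passing such a routine (or <code>scipy.sparse</code>'s equivalent) as the <math>v \mapsto A v</math> function is how the iteration avoids ever forming the explicit matrix.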
 
The vector <math> w_j' </math> is not used after <math> w_j </math> is computed, and the vector <math> w_j </math> is not used after <math> v_{j+1} </math> is computed. Hence one may use the same storage for all three. Likewise, if only the tridiagonal matrix <math>T</math> is sought, then the raw iteration does not need <math> v_{j-1} </math> after having computed <math> w_j </math>, although some schemes for improving the numerical stability would need it later on.
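The storage observation can be made concrete: when only <math>T</math> is wanted, three <math>n</math>-vectors (<math>v_{j-1}</math>, <math>v_j</math>, and the shared <math>w'/w/v_{j+1}</math> buffer) suffice. A sketch under the assumptions that no breakdown occurs (<math>\beta_j \neq 0</math>) and that <code>matvec</code> is the caller-supplied <math>v \mapsto A v</math> function; the name <code>lanczos_T_only</code> is hypothetical:

```python
import numpy as np

def lanczos_T_only(matvec, v1, m):
    """Lanczos coefficients (alpha_1..alpha_m, beta_2..beta_m) only.

    Keeps three n-vectors alive: v_prev, v, and w, where w plays the
    role of w_j', w_j, and (after scaling) v_{j+1} in turn.
    Assumes no breakdown, i.e. every beta_j is nonzero."""
    alpha = np.zeros(m)
    beta = np.zeros(m)
    v_prev = np.zeros_like(v1)
    v = v1 / np.linalg.norm(v1)
    w = matvec(v)                         # w_1' = A v_1
    alpha[0] = np.vdot(w, v).real
    w -= alpha[0] * v                     # w_1, reusing w_1's storage
    for j in range(1, m):
        beta[j] = np.linalg.norm(w)
        v_prev, v = v, w / beta[j]        # v_{j-1} is released here
        w = matvec(v)                     # w_j' overwrites the old buffer
        alpha[j] = np.vdot(w, v).real
        w -= alpha[j] * v + beta[j] * v_prev  # w_j in place
    return alpha, beta[1:]
```

With <math>m=n</math> the returned coefficients define a <math>T</math> similar to <math>A</math>, so (in exact arithmetic) the two share a spectrum, which is what the eigendecomposition application below exploits.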
[[Special:Contributions/130.243.68.122|130.243.68.122]] ([[User talk:130.243.68.122|talk]]) 12:39, 24 May 2017 (UTC)
 
=== Application for eigendecomposition ===