{{Short description|Concept in numerical linear algebra}}
In [[numerical linear algebra]], an '''incomplete LU factorization''' (abbreviated as '''ILU''') of a [[matrix (mathematics)|matrix]] is a [[sparse matrix|sparse]] approximation of the [[LU factorization]] often used as a [[preconditioner]].
:<math>
G(A) := \left\lbrace (i,j) \in \N^2 : A_{ij} \neq 0 \right\rbrace \,,
</math>
which is used to define the conditions a ''sparsity pattern'' <math> S </math> needs to fulfill:
:<math>
S \subset \left\lbrace 1, \dots , n \right\rbrace^2
</math>
* <math> L,U </math> are zero outside of the sparsity pattern: <math> L_{ij}=U_{ij}=0 \quad \forall \; (i,j) \notin S </math>
* <math> R \in \R^{n \times n} </math> is zero within the sparsity pattern: <math> R_{ij}=0 \quad \forall \; (i,j) \in S </math>
is called an '''incomplete LU decomposition''' (with respect to the sparsity pattern <math> S </math>).
The sparsity pattern of ''L'' and ''U'' is often chosen to be the same as the sparsity pattern of the original matrix ''A''. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of ''L'' and ''U''. This preconditioner is called ILU(0).
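The ILU(0) construction described above can be sketched as follows. This is an illustrative NumPy version operating on dense arrays for clarity (a practical implementation would work directly on sparse storage); the function name `ilu0` is chosen here for illustration:

```python
import numpy as np

def ilu0(A):
    """Incomplete LU factorization with zero fill-in: updates are
    applied only to entries inside the sparsity pattern of A, so no
    new nonzeros (fill-in) are ever created."""
    A = A.astype(float).copy()
    n = A.shape[0]
    pattern = A != 0  # the sparsity pattern G(A), fixed up front

    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:          # eliminate only existing entries
                A[i, k] /= A[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:  # discard updates outside the pattern
                        A[i, j] -= A[i, k] * A[k, j]

    L = np.tril(A, -1) + np.eye(n)  # unit lower triangular factor
    U = np.triu(A)                  # upper triangular factor
    return L, U
```

For a tridiagonal matrix the exact LU factorization produces no fill-in, so in that special case ILU(0) reproduces the complete factorization and the residual <math>R</math> is zero.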
== Stability ==
Concerning the stability of the ILU, the following theorem was proven by Meijerink and van der Vorst.<ref>{{Cite journal|last1=Meijerink|first1=J. A.|last2=van der Vorst|first2=H. A.|date=1977|title=An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix|journal=Mathematics of Computation|volume=31|issue=137|pages=148–162}}</ref>
Let <math> A </math> be an [[M-matrix]], the (complete) LU decomposition given by <math> A=\hat{L} \hat{U} </math>, and the ILU by <math> A=LU-R </math>.
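The guarantee that the factorization exists (i.e. that no zero pivots occur) for an M-matrix can be checked numerically. A minimal sketch, assuming SciPy is available; note that `scipy.sparse.linalg.spilu` computes a threshold-based ILU via SuperLU rather than exactly the factorization of the theorem:

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import spilu

# The 1-D discrete Laplacian: a classic symmetric M-matrix.
n = 50
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))

# For an M-matrix the incomplete factorization proceeds without
# breakdown; the resulting LU approximation is then applied as a
# preconditioner by triangular solves.
ilu = spilu(A, drop_tol=1e-4)
x = ilu.solve(np.ones(n))  # apply M^{-1} = (LU)^{-1} to a vector
```

Since a tridiagonal matrix incurs no fill-in, nothing falls below the drop tolerance here and the factorization is essentially exact.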
More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved.
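The trade-off can be observed directly with SciPy's threshold ILU (a sketch; `drop_tol` and `fill_factor` are SuperLU-specific knobs controlling how aggressively small entries are dropped and how much fill-in is retained). A tighter drop tolerance keeps more nonzeros, costing memory, but yields a more accurate approximation of <math>A^{-1}</math>:

```python
import numpy as np
from scipy.sparse import csc_matrix, diags, eye, kron
from scipy.sparse.linalg import spilu

# 2-D Laplacian on a 30x30 grid; its LU factors fill in substantially,
# so the drop tolerance visibly trades memory against accuracy.
n = 30
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = csc_matrix(kron(eye(n), T) + kron(T, eye(n)))
b = np.ones(A.shape[0])

for drop_tol in (1e-1, 1e-4):
    ilu = spilu(A, drop_tol=drop_tol, fill_factor=20)
    residual = np.linalg.norm(A @ ilu.solve(b) - b)
    print(f"drop_tol={drop_tol:g}  nnz(L+U)={ilu.nnz}  residual={residual:.2e}")
```

The looser tolerance produces far sparser factors but a worse preconditioner, which is exactly the cost/accuracy trade-off described above.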
== See also ==