{{Short description|Concept in numerical linear algebra}}
In [[numerical linear algebra]], an '''incomplete LU factorization''' (abbreviated as '''ILU''') of a [[matrix (mathematics)|matrix]] is a [[sparse matrix|sparse]] approximation of the [[LU factorization]] often used as a [[preconditioner]].
For a typical sparse matrix, the LU factors can be much less sparse than the original matrix — a phenomenon called ''fill-in''.
The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the [[minimum degree algorithm]].
An incomplete factorization instead seeks triangular matrices ''L'', ''U'' such that <math>A \approx LU</math> rather than <math>A = LU</math>. Solving for <math>LUx = b</math> can be done quickly but does not yield the exact solution to <math>Ax = b</math>. So, we instead use the matrix <math>M = LU</math> as a preconditioner in another iterative solution algorithm such as the [[conjugate gradient method]] or [[GMRES]].
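As an illustration of this use as a preconditioner (not part of the article itself), the following sketch applies SciPy's incomplete LU routine <code>spilu</code> inside GMRES. Note that <code>spilu</code> computes a threshold-based ILU rather than the pattern-based variant defined below; the example assumes SciPy is available.

```python
# Sketch: using an incomplete LU factorization M = LU as a preconditioner
# for GMRES, assuming SciPy.  spilu implements a threshold-based ILU.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# 1-D Poisson matrix: tridiagonal with 2 on the diagonal and -1 off it.
n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A)                                 # incomplete LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)   # applies (LU)^{-1} to a vector

x, info = gmres(A, b, M=M)                     # info == 0 signals convergence
```

Since ILU only approximates <math>A^{-1}</math>, it is used to accelerate the iteration rather than to solve the system directly.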
== Definition ==
For a given matrix <math> A \in \R^{n \times n} </math> one defines the graph of its non-zero entries as
:<math>
G(A) := \left\lbrace (i,j) \in \N^2 : A_{ij} \neq 0 \right\rbrace \,,
</math>
which is used to define the conditions a ''sparsity pattern'' <math> S </math> needs to fulfill
:<math>
S \subset \left\lbrace 1, \dots , n \right\rbrace^2
\,, \quad \left\lbrace (i,i) : 1 \leq i \leq n \right\rbrace \subseteq S \,, \quad G(A) \subseteq S \,.
</math>
A decomposition of the form <math> A = LU - R </math> where the following hold
* <math> L </math> is a [[lower triangular matrix]]
* <math> U </math> is an [[upper triangular matrix]]
* <math> L,U </math> are zero outside of the sparsity pattern: <math> L_{ij}=U_{ij}=0 \quad \forall \; (i,j) \notin S </math>
* <math> R \in \R^{n \times n} </math> is zero within the sparsity pattern: <math> R_{ij}=0 \quad \forall \; (i,j) \in S </math>
is called an '''incomplete LU decomposition''' (with respect to the sparsity pattern <math> S </math>).
The sparsity pattern of ''L'' and ''U'' is often chosen to be the same as the sparsity pattern of the original matrix ''A''. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of ''L'' and ''U''. This preconditioner is called ILU(0).
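As an illustration (not part of the article's formal definition), ILU(0) can be sketched in a few lines. The function name <code>ilu0</code> and the dense-matrix representation are for exposition only; a practical implementation would work on a sparse storage format.

```python
# Minimal dense sketch of ILU(0): Gaussian elimination in which every
# update outside the sparsity pattern of A is simply dropped.
import numpy as np

def ilu0(A):
    """Return (L, U), zero outside the pattern of A, such that
    L @ U agrees with A at every position where A is non-zero."""
    A = A.astype(float).copy()
    n = A.shape[0]
    S = A != 0                       # sparsity pattern of the original matrix
    for i in range(1, n):
        for k in range(i):
            if S[i, k]:
                A[i, k] /= A[k, k]                   # multiplier L[i, k]
                for j in range(k + 1, n):
                    if S[i, j]:                      # keep only in-pattern updates
                        A[i, j] -= A[i, k] * A[k, j]
    L = np.tril(A, -1) + np.eye(n)   # unit lower triangular factor
    U = np.triu(A)
    return L, U

# Example: Laplacian of a 2x2 grid, where elimination would create fill-in.
A = np.array([[4., -1, -1,  0],
              [-1,  4,  0, -1],
              [-1,  0,  4, -1],
              [ 0, -1, -1,  4]])
L, U = ilu0(A)
# L @ U matches A wherever A is non-zero; the residual R = L @ U - A
# lives only outside the pattern S, as the definition above requires.
```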
== Stability ==
Concerning the stability of the ILU, the following theorem was proven by Meijerink and van der Vorst.<ref>{{Cite journal|last=Meijerink|first=J. A.|last2=van der Vorst|first2=H. A.|date=1977|title=An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix|journal=Mathematics of Computation|language=en-US|volume=31|issue=137|pages=148–162|doi=10.1090/S0025-5718-1977-0438681-4|issn=0025-5718|doi-access=free}}</ref>
Let <math> A </math> be an [[M-matrix]], the (complete) LU decomposition given by <math> A=\hat{L} \hat{U} </math>, and the ILU by <math> A=LU-R </math>.
Then
:<math>
|L_{ij}| \leq |\hat{L}_{ij}|
\quad \forall \; i,j
</math>
holds.
Thus, the ILU is at least as stable as the (complete) LU decomposition.
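The bound can be checked numerically (an illustration, not a proof; the helper <code>ilu</code> below is hypothetical and performs Doolittle elimination restricted to a given pattern, recovering the complete LU decomposition when the full pattern is supplied):

```python
# Numerical illustration of the Meijerink / van der Vorst bound on an
# M-matrix: the ILU(0) factor L is entrywise no larger than the L-factor
# of the complete (pivot-free) LU decomposition.
import numpy as np

def ilu(A, S):
    """Doolittle elimination, dropping all updates outside the pattern S."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for i in range(1, n):
        for k in range(i):
            if S[i, k]:
                A[i, k] /= A[k, k]
                for j in range(k + 1, n):
                    if S[i, j]:
                        A[i, j] -= A[i, k] * A[k, j]
    return np.tril(A, -1) + np.eye(n), np.triu(A)

# 2-D Laplacian on a 3x3 grid: a classic symmetric M-matrix with fill-in.
I = np.eye(3)
T = 4 * I - np.eye(3, k=1) - np.eye(3, k=-1)
A = np.kron(I, T) - np.kron(np.eye(3, k=1) + np.eye(3, k=-1), I)

L, U = ilu(A, A != 0)                          # incomplete: pattern of A
Lh, Uh = ilu(A, np.ones_like(A, dtype=bool))   # complete: full pattern
bound_holds = np.all(np.abs(L) <= np.abs(Lh) + 1e-12)
```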
== Generalizations ==
More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved.
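This trade-off can be observed directly with SciPy's threshold-based ILU, <code>spilu</code> (an illustration using SciPy's interface, not a construct from the article): tightening the drop tolerance produces more accurate factors at the cost of storing more non-zeros.

```python
# Sketch of the accuracy/memory trade-off in ILU, assuming SciPy.
# spilu implements a threshold-based ILU (ILUTP via SuperLU).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

# 2-D Laplacian on a 10x10 grid.
n = 10
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
C = sp.diags([-1, -1], [-1, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(C, I)).tocsc()

cheap = spilu(A, drop_tol=1e-1, fill_factor=2)    # sparse, rough factors
rich  = spilu(A, drop_tol=1e-12, fill_factor=50)  # dense, accurate factors

nnz_cheap = cheap.L.nnz + cheap.U.nnz
nnz_rich  = rich.L.nnz + rich.U.nnz
# The more accurate factorization stores (many) more non-zeros.
```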
== See also ==
* [[Incomplete Cholesky factorization]]
== References ==