{{Short description|Approximation method}}
In [[numerical mathematics]], '''hierarchical matrices (H-matrices)'''<ref name="HA99">{{cite journal|last=Hackbusch|first=Wolfgang|date=1999|title=A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices|journal=Computing|volume=62|issue=2|pages=89–108|doi=10.1007/s006070050015|s2cid=24294140}}</ref><ref name="GRHA02">{{cite journal|last=Grasedyck|first=Lars|last2=Hackbusch|first2=Wolfgang|date=2003|title=Construction and arithmetics of H-matrices|journal=Computing|volume=70|issue=4|pages=295–334|doi=10.1007/s00607-003-0019-1}}</ref><ref name="HA09">{{cite book|last=Hackbusch|first=Wolfgang|date=2015|title=Hierarchical matrices: Algorithms and Analysis|volume=49|publisher=Springer|doi=10.1007/978-3-662-47324-5|series=Springer Series in Computational Mathematics|isbn=978-3-662-47323-8}}</ref>
are used as data-sparse approximations of non-sparse matrices. While a [[sparse matrix]] of dimension <math>n</math> can be represented efficiently in <math>O(n)</math> units of storage by storing only its non-zero entries, a non-sparse matrix requires <math>O(n^2)</math> units of storage, so using this type of matrix for large problems would be prohibitively expensive in terms of storage and computing time. Hierarchical matrices provide an approximation requiring only <math>O(n k\,\log(n))</math> units of storage, where <math>k</math> is a parameter controlling the accuracy of the approximation. In typical applications, e.g., when discretizing integral equations,<ref name="MB08">{{cite book|last=Bebendorf|first=Mario|date=2008|title=Hierarchical matrices: A means to efficiently solve elliptic boundary value problems|publisher=Springer}}</ref><ref name="HAKH00">{{cite journal|last=Hackbusch|first=Wolfgang|last2=Khoromskij|first2=Boris N.|date=2000|title=A sparse H-Matrix Arithmetic. Part II: Application to Multi-Dimensional Problems|journal=Computing|volume=64|pages=21–47}}</ref><ref name="MB00">{{cite journal|last=Bebendorf|first=Mario|title=Approximation of boundary element matrices|date=2000|journal=Numer. Math.|volume=86|issue=4|pages=565–589|doi=10.1007/pl00005410}}</ref><ref name="BERJ03">{{cite journal|last=Bebendorf|first=Mario|last2=Rjasanow|first2=Sergej|date=2003|title=Adaptive low-rank approximation of collocation matrices|journal=Computing|volume=70|pages=1–24|doi=10.1007/s00607-002-1469-6|citeseerx=10.1.1.133.182}}</ref><ref name="BOGR05">{{cite journal|last=Börm|first=Steffen|last2=Grasedyck|first2=Lars|date=2005|title=Hybrid cross approximation of integral operators|journal=Numer. Math.|volume=101|issue=2|pages=221–249|doi=10.1007/s00211-005-0618-1|citeseerx=10.1.1.330.8950}}</ref><ref name="BOCH16">{{cite journal|last=Börm|first=Steffen|last2=Christophersen|first2=Sven|date=2016|title=Approximation of integral operators by Green quadrature and nested cross approximation|journal=Numer. Math.|volume=133|issue=3|pages=409–442|doi=10.1007/s00211-015-0757-y|arxiv=1404.2234}}</ref>
preconditioning the resulting systems of linear equations,<ref name="FAMEPR16">{{cite journal|last=Faustmann|first=Markus|last2=Melenk|first2=J. Markus|last3=Praetorius|first3=Dirk|date=2016|title=Existence of H-matrix approximants to the inverses of BEM matrices: The simple-layer operator|journal=Math. Comp.|volume=85|issue=297|pages=119–152|doi=10.1090/mcom/2990|arxiv=1311.5028}}</ref>
or solving elliptic partial differential equations,<ref name="BEHA03">{{cite journal|last=Bebendorf|first=Mario|last2=Hackbusch|first2=Wolfgang|date=2003|title=Existence of H-matrix approximants to the inverse FE-matrix of elliptic operators with <math>L^\infty</math>-coefficients|journal=Numer. Math.|volume=95|pages=1–28|doi=10.1007/s00211-002-0445-6}}</ref><ref name="BO10">{{cite journal|last=Börm|first=Steffen|date=2010|title=Approximation of solution operators of elliptic partial differential equations by H- and H<sup>2</sup>-matrices|journal=Numer. Math.|volume=115|issue=2|pages=165–193|doi=10.1007/s00211-009-0278-7}}</ref><ref name="FAMEPR13">{{cite journal|last=Faustmann|first=Markus|last2=Melenk|first2=J. Markus|last3=Praetorius|first3=Dirk|date=2015|title=H-matrix approximability of the inverses of FEM matrices|journal=Numer. Math.|volume=131|issue=4|pages=615–642|doi=10.1007/s00211-015-0706-9|arxiv=1308.0499}}</ref><ref name="SWX16">{{cite journal|last=Shen|first=Jie|last2=Wang|first2=Yingwei|last3=Xia|first3=Jianlin|date=2016|title=Fast structured direct spectral methods for differential equations with variable coefficients|journal=SIAM Journal on Scientific Computing|volume=38|issue=1|pages=A28–A54|doi=10.1137/140986815}}</ref> a rank proportional to <math>\log(1/\epsilon)^\gamma</math> with a small constant <math>\gamma</math> is sufficient to ensure an accuracy of <math>\epsilon</math>. Compared to many other data-sparse representations of non-sparse matrices, hierarchical matrices offer a major advantage: the results of matrix arithmetic operations like matrix multiplication, factorization or inversion can be approximated in <math>O(n k^\alpha\,\log(n)^\beta)</math> operations, where <math>\alpha,\beta\in\{1,2,3\}.</math><ref name="GRHA02"/>
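The storage savings of the factorized low-rank representation underlying hierarchical matrices can be illustrated by a short count (a sketch with illustrative sizes, not taken from the references): an admissible block of size <math>m\times m</math> stored as <math>U V^T</math> with rank <math>k</math> needs <math>k(m+m)</math> numbers instead of <math>m^2</math>.

```python
import numpy as np  # only used for the array types in later examples

# Illustrative sizes (assumptions, not from the article):
n = 10_000   # matrix dimension
k = 8        # local rank controlling the accuracy

# A dense n-by-n matrix stores n^2 entries ...
dense_units = n * n

# ... while a factorized block A ≈ U @ V.T of the same size with
# rank-k factors U, V needs only k*(n + n) entries; summed over the
# block tree this yields the O(n*k*log(n)) storage bound.
factorized_units = k * (n + n)

print(dense_units // factorized_units)  # → 625
```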
== Basic idea ==
would also allow us to split the double integral into two single integrals and thus arrive at a similar factorized low-rank matrix.
Of particular interest are cross approximation techniques<ref name="MB00"/><ref name="BERJ03"/><ref name="TY00">{{cite journal|last=Tyrtyshnikov|first=Eugene|date=2000|title=Incomplete cross approximation in the mosaic-skeleton method|journal=Computing|volume=64|issue=4|pages=367–380|doi=10.1007/s006070070031|citeseerx=10.1.1.100.6153|s2cid=15850058 }}</ref>
that use only the entries of the original matrix <math>G</math> to construct a [[low rank approximation|low-rank approximation]].
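As an illustration, a fully pivoted variant of cross approximation can be sketched in a few lines. This is a simplified sketch: it inspects all entries of the residual to find pivots, whereas the partially pivoted adaptive cross approximation of the cited references evaluates only <math>O(k)</math> rows and columns of <math>G</math>.

```python
import numpy as np

def cross_approximation(G, rank):
    """Greedy (fully pivoted) cross approximation: build a rank-`rank`
    factorization G ≈ U @ V.T from selected rows and columns of G."""
    R = np.array(G, dtype=float)         # residual, updated in place
    U = np.zeros((G.shape[0], 0))
    V = np.zeros((G.shape[1], 0))
    for _ in range(rank):
        # pivot = entry of largest modulus in the current residual
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if R[i, j] == 0.0:
            break                        # exact rank reached early
        u = R[:, i] if False else R[:, j] / R[i, j]   # scaled pivot column
        v = R[i, :].copy()               # pivot row
        R -= np.outer(u, v)              # subtract the rank-1 cross
        U = np.column_stack([U, u])
        V = np.column_stack([V, v])
    return U, V

# Smooth kernel on a grid: numerically of low rank.
x = np.linspace(1.0, 2.0, 50)
G = 1.0 / (x[:, None] + x[None, :])
U, V = cross_approximation(G, rank=6)
err = np.linalg.norm(G - U @ V.T) / np.linalg.norm(G)
```

For such an asymptotically smooth kernel the relative error `err` decays rapidly with the rank, consistent with the <math>\log(1/\epsilon)^\gamma</math> behaviour described above.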
<math>O(n k^2\,\log(n)^2)</math> operations.
Taking advantage of the block structure, the inverse can be computed by using recursion to compute inverses and [[Schur complement]]s of diagonal blocks and combining both using the matrix-matrix multiplication. In a similar way, the [[LU decomposition]]<ref name="BE07">{{cite journal|last=Bebendorf|first=Mario|date=2007|title=Why finite element discretizations can be factored by triangular hierarchical matrices|journal=SIAM J. Numer. Anal.|volume=45|issue=4|pages=1472–1494|doi=10.1137/060669747}}</ref>
can be constructed using only recursion and multiplication.
Both operations also require <math>O(n k^2\,\log(n)^2)</math> operations.
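The recursive pattern can be sketched on a single 2×2 block partition. In this simplified dense sketch NumPy blocks stand in for the hierarchical submatrices; a genuine H-matrix implementation would additionally recompress each product back to low rank.

```python
import numpy as np

def block_invert(A, min_size=2):
    """Invert A by recursion on a 2x2 block partition, using the
    Schur complement S = A22 - A21 A11^{-1} A12 (dense sketch)."""
    n = A.shape[0]
    if n <= min_size:
        return np.linalg.inv(A)          # small blocks: direct inversion
    m = n // 2
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    A11inv = block_invert(A11, min_size)
    S = A22 - A21 @ A11inv @ A12         # Schur complement of A11
    Sinv = block_invert(S, min_size)
    # standard block-inverse formulas
    B12 = -A11inv @ A12 @ Sinv
    B21 = -Sinv @ A21 @ A11inv
    B11 = A11inv + A11inv @ A12 @ Sinv @ A21 @ A11inv
    return np.block([[B11, B12], [B21, Sinv]])

# Diagonally dominant test matrix, so all diagonal blocks are invertible.
A = np.eye(8) + 0.1 * np.arange(64).reshape(8, 8) / 64
Ainv = block_invert(A)
```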
== H<sup>2</sup>-matrices ==
In order to treat very large problems, the structure of hierarchical matrices can be improved:
H<sup>2</sup>-matrices
replace the general low-rank structure of the blocks by a hierarchical representation closely related to the [[fast multipole method]] in order to reduce the storage complexity to <math>O(n k)</math>.
In the context of boundary integral operators, replacing the fixed rank <math>k</math> by block-dependent ranks leads to approximations that preserve the rate of convergence of the underlying boundary element method at a complexity of <math>O(n).</math><ref name="SA00">{{cite journal|last=Sauter|first=Stefan|date=2000|title=Variable order panel clustering|journal=Computing|volume=64|issue=3|pages=223–261|doi=10.1007/s006070050045|s2cid=36813444}}</ref>
Arithmetic operations like multiplication, inversion, and Cholesky or LU factorization of H<sup>2</sup>-matrices
can be implemented based on two fundamental operations: the matrix-vector multiplication with submatrices
and the low-rank update of submatrices. While the matrix-vector multiplication is straightforward, implementing efficient low-rank updates with adaptively optimized cluster bases poses a significant challenge.
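The two fundamental operations can be sketched for a single block in factorized form (a simplified dense sketch with hypothetical helper names; real H<sup>2</sup>-implementations work recursively on the block tree and on nested cluster bases):

```python
import numpy as np

def lowrank_matvec(U, V, x):
    """y = (U V^T) x in O(k(m+n)) instead of O(mn) operations."""
    return U @ (V.T @ x)

def truncated_update(U, V, W, Z, k):
    """Rank-truncated update: best rank-k approximation of
    U V^T + W Z^T, via QR of the stacked factors and a small SVD."""
    Q1, R1 = np.linalg.qr(np.column_stack([U, W]))
    Q2, R2 = np.linalg.qr(np.column_stack([V, Z]))
    P, s, Qt = np.linalg.svd(R1 @ R2.T)          # small dense SVD
    return (Q1 @ P[:, :k]) * s[:k], Q2 @ Qt[:k, :].T

# Demonstration on random factors (illustrative sizes):
rng = np.random.default_rng(42)
U = rng.standard_normal((30, 3)); V = rng.standard_normal((25, 3))
W = rng.standard_normal((30, 3)); Z = rng.standard_normal((25, 3))
x = rng.standard_normal(25)

y = lowrank_matvec(U, V, x)                  # fast block mat-vec
U2, V2 = truncated_update(U, V, W, Z, k=6)   # k=6 keeps the sum exactly
```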
== Literature ==
[https://github.com/JuliaMatrices/HierarchicalMatrices.jl HierarchicalMatrices.jl] is a Julia package implementing hierarchical matrices.
[[Category:Matrices (mathematics)]]