In [[numerical analysis]] and [[scientific computing]], a '''sparse matrix''' or '''sparse array''' is a [[matrix (mathematics)|matrix]] in which most of the elements are zero.<ref name="Yan Wu Liu Gao 2017 p. ">{{cite conference | last1=Yan | first1=Di | last2=Wu | first2=Tao | last3=Liu | first3=Ying | last4=Gao | first4=Yang | title=2017 IEEE 17th International Conference on Communication Technology (ICCT) | chapter=An efficient sparse-dense matrix multiplication on a multicore system | publisher=IEEE | year=2017 | pages=1880–3 | isbn=978-1-5090-3944-9 | doi=10.1109/icct.2017.8359956 | quote=The computation kernel of DNN is large sparse-dense matrix multiplication. In the field of numerical analysis, a sparse matrix is a matrix populated primarily with zeros as elements of the table. By contrast, if the number of non-zero elements in a matrix is relatively large, then it is commonly considered a dense matrix. The fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density). Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices. }}</ref> There is no strict definition regarding the proportion of zero-valued elements for a matrix to qualify as '''sparse''', but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered '''dense'''.<ref name="Yan Wu Liu Gao 2017 p. "/> The number of zero-valued elements divided by the total number of elements (e.g., ''m'' × ''n'' for an ''m'' × ''n'' matrix) is sometimes referred to as the '''sparsity''' of the matrix.
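For example (a minimal sketch using [[NumPy]]; the matrix entries here are arbitrary), the sparsity of a small matrix can be computed as the fraction of zero-valued entries:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[5, 0, 0, 0],
              [0, 8, 0, 0],
              [0, 0, 3, 0],
              [0, 6, 0, 0]])

# sparsity = number of zero-valued entries / total number of entries
sparsity = 1.0 - np.count_nonzero(A) / A.size
print(sparsity)  # 0.75: 12 of the 16 entries are zero
</syntaxhighlight>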
Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in [[combinatorics]] and application areas such as [[network theory]] and [[numerical analysis]], which typically have a low density of significant data or connections. Large sparse matrices often appear in [[scientific]] or [[engineering]] applications when solving [[partial differential equation]]s.
When storing and manipulating sparse matrices on a [[computer]], it is beneficial and often necessary to use specialized [[algorithm]]s and [[data structure]]s that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,<ref>{{Cite web|url=https://www.businesswire.com/news/home/20190819005148/en/Cerebras-Systems-Unveils-Industry%E2%80%99s-Trillion-Transistor-Chip|title=Cerebras Systems Unveils the Industry's First Trillion Transistor Chip| quote=The WSE contains 400,000 AI-optimized compute cores. Called SLAC™ for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation|date=2019-08-19 |website=www.businesswire.com|language=en|access-date=2019-12-02}}</ref> as they are common in the machine learning field.<ref>{{Cite press release|url=https://www.anl.gov/article/argonne-national-laboratory-deploys-cerebras-cs1-the-worlds-fastest-artificial-intelligence-computer|title=Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer {{!}} Argonne National Laboratory|quote=The WSE is the largest chip ever made at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI optimized compute cores, 3,000 times more high speed, on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.| website=www.anl.gov|language=en|access-date=2019-12-02}}</ref> Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and [[Computer memory|memory]] are wasted on the zeros. Sparse data is by nature more easily [[data compression|compressed]] and thus requires significantly less [[computer data storage|storage]]. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
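As a rough illustration of the storage savings (a minimal sketch using [[SciPy]]'s <code>scipy.sparse</code> module, with an arbitrary example matrix), a large matrix with few nonzeros can be converted to a compressed sparse row (CSR) structure that stores only the nonzero values and their positions:

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import csr_matrix

dense = np.eye(1000)        # 1000 × 1000 identity matrix: one million stored entries
sparse = csr_matrix(dense)  # CSR keeps only the 1000 nonzero entries plus index arrays

print(dense.nbytes)  # 8,000,000 bytes for the dense float64 array
print(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes)
# roughly 16 kB: nonzero values, column indices, and row pointers
</syntaxhighlight>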
===Banded===
{{main article|Band matrix}}
Formally, a band matrix has the form
<math display="block">\begin{bmatrix}
X & X & X & \cdot & \cdot & \cdot & \cdot \\
X & X & X & X & \cdot & \cdot & \cdot \\
\cdot & X & X & X & X & \cdot & \cdot \\
\cdot & \cdot & X & X & X & X & \cdot \\
\cdot & \cdot & \cdot & X & X & X & X \\
\cdot & \cdot & \cdot & \cdot & X & X & X \\
\cdot & \cdot & \cdot & \cdot & \cdot & X & X \\
\end{bmatrix}</math>
where the entries marked ''X'' may be nonzero and the dots denote zero entries.
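Because the nonzero entries of a band matrix are confined to a few diagonals, it suffices to store those diagonals alone. As an illustration (a sketch using SciPy's <code>diags</code> constructor; the values are arbitrary), a tridiagonal matrix, with lower and upper bandwidth 1, can be built from three one-dimensional arrays:

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import diags

n = 7
# Store only the three bands: subdiagonal, main diagonal, superdiagonal.
A = diags([np.full(n - 1, -1.0),   # offset -1
           np.full(n, 2.0),        # offset  0
           np.full(n - 1, -1.0)],  # offset +1
          offsets=[-1, 0, 1])
print(A.toarray())
</syntaxhighlight>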
====Diagonal====
A very efficient structure for an extreme case of band matrices, the [[diagonal matrix]], is to store just the entries in the [[main diagonal]] as a [[one-dimensional array]], so a diagonal ''n'' × ''n'' matrix requires only ''n'' entries.
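A sketch of this idea (assuming nothing beyond [[NumPy]]; the values are arbitrary): the diagonal is kept as a one-dimensional array, and a matrix–vector product never forms the full matrix:

<syntaxhighlight lang="python">
import numpy as np

d = np.array([4.0, 1.0, 3.0])  # the n diagonal entries of diag(d)
x = np.array([1.0, 2.0, 3.0])

# Multiplying by the diagonal matrix is an elementwise product,
# equivalent to np.diag(d) @ x but without building the n × n matrix.
y = d * x
print(y)  # [4. 2. 9.]
</syntaxhighlight>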
===Symmetric===
* [[ARPACK]], a Fortran 77 library for computing a few eigenvalues and eigenvectors of large sparse matrices, using the implicitly restarted Arnoldi method
* [[SLEPc]], a library for the solution of large-scale eigenvalue problems with sparse matrices
* [[scikit-learn]], a Python library for [[machine learning]], which provides support for sparse matrices and solvers
* [https://docs.julialang.org/en/v1/stdlib/SparseArrays/ SparseArrays], a [[Julia (programming language)|Julia]] standard library for sparse vectors and matrices
* [[PSBLAS]], a software toolkit for solving sparse linear systems, supporting multiple storage formats and GPU execution
==History==
* {{Cite book |last=Pissanetzky|first= Sergio|year= 1984|title=Sparse Matrix Technology|url=https://archive.org/details/sparsematrixtech0000piss|url-access=registration|publisher= Academic Press |isbn=978-0-12-557580-5 |oclc=680489638 }}
*{{cite journal|doi=10.1007/BF02521587|title=Reducing the profile of sparse symmetric matrices|year=1976|last1=Snay|first1=Richard A.|journal=[[Bulletin Géodésique]]|volume=50|issue=4|pages=341–352|bibcode=1976BGeod..50..341S|hdl=2027/uc1.31210024848523|s2cid=123079384|hdl-access=free}} Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD. Referencing {{harvnb|Saad|2003}}.
{{refend}}
==Further reading==
{{refbegin}}
* {{cite journal | title = A comparison of several bandwidth and profile reduction algorithms | journal = ACM Transactions on Mathematical Software | year = 1976 | volume = 2 | issue = 4 | pages = 322–330 | url = http://portal.acm.org/citation.cfm?id=355707 | doi = 10.1145/355705.355707 | last1 = Gibbs | first1 = Norman E. | last2 = Poole | first2 = William G. | last3 = Stockmeyer | first3 = Paul K. | s2cid = 14494429 | url-access = subscription }}
* {{cite journal | title = Sparse matrices in MATLAB: Design and Implementation | journal = SIAM Journal on Matrix Analysis and Applications | year = 1992 | volume = 13 | issue = 1 | pages = 333–356 | url = http://citeseer.ist.psu.edu/gilbert91sparse.html | doi = 10.1137/0613024 | last1 = Gilbert | first1 = John R. | last2 = Moler | first2 = Cleve | last3 = Schreiber | first3 = Robert | citeseerx = 10.1.1.470.1054 }}
* [http://faculty.cse.tamu.edu/davis/research.html Sparse Matrix Algorithms Research] at Texas A&M University
* [https://sparse.tamu.edu/ SuiteSparse Matrix Collection]
* [http://www.small-project.eu SMALL project] An EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.
* {{cite book |first=Wolfgang |last=Hackbusch |title=Iterative Solution of Large Sparse Systems of Equations |series=Applied Mathematical Sciences |publisher=Springer |date=2016 |volume=95 |doi=10.1007/978-3-319-28483-5 |edition=2nd|isbn=978-3-319-28481-1 }}
* {{cite book |first=Yousef |last=Saad |title=Iterative Methods for Sparse Linear Systems |publisher=SIAM |date=2003 |isbn=978-0-89871-534-7 |doi=10.1137/1.9780898718003 |oclc=693784152 }}
* {{cite book |first=Timothy A. |last=Davis |title=Direct Methods for Sparse Linear Systems |publisher=SIAM |date=2006 |isbn=978-0-89871-613-9 |doi=10.1137/1.9780898718881 |oclc=694087302 }}