{{Use American English|date = March 2019}}
{{Short description|Matrix whose only nonzero elements are on its main diagonal}}
{{More footnotes needed|date=June 2025}}
In [[linear algebra]], a '''diagonal matrix''' is a [[matrix (mathematics)|matrix]] in which the entries outside the [[main diagonal]] are all zero; the term usually refers to [[square matrices]]. Elements of the main diagonal can either be zero or nonzero. An example of a 2-by-2 diagonal matrix is <math>\left[\begin{smallmatrix}
3 & 0 \\
0 & 2 \end{smallmatrix}\right]</math>, while an example of a 3-by-3 diagonal matrix is <math>\left[\begin{smallmatrix}
6 & 0 & 0 \\
0 & 5 & 0 \\
0 & 0 & 4
\end{smallmatrix}\right]</math>. An [[identity matrix]] of any size, or any multiple of it, is a diagonal matrix called a ''scalar matrix'', for example, <math>\left[\begin{smallmatrix}
0.5 & 0 \\
0 & 0.5 \end{smallmatrix}\right]</math>.
==Definition==
As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, an ''n''-by-''n'' matrix {{math|'''D''' {{=}} (''d''<sub>''i'',''j''</sub>)}} is diagonal if
<math display="block">\forall i,j \in \{1, 2, \ldots, n\}, i \ne j \implies d_{i,j} = 0.</math>
However, the main diagonal entries are unrestricted.
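As an illustration only (not part of the definition), this condition can be checked numerically with [[NumPy]]; the helper name <code>is_diagonal</code> below is illustrative, not a library routine:
<syntaxhighlight lang="python">
import numpy as np

def is_diagonal(matrix: np.ndarray) -> bool:
    """Return True if every off-diagonal entry of a square matrix is zero."""
    # Subtract the diagonal part; whatever remains must vanish entirely.
    off_diagonal = matrix - np.diag(np.diag(matrix))
    return not np.any(off_diagonal)

print(is_diagonal(np.array([[3, 0], [0, 2]])))  # True
print(is_diagonal(np.array([[3, 1], [0, 2]])))  # False
</syntaxhighlight>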
The term ''diagonal matrix'' may sometimes refer to a '''{{visible anchor|rectangular diagonal matrix}}''', which is an ''m''-by-''n'' matrix with all the entries not of the form ''d''<sub>''i'',''i''</sub> being zero. For example:
<math display="block">\begin{bmatrix}
1 & 0 & 0\\
0 & 4 & 0\\
0 & 0 & -3\\
0 & 0 & 0\\
\end{bmatrix}
\quad \text{or} \quad
\begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
0 & 4 & 0 & 0 & 0\\
0 & 0 & -3 & 0 & 0
\end{bmatrix}</math>
More often, however, ''diagonal matrix'' refers to square matrices, which can be specified explicitly as a '''{{visible anchor|square diagonal matrix}}'''. A square diagonal matrix is a [[symmetric matrix]], so this can also be called a '''{{visible anchor|symmetric diagonal matrix}}'''.
The following matrix is a square diagonal matrix:
<math display="block">\begin{bmatrix}
1 & 0 & 0\\
0 & 4 & 0\\
0 & 0 & -2
\end{bmatrix}</math>
If the entries are [[real numbers]] or [[complex numbers]], then it is a [[normal matrix]] as well.
In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
==Vector-to-matrix diag operator==
A diagonal matrix {{math|'''D'''}} can be constructed from a vector <math>\mathbf{a} = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T}</math> using the <math>\operatorname{diag}</math> operator:
<math display="block">
\mathbf{D} = \operatorname{diag}(a_1, \dots, a_n).
</math>
This may be written more compactly as <math>\mathbf{D} = \operatorname{diag}(\mathbf{a})</math>.
The same operator is also used to represent [[Block matrix#Block diagonal matrices|block diagonal matrices]] as <math>\mathbf{A} = \operatorname{diag}(\mathbf A_1, \dots, \mathbf A_n)</math> where each argument {{math|'''A'''{{sub|''i''}}}} is a matrix.
The {{math|diag}} operator may be written as
<math display="block">
\operatorname{diag}(\mathbf{a}) = \left(\mathbf{a} \mathbf{1}^\textsf{T}\right) \circ \mathbf{I},
</math>
where <math>\circ</math> represents the [[Hadamard product (matrices)|Hadamard product]], and {{math|'''1'''}} is a constant vector with elements 1.
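In NumPy, for example, the vector-to-matrix operator corresponds to <code>numpy.diag</code>, and the Hadamard-product identity above can be verified directly (an illustrative sketch):
<syntaxhighlight lang="python">
import numpy as np

a = np.array([4.0, 5.0, 6.0])
D = np.diag(a)                                   # diag(a): a on the main diagonal, zeros elsewhere

# diag(a) = (a 1^T) ∘ I, with ∘ the entrywise (Hadamard) product
outer = a.reshape(-1, 1) @ np.ones((1, a.size))  # a 1^T: every column is a copy of a
D_alt = outer * np.eye(a.size)                   # Hadamard product with the identity

print(np.allclose(D, D_alt))                     # True
</syntaxhighlight>
For block diagonal matrices, <code>scipy.linalg.block_diag</code> plays the analogous role.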
==Matrix-to-vector diag operator==
The inverse matrix-to-vector {{math|diag}} operator is sometimes denoted by the identically named <math>\operatorname{diag}(\mathbf{D}) = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T},</math> where the argument is now a matrix, and the result is a vector of its diagonal entries.
The following property holds:
<math display="block">
\operatorname{diag}(\mathbf{A}\mathbf{B}) = \sum_j \left(\mathbf{A} \circ \mathbf{B}^\textsf{T}\right)_{ij} = \left( \mathbf{A} \circ \mathbf{B}^\textsf{T} \right) \mathbf{1}.
</math>
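This identity can likewise be checked numerically (an illustrative sketch; the matrices are arbitrary):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.diag(A @ B)          # matrix-to-vector diag: the main diagonal of AB
rhs = (A * B.T) @ np.ones(4)  # row sums of the Hadamard product A ∘ B^T

print(np.allclose(lhs, rhs))  # True
</syntaxhighlight>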
== Scalar matrix ==
<!-- Linked from [[Scalar matrix]] and [[Scalar transformation]] -->
A diagonal matrix with equal diagonal entries is a '''scalar matrix'''; that is, a scalar multiple ''λ'' of the [[identity matrix]] {{math|'''I'''}}. Its effect on a vector is [[scalar multiplication]] by ''λ''. For example, a 3×3 scalar matrix has the form:
<math display="block">
\begin{bmatrix}
\lambda & 0 & 0 \\
0 & \lambda & 0 \\
0 & 0 & \lambda
\end{bmatrix} = \lambda \mathbf{I}_3.
</math>
The scalar matrices are the [[center of an algebra|center]] of the algebra of matrices: that is, they are precisely the matrices that [[commute (mathematics)|commute]] with all other square matrices of the same size.{{efn|Proof: given the [[elementary matrix]] <math>e_{ij}</math>, <math>Me_{ij}</math> is the matrix whose only nonzero column is the ''i''-th column of ''M'', placed in the ''j''-th position, and <math>e_{ij}M</math> is the matrix whose only nonzero row is the ''j''-th row of ''M'', placed in the ''i''-th position; equating <math>Me_{ij} = e_{ij}M</math> forces the off-diagonal entries of ''M'' to be zero and the ''i''-th diagonal entry to equal the ''j''-th diagonal entry.}} By contrast, over a [[field (mathematics)|field]] (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its [[centralizer]] is the set of diagonal matrices).
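The commuting property can be illustrated numerically, for example with NumPy (an illustrative sketch; the matrices below are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))      # an arbitrary 3-by-3 matrix

S = 2.5 * np.eye(3)                  # a scalar matrix (lambda = 2.5)
D = np.diag([1.0, 2.0, 3.0])         # a diagonal matrix with distinct entries

print(np.allclose(S @ M, M @ S))     # True: a scalar matrix commutes with every matrix
print(np.allclose(D @ M, M @ D))     # False: distinct diagonal entries commute only with diagonal matrices
</syntaxhighlight>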
For an abstract vector space ''V'' (rather than the concrete vector space <math>K^n</math>), the analogue of a scalar matrix is a '''scalar transformation''', a scalar multiple of the identity map; the scalar transformations form the center of the [[endomorphism ring|endomorphism algebra]] of ''V''.
== Vector operations ==
Multiplying a vector by a diagonal matrix multiplies each of its entries by the corresponding diagonal entry. Given a diagonal matrix <math>\mathbf{D} = \operatorname{diag}(a_1, \dots, a_n)</math> and a vector <math>\mathbf{v} = \begin{bmatrix} x_1 & \dots & x_n \end{bmatrix}^\textsf{T}</math>, the product is:
<math display="block">
\mathbf{D}\mathbf{v} =
\begin{bmatrix}
a_1 & & \\
& \ddots & \\
& & a_n
\end{bmatrix}
\begin{bmatrix}
x_1 \\ \vdots \\ x_n
\end{bmatrix} =
\begin{bmatrix}
a_1 x_1 \\ \vdots \\ a_n x_n
\end{bmatrix}.
</math>
This can be expressed more compactly by using a vector instead of a diagonal matrix, <math>\mathbf{d} = \begin{bmatrix} a_1 & \dots & a_n \end{bmatrix}^\textsf{T}</math>, and taking the [[Hadamard product (matrices)|Hadamard product]] of the vectors (entrywise product), denoted <math>\mathbf{d} \circ \mathbf{v}</math>:
<math display="block">
\mathbf{d} \circ \mathbf{v} =
\begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \circ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} =
\begin{bmatrix} a_1 x_1 \\ \vdots \\ a_n x_n \end{bmatrix}.
</math>
This is mathematically equivalent, but avoids storing all the zero terms of this [[sparse matrix]]. This product is thus used in [[machine learning]], such as computing products of derivatives in [[backpropagation]] or multiplying IDF weights in [[TF-IDF]].<ref>{{cite book |last=Sahami |first=Mehran |date=2009-06-15 |title=Text Mining: Classification, Clustering, and Applications}}</ref>
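In NumPy, for instance, the two formulations give the same result, while the Hadamard form never materializes the off-diagonal zeros (an illustrative sketch):
<syntaxhighlight lang="python">
import numpy as np

d = np.array([2.0, 3.0, 4.0])       # diagonal entries stored as a vector
v = np.array([1.0, 10.0, 100.0])

dense = np.diag(d) @ v              # multiply by the full n-by-n diagonal matrix
compact = d * v                     # Hadamard product of the two vectors

print(np.allclose(dense, compact))  # True
</syntaxhighlight>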
== Matrix operations ==
The operations of matrix addition and [[matrix multiplication]] are especially simple for diagonal matrices. Write {{math|diag(''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>)}} for a diagonal matrix whose diagonal entries starting in the upper left corner are ''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>. Then, for addition, we have
<math display=block>
\operatorname{diag}(a_1,\, \ldots,\, a_n) + \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 + b_1,\, \ldots,\, a_n + b_n)</math>
and for [[matrix multiplication]],
<math display=block>\operatorname{diag}(a_1,\, \ldots,\, a_n) \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 b_1,\, \ldots,\, a_n b_n).</math>
The diagonal matrix {{math|diag(''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>)}} is [[invertible matrix|invertible]] [[if and only if]] the entries ''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub> are all nonzero. In this case, we have
<math display=block>\operatorname{diag}(a_1,\, \ldots,\, a_n)^{-1} = \operatorname{diag}(a_1^{-1},\, \ldots,\, a_n^{-1}).</math>
In particular, the diagonal matrices form a [[subring]] of the ring of all ''n''-by-''n'' matrices.
Multiplying an ''n''-by-''n'' matrix {{math|'''A'''}} from the ''left'' with {{math|diag(''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>)}} amounts to multiplying the ''i''-th ''row'' of {{math|'''A'''}} by ''a''<sub>''i''</sub> for all ''i''; multiplying the matrix {{math|'''A'''}} from the ''right'' with {{math|diag(''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>)}} amounts to multiplying the ''i''-th ''column'' of {{math|'''A'''}} by ''a''<sub>''i''</sub> for all ''i''.
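These rules can be illustrated numerically (a sketch; the matrices below are arbitrary):
<syntaxhighlight lang="python">
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Addition, multiplication, and inversion act entrywise on the diagonals.
print(np.allclose(np.diag(a) + np.diag(b), np.diag(a + b)))      # True
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))      # True
print(np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a)))  # True

# Left multiplication scales the rows of A; right multiplication scales its columns.
A = np.arange(9.0).reshape(3, 3)
print(np.allclose(np.diag(a) @ A, a[:, None] * A))               # True
print(np.allclose(A @ np.diag(a), A * a[None, :]))               # True
</syntaxhighlight>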
== Operator matrix in eigenbasis ==
{{Main|Transformation matrix#Finding the matrix of a transformation|Eigenvalues and eigenvectors|l1=Finding the matrix of a transformation}}
As explained in [[transformation matrix#Finding the matrix of a transformation|determining coefficients of operator matrix]], there is a special basis, {{math|'''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>}}, for which the matrix <math>\mathbf{A}</math> takes the diagonal form. Hence, in the defining equation <math>\mathbf{A} \mathbf{e}_j = \sum_i a_{i,j} \mathbf{e}_i</math>, all coefficients <math>a_{i,j}</math> with {{math|''i'' ≠ ''j''}} are zero, leaving only one term per sum. The surviving diagonal elements, <math>a_{i,i}</math>, are known as '''eigenvalues''' and designated with <math>\lambda_i</math> in the equation, which reduces to <math>\mathbf{A} \mathbf{e}_i = \lambda_i \mathbf{e}_i</math>. The resulting equation is known as the '''eigenvalue equation'''<ref>{{cite book |last=Nearing |first=James |year=2010 |title=Mathematical Tools for Physics |url=http://www.physics.miami.edu/nearing/mathmethods |chapter=Chapter 7.9: Eigenvalues and Eigenvectors |chapter-url=http://www.physics.miami.edu/~nearing/mathmethods/operators.pdf |access-date=January 1, 2012 |isbn=048648212X}}</ref> and used to derive the [[characteristic polynomial]] and, further, [[eigenvalues and eigenvectors]].
In other words, the [[eigenvalue]]s of {{math|diag(''λ''<sub>1</sub>, ..., ''λ''<sub>''n''</sub>)}} are ''λ''<sub>1</sub>, ..., ''λ''<sub>''n''</sub> with associated [[eigenvector]]s {{math|'''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>}}.
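Numerically, the eigenvalues and eigenvectors of a diagonal matrix can be read off directly, as a quick NumPy check illustrates (a sketch):
<syntaxhighlight lang="python">
import numpy as np

D = np.diag([3.0, 1.0, 2.0])
eigenvalues, eigenvectors = np.linalg.eig(D)

# The eigenvalues are the diagonal entries (up to ordering),
# and the eigenvectors are the standard basis vectors (up to sign).
print(np.allclose(np.sort(eigenvalues), np.sort(np.diag(D))))  # True
print(np.allclose(np.abs(eigenvectors), np.eye(3)))            # True
</syntaxhighlight>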
== Properties ==
* The [[determinant]] of {{math|diag(''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>)}} is the product ''a''<sub>1</sub>⋯''a''<sub>''n''</sub>.
* The [[adjugate]] of a diagonal matrix is again diagonal.
* Where all matrices are square,
** A matrix is diagonal if and only if it is triangular and [[normal matrix|normal]].
** A matrix is diagonal if and only if it is both [[triangular matrix|upper-]] and [[triangular matrix|lower-triangular]].
** A diagonal matrix is [[symmetric matrix|symmetric]].
* The [[identity matrix]] {{math|'''I'''<sub>''n''</sub>}} and [[zero matrix]] are diagonal.
* A 1×1 matrix is always diagonal.
* The square of a 2×2 matrix with zero [[trace (linear algebra)|trace]] is always diagonal.
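Two of these properties, for example, can be checked numerically (an illustrative sketch):
<syntaxhighlight lang="python">
import numpy as np

# The determinant of diag(a1, ..., an) is the product a1 ... an.
a = np.array([2.0, -3.0, 5.0])
print(np.isclose(np.linalg.det(np.diag(a)), np.prod(a)))  # True

# The square of a 2-by-2 matrix with zero trace is diagonal (a multiple of the identity).
M = np.array([[4.0, 7.0],
              [2.0, -4.0]])  # trace is 4 + (-4) = 0
print(M @ M)                 # [[30.  0.], [ 0. 30.]]
</syntaxhighlight>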
== Applications ==
Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or [[linear operator|linear map]] by a diagonal matrix.
In fact, a given ''n''-by-''n'' matrix {{math|'''A'''}} is [[matrix similarity|similar]] to a diagonal matrix (meaning that there is a matrix {{math|'''X'''}} such that {{math|'''X'''<sup>−1</sup>'''AX'''}} is diagonal) if and only if it has ''n'' [[linearly independent]] eigenvectors. Such matrices are said to be [[diagonalizable matrix|diagonalizable]].
Over the [[field (mathematics)|field]] of [[real number|real]] or [[complex number|complex]] numbers, more is true. The [[spectral theorem]] says that every [[normal matrix]] is [[matrix similarity|unitarily similar]] to a diagonal matrix (if {{math|1='''AA'''<sup>∗</sup> = '''A'''<sup>∗</sup>'''A'''}} then there exists a [[unitary matrix]] {{math|'''U'''}} such that {{math|'''UAU'''<sup>∗</sup>}} is diagonal). Furthermore, the [[singular value decomposition]] implies that for any matrix {{math|'''A'''}}, there exist unitary matrices {{math|'''U'''}} and {{math|'''V'''}} such that {{math|'''U'''<sup>∗</sup>'''AV'''}} is diagonal with nonnegative entries.
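Both decompositions are available in standard numerical libraries; the following NumPy sketch (with arbitrarily chosen matrices) illustrates them:
<syntaxhighlight lang="python">
import numpy as np

# Unitary (here real orthogonal) diagonalization of a normal (symmetric) matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, U = np.linalg.eigh(A)
print(np.allclose(U.T @ A @ U, np.diag(eigenvalues)))  # True: U* A U is diagonal

# Singular value decomposition of an arbitrary (here rectangular) matrix.
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
U2, s, Vh = np.linalg.svd(B)
Sigma = np.zeros_like(B)
np.fill_diagonal(Sigma, s)                             # rectangular diagonal matrix of singular values
print(np.allclose(U2 @ Sigma @ Vh, B))                 # True: B = U Σ V*
</syntaxhighlight>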
== Operator theory ==
== Sources ==
*{{Citation|last1=Horn|first1=Roger Alan|title=Matrix Analysis|year=1985|publisher=[[Cambridge University Press]]| isbn=978-0-521-38632-6|last2=Johnson|first2=Charles Royal|author-link=Roger Horn|authorlink2=Charles Royal Johnson}}
{{Matrix classes}}