{{Short description|For a square matrix, the transpose of the cofactor matrix}}
In [[linear algebra]], the '''adjugate''' or '''classical adjoint''' of a [[square matrix]] {{math|'''A'''}}, {{math|adj('''A''')}}, is the [[transpose]] of its [[cofactor matrix]].<ref>{{cite book |first=F. R. |last=Gantmacher |author-link=Felix Gantmacher |title=The Theory of Matrices |volume=1 |publisher=Chelsea |___location=New York |year=1960 |isbn=0-8218-1376-5 |pages=76–89 |url=https://books.google.com/books?id=ePFtMw9v92sC&pg=PA76 }}</ref><ref>{{cite book |last=Strang |first=Gilbert |title=Linear Algebra and its Applications |publisher=Harcourt Brace Jovanovich |year=1988 |isbn=0-15-551005-3 |edition=3rd |pages=[https://archive.org/details/linearalgebraits00stra/page/231 231–232] |chapter=Section 4.4: Applications of determinants |author-link=Gilbert Strang |chapter-url=https://archive.org/details/linearalgebraits00stra/page/231 |chapter-url-access=registration}}</ref> It is occasionally known as '''adjunct matrix''',<ref>{{cite journal|author1=Claeyssen, J.C.R.|year=1990|title=On predicting the response of non-conservative linear vibrating systems by using dynamical matrix solutions|journal=Journal of Sound and Vibration|volume=140|issue=1|pages=73–84|doi=10.1016/0022-460X(90)90907-H|bibcode=1990JSV...140...73C }}</ref><ref>{{cite journal|author1=Chen, W.|author2=Chen, W.|author3=Chen, Y.J.|year=2004|title=A characteristic matrix approach for analyzing resonant ring lattice devices|journal=IEEE Photonics Technology Letters|volume=16|issue=2|pages=458–460|doi=10.1109/LPT.2003.823104|bibcode=2004IPTL...16..458C }}</ref> or "adjoint",<ref>{{cite book|first=Alston S.|last=Householder|title=The Theory of Matrices in Numerical Analysis |publisher=Dover Books on Mathematics|year=2006|author-link=Alston Scott Householder | isbn=0-486-44972-6 |pages=166–168 }}</ref> though that normally refers to a different concept, the [[Hermitian adjoint|adjoint operator]] which for a matrix is the [[conjugate transpose]].
The product of a matrix with its adjugate gives a [[diagonal matrix]] (entries not on the main diagonal are zero) whose diagonal entries are the [[determinant]] of the original matrix:
:<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \det(\mathbf{A}) \mathbf{I},</math>
where {{math|'''I'''}} is the [[identity matrix]] of the same size as {{math|'''A'''}}. Consequently, the multiplicative inverse of an [[invertible matrix]] can be found by dividing its adjugate by its determinant.
== Definition ==
The adjugate of {{math|'''A'''}} is the transpose of the cofactor matrix {{math|'''C'''}} of {{math|'''A'''}},
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.</math>
In more detail, suppose {{math|''R''}} is a [[commutative ring]] and {{math|'''A'''}} is an {{math|''n'' × ''n''}} matrix with entries from {{math|''R''}}. The {{math|(''i'', ''j'')}} [[minor (linear algebra)|minor]] of {{math|'''A'''}}, denoted {{math|'''M'''<sub>''ij''</sub>}}, is the determinant of the {{math|(''n'' − 1) × (''n'' − 1)}} matrix that results from deleting row {{math|''i''}} and column {{math|''j''}} of {{math|'''A'''}}. The cofactor matrix of {{math|'''A'''}} is the {{math|''n'' × ''n''}} matrix {{math|'''C'''}} whose {{math|(''i'', ''j'')}} entry is the {{math|(''i'', ''j'')}} cofactor of {{math|'''A'''}}, which is the {{math|(''i'', ''j'')}} minor times a sign factor:
:<math>\mathbf{C} = \left((-1)^{i+j} \mathbf{M}_{ij}\right)_{1 \le i, j \le n}.</math>
The adjugate of {{math|'''A'''}} is the transpose of {{math|'''C'''}}, that is, the {{math|''n'' × ''n''}} matrix whose {{math|(''i'', ''j'')}} entry is the {{math|(''j'',''i'')}} cofactor of {{math|'''A'''}},
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} \mathbf{M}_{ji}\right)_{1 \le i, j \le n}.</math>
=== Important consequence ===
The adjugate is defined so that the product of {{math|'''A'''}} with its adjugate yields a [[diagonal matrix]] whose diagonal entries are the determinant {{math|det('''A''')}}. That is,
:<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A}) \mathbf{I},</math>
where {{math|'''I'''}} is the {{math|''n'' × ''n''}} [[identity matrix]]. This is a consequence of the [[Laplace expansion]] of the determinant.
The above formula implies one of the fundamental results in matrix algebra, that {{math|'''A'''}} is [[invertible matrix|invertible]] [[if and only if]] {{math|det('''A''')}} is an [[unit (ring theory)|invertible element]] of {{math|''R''}}. When this holds, the equation above yields
:<math>\begin{align}
\operatorname{adj}(\mathbf{A}) &= \det(\mathbf{A}) \mathbf{A}^{-1}, \\
\mathbf{A}^{-1} &= \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}).
\end{align}</math>

== Examples ==
=== 1 × 1 generic matrix ===
Since the determinant of the empty {{math|0 × 0}} matrix is {{math|1}}, the adjugate of any {{math|1 × 1}} matrix is the [[identity matrix]] <math>\begin{bmatrix} 1 \end{bmatrix}</math>.
=== 2 × 2 generic matrix ===
The adjugate of the {{math|2 × 2}} matrix
:<math>\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}</math>
is
:<math>\operatorname{adj}(\mathbf{A}) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.</math>
By direct computation,
:<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{bmatrix} ad - bc & 0 \\ 0 & ad - bc \end{bmatrix} = \det(\mathbf{A})\mathbf{I}.</math>
In this case, it is also true that {{math|det}}({{math|adj}}('''A''')) = {{math|det}}('''A''') and hence that {{math|adj}}({{math|adj}}('''A''')) = '''A'''.
<!-- PLEASE DO NOT "CORRECT" WHAT IS NOT BROKEN. CHECK THE INVERSE FIRST. -->
=== 3 × 3 generic matrix ===
Consider a {{math|3 × 3}} matrix
:<math>\mathbf{A} = \begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}.</math>
Its cofactor matrix is
:<math>\mathbf{C} = \begin{bmatrix}
+\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \\
& & \\
-\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \\
& & \\
+\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\end{bmatrix},</math>
where
:<math>\begin{vmatrix} a_{im} & a_{in} \\ a_{jm} & a_{jn} \end{vmatrix}
= \det\!\begin{bmatrix} a_{im} & a_{in} \\ a_{jm} & a_{jn} \end{bmatrix}.</math>
Its adjugate is the transpose of its cofactor matrix,
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{bmatrix}
+\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \\
& & \\
-\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} \\
& & \\
+\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\end{bmatrix}.</math>
=== 3 × 3 numeric matrix ===
As a specific example, we have
:<math>\operatorname{adj}\!\begin{bmatrix}
-3 & 2 & -5 \\
-1 & 0 & -2 \\
3 & -4 & 1
\end{bmatrix} = \begin{bmatrix}
-8 & 18 & -4 \\
-5 & 12 & -1 \\
4 & -6 & 2
\end{bmatrix}.</math>
It is easy to check the adjugate is the [[inverse matrix|inverse]] times the determinant, {{math|−6}}.
The {{math|−1}} in the second row, third column of the adjugate was computed as follows. The {{math|(2,3)}} entry of the adjugate is the {{math|(3,2)}} cofactor of {{math|'''A'''}}. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix {{math|'''A'''}},
:<math>\begin{bmatrix} -3 & -5 \\ -1 & -2 \end{bmatrix}.</math>
The (3,2) cofactor is a sign times the determinant of this submatrix:
:<math>(-1)^{3+2}\operatorname{det}\!\begin{bmatrix} -3 & -5 \\ -1 & -2 \end{bmatrix} = -(6 - 5) = -1,</math>
and this is the (2,3) entry of the adjugate.
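The computation above can be checked numerically. The following sketch (assuming NumPy; the helper `adjugate` is written out for illustration and is not a NumPy built-in) builds the adjugate as the transpose of the cofactor matrix and verifies both the example and the identity {{math|1='''A''' adj('''A''') = det('''A''') '''I'''}}:

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            # (i, j) minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj(A) = C^T

A = np.array([[-3., 2, -5],
              [-1., 0, -2],
              [ 3., -4, 1]])
print(adjugate(A))        # the adjugate computed in the example above
print(A @ adjugate(A))    # equals det(A) I = -6 I
```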
== Properties ==
For any {{math|''n'' × ''n''}} matrix {{math|'''A'''}}, elementary computations show that adjugates have the following properties:
* <math>\operatorname{adj}(\mathbf{I}) = \mathbf{I}</math>, where <math>\mathbf{I}</math> is the [[identity matrix]].
* <math>\operatorname{adj}(\mathbf{0}) = \mathbf{0}</math>, where <math>\mathbf{0}</math> is the [[zero matrix]], except that if {{math|1=''n'' = 1}} then <math>\operatorname{adj}(\mathbf{0}) = \mathbf{I}</math>.
* <math>\operatorname{adj}(c \mathbf{A}) = c^{n - 1}\operatorname{adj}(\mathbf{A})</math> for any scalar {{mvar|c}}.
* <math>\operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}</math>.
* <math>\det(\operatorname{adj}(\mathbf{A})) = (\det \mathbf{A})^{n-1}</math>.
* If {{math|'''A'''}} is invertible, then <math>\operatorname{adj}(\mathbf{A}) = (\det \mathbf{A}) \mathbf{A}^{-1}</math>. It follows that:
** {{math|adj('''A''')}} is invertible with inverse {{math|(det '''A''')<sup>−1</sup>'''A'''}}.
** {{math|1=adj('''A'''<sup>−1</sup>) = adj('''A''')<sup>−1</sup>}}.
* {{math|adj('''A''')}} is entrywise [[polynomial]] in {{math|'''A'''}}. In particular, over the [[real number|real]] or complex numbers, the adjugate is a [[smooth function]] of the entries of {{math|'''A'''}}.
Over the complex numbers,
* <math>\operatorname{adj}(\overline{\mathbf{A}}) = \overline{\operatorname{adj}(\mathbf{A})}</math>, where the bar denotes complex [[complex conjugate|conjugation]].
* <math>\operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*</math>, where the asterisk denotes [[conjugate transpose]].
Suppose that {{math|'''B'''}} is another {{math|''n'' × ''n''}} matrix. Then
:<math>\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}).</math>
This can be [[mathematical proof|proved]] in three ways. One way, valid for any commutative ring, is a direct computation using the [[Cauchy–Binet formula]]. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices {{math|'''A'''}} and {{math|'''B'''}},
:<math>\operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}) = (\det \mathbf{B})\mathbf{B}^{-1}(\det \mathbf{A})\mathbf{A}^{-1} = (\det \mathbf{AB})(\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).</math>
Because every non-invertible matrix is the limit of invertible matrices, [[continuous function|continuity]] of the adjugate then implies that the formula remains true when one of {{math|'''A'''}} or {{math|'''B'''}} is not invertible.
A [[corollary]] of the previous formula is that, for any non-negative [[integer]] {{mvar|k}},
:<math>\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.</math>
If {{math|'''A'''}} is invertible, then the above formula also holds for negative {{mvar|k}}.
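For invertible matrices, the identities {{math|1=adj('''AB''') = adj('''B''') adj('''A''')}} and {{math|1=adj('''A'''<sup>''k''</sup>) = adj('''A''')<sup>''k''</sup>}} are easy to check numerically via {{math|1=adj('''A''') = (det '''A''') '''A'''<sup>−1</sup>}}. A minimal sketch (the matrices below are illustrative choices, verified to be invertible):

```python
import numpy as np

def adj_inv(A):
    # Valid only for invertible A: adj(A) = det(A) * A^{-1}
    return np.linalg.det(A) * np.linalg.inv(A)

A = np.array([[2., 0, 1, 0], [1, 3, 0, 1], [0, 1, 1, 0], [1, 0, 0, 2]])
B = np.array([[1., 1, 0, 0], [0, 2, 1, 0], [1, 0, 1, 1], [0, 1, 0, 3]])

# adj(AB) = adj(B) adj(A): note the reversed order
assert np.allclose(adj_inv(A @ B), adj_inv(B) @ adj_inv(A))
# adj(A^k) = adj(A)^k, here with k = 3
assert np.allclose(adj_inv(np.linalg.matrix_power(A, 3)),
                   np.linalg.matrix_power(adj_inv(A), 3))
print("identities verified")
```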
From the identity
:<math>(\mathbf{A} + \mathbf{B})\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \det(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})(\mathbf{A} + \mathbf{B}),</math>
we deduce
:<math>\mathbf{A}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{A}.</math>
Suppose that {{math|'''A'''}} [[commuting matrices|commutes]] with {{math|'''B'''}}. Multiplying the identity {{math|1='''AB''' = '''BA'''}} on the left and right by {{math|adj('''A''')}} proves that
:<math>\det(\mathbf{A})\operatorname{adj}(\mathbf{A})\mathbf{B} = \det(\mathbf{A})\mathbf{B}\operatorname{adj}(\mathbf{A}).</math>
If {{math|'''A'''}} is invertible, this implies that {{math|adj('''A''')}} also commutes with {{math|'''B'''}}. Over the real or complex numbers, continuity implies that {{math|adj('''A''')}} commutes with {{math|'''B'''}} even when {{math|'''A'''}} is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an {{math|''n'' × ''n''}} matrix has entries over a [[field (mathematics)|field]] with at least {{math|2''n'' + 1}} elements (e.g. a {{math|5 × 5}} matrix over the integers modulo 11). {{math|det('''A''' + ''t'''''I''')}} is a polynomial in {{mvar|t}} with degree at most {{mvar|n}}, so it has at most {{mvar|n}} [[Zero of a function|roots]]. The entries of {{math|adj(('''A''' + ''t'''''I''')'''B''')}} and of {{math|adj('''B''') adj('''A''' + ''t'''''I''')}} are polynomials in {{mvar|t}} of degree at most {{mvar|n}}, and they agree at every {{mvar|t}} for which {{math|'''A''' + ''t'''''I'''}} and {{math|'''B'''}} are invertible, which is at least {{math|''n'' + 1}} elements of the field. Polynomials of degree at most {{mvar|n}} which agree on {{math|''n'' + 1}} points must be identical.
Using the above properties and other elementary computations, it is straightforward to show that if {{math|'''A'''}} has one of the following properties, then {{math|adj('''A''')}} does as well:
* [[Upper triangular matrix|upper triangular]],
* [[Lower triangular matrix|lower triangular]],
* [[Diagonal matrix|diagonal]],
* [[Orthogonal matrix|orthogonal]],
* [[Unitary matrix|unitary]],
* [[Symmetric matrix|symmetric]],
* [[Hermitian matrix|Hermitian]],
* [[Normal matrix|normal]].
If {{math|'''A'''}} is [[Skew-symmetric matrix|skew-symmetric]], then {{math|adj('''A''')}} is skew-symmetric for even ''n'' and symmetric for odd ''n''. Similarly, if {{math|'''A'''}} is [[Skew-Hermitian matrix|skew-Hermitian]], then {{math|adj('''A''')}} is skew-Hermitian for even ''n'' and Hermitian for odd ''n''.
If {{math|'''A'''}} is invertible, then, as noted above, there is a formula for {{math|adj('''A''')}} in terms of the determinant and inverse of {{math|'''A'''}}. When {{math|'''A'''}} is not invertible, the adjugate satisfies different but closely related formulas.
* If {{math|1=rk('''A''') ≤ ''n'' − 2}}, then {{math|1=adj('''A''') = '''0'''}}.
* If {{math|1=rk('''A''') = ''n'' − 1}}, then {{math|1=rk(adj('''A''')) = 1}}. (Some minor is non-zero, so {{math|adj('''A''')}} is non-zero and hence has rank at least one; the identity {{math|1=adj('''A''') '''A''' = '''0'''}} implies that the dimension of the [[kernel (linear algebra)|null space]] of {{math|adj('''A''')}} is at least {{math|''n'' − 1}}, so its rank is at most one.)
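Both rank statements can be illustrated numerically with the cofactor definition (the helper `adjugate` below is written out for illustration, and the two singular matrices are arbitrary examples of ranks {{math|''n'' − 1}} and {{math|''n'' − 2}}):

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix (works for singular A too)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

# rank 2 (= n - 1): the adjugate has rank exactly 1
A1 = np.array([[1., 2, 3], [2, 4, 6], [1, 1, 1]])
print(np.linalg.matrix_rank(adjugate(A1)))

# rank 1 (<= n - 2): every 2x2 minor vanishes, so the adjugate is zero
A2 = np.array([[1., 2, 3], [2, 4, 6], [3, 6, 9]])
print(np.allclose(adjugate(A2), 0))
```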
=== Column substitution and Cramer's rule ===
{{see also|Cramer's rule}}
Partition {{math|'''A'''}} into [[column vector]]s:
:<math>\mathbf{A} = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{bmatrix}.</math>
Let {{math|'''b'''}} be a column vector of size {{math|''n''}}. Fix {{math|1 ≤ ''i'' ≤ ''n''}} and consider the matrix formed by replacing column {{mvar|i}} of {{math|'''A'''}} by {{math|'''b'''}}:
:<math>(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{bmatrix}.</math>
Laplace expand the determinant of this matrix along column {{mvar|i}}. The result is entry {{mvar|i}} of the product {{math|adj('''A''')'''b'''}}. Collecting these determinants for the different possible {{mvar|i}} yields an equality of column vectors
:<math>\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^n = \operatorname{adj}(\mathbf{A})\mathbf{b}.</math>
This formula has the following concrete consequence. Consider the [[linear system of equations]]
:<math>\mathbf{A}\mathbf{x} = \mathbf{b}.</math>
Assume that {{math|'''A'''}} is [[singular matrix|non-singular]]. Multiplying this system on the left by {{math|adj('''A''')}} and dividing by the determinant yields
:<math>\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\mathbf{b}}{\det \mathbf{A}}.</math>
Applying the previous formula to this situation yields '''Cramer's rule''',
:<math>x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},</math>
where {{math|''x''<sub>''i''</sub>}} is the {{mvar|i}}th entry of {{math|'''x'''}}.
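A direct implementation of Cramer's rule (illustrative only, and far less efficient than Gaussian elimination for large systems; the helper name `cramer_solve` and the sample system are assumptions made for this sketch):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A with column i replaced by b) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # the column-substitution matrix (A <-i- b)
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., 1, 1], [1, 3, 2], [1, 0, 0]])
b = np.array([4., 5, 6])
print(cramer_solve(A, b))         # agrees with np.linalg.solve(A, b)
```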
=== Characteristic polynomial ===
Let the [[characteristic polynomial]] of {{math|'''A'''}} be
:<math>p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i s^i \in R[s].</math>
The first [[divided difference]] of {{math|''p''}} is a [[symmetric polynomial]] of degree {{math|''n'' − 1}},
:<math>\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s, t].</math>
Multiply {{math|''s'''''I''' − '''A'''}} by its adjugate. Since {{math|1=''p''('''A''') = '''0'''}} by the [[Cayley–Hamilton theorem]], some elementary manipulations reveal
:<math>\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).</math>
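This identity can be spot-checked numerically. The sketch below (assuming NumPy; the test matrix and the value {{math|1=''s'' = 2.5}} are arbitrary illustrative choices) evaluates the divided difference at {{math|(''s'''''I''', '''A''')}} and compares it with {{math|adj(''s'''''I''' − '''A''')}} computed as determinant times inverse:

```python
import numpy as np

A = np.array([[1., 2, 3], [0, 1, 4], [5, 6, 0]])
n = A.shape[0]

# Coefficients of p(s) = det(sI - A); np.poly returns them highest power first
p = np.poly(A)[::-1]          # now p[i] is the coefficient of s^i

s = 2.5
# Delta p(sI, A) = sum over 0 <= j + k < n of p_{j+k+1} s^j A^k
D = sum(p[j + k + 1] * s**j * np.linalg.matrix_power(A, k)
        for j in range(n) for k in range(n - j))

M = s * np.eye(n) - A
adjM = np.linalg.det(M) * np.linalg.inv(M)   # adj(M), valid since M is invertible here
print(np.allclose(D, adjM))
```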
=== Jacobi's formula ===
{{main|Jacobi's formula}}
The adjugate also appears in [[Jacobi's formula]] for the [[derivative]] of the [[determinant]]. If {{math|'''A'''(''t'')}} is continuously differentiable, then
:<math>\frac{d(\det \mathbf{A})}{dt}(t) = \operatorname{tr}\left(\operatorname{adj}(\mathbf{A}(t)) \mathbf{A}'(t)\right).</math>
It follows that the [[total derivative]] of the determinant is the transpose of the adjugate:
:<math>d(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^{\mathsf{T}}.</math>
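Jacobi's formula can be verified with a finite-difference check. In the sketch below (an illustrative numerical experiment, with arbitrarily chosen matrices), {{math|1='''A'''(''t'') = '''A'''<sub>0</sub> + ''t'''''B'''}}, so {{math|1='''A'''′(''t'') = '''B'''}}:

```python
import numpy as np

A0 = np.array([[1., 2, 3], [0, 1, 4], [5, 6, 0]])
B = np.array([[0., 1, 0], [2, 0, 1], [1, 1, 1]])

def adj(M):
    # Valid for invertible M: adj(M) = det(M) * M^{-1}
    return np.linalg.det(M) * np.linalg.inv(M)

# d/dt det(A0 + t B) at t = 0, by central differences
h = 1e-6
numeric = (np.linalg.det(A0 + h * B) - np.linalg.det(A0 - h * B)) / (2 * h)

# Jacobi's formula: tr(adj(A(t)) A'(t)) with A'(t) = B
jacobi = np.trace(adj(A0) @ B)
print(numeric, jacobi)   # the two values agree to numerical precision
```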
=== Cayley–Hamilton formula ===
Let {{math|''p''<sub>'''A'''</sub>(''t'')}} be the characteristic polynomial of {{math|'''A'''}}. The [[Cayley–Hamilton theorem]] states that
:<math>p_{\mathbf{A}}(\mathbf{A}) = \mathbf{0}.</math>
Separating the constant term and multiplying the equation by {{math|adj('''A''')}} gives an expression for the adjugate that depends only on {{math|'''A'''}} and the coefficients of {{math|''p''<sub>'''A'''</sub>(''t'')}}. These coefficients can be explicitly represented in terms of [[trace (linear algebra)|traces]] of powers of {{math|'''A'''}} using complete exponential [[Bell polynomials]]. The resulting formula is
:<math>\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell+1}}{\ell^{k_\ell}k_{\ell}!}\operatorname{tr}(\mathbf{A}^\ell)^{k_\ell},</math>
where {{mvar|n}} is the dimension of {{math|'''A'''}}, and the sum is taken over {{mvar|s}} and all sequences of {{math|''k''<sub>''l''</sub> ≥ 0}} satisfying the linear [[Diophantine equation]]
:<math>s+\sum_{\ell=1}^{n-1}\ell k_\ell = n - 1.</math>
For the {{math|2 × 2}} case, this gives
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{I}_2 (\operatorname{tr}\mathbf{A}) - \mathbf{A}.</math>
For the {{math|3 × 3}} case, this gives
:<math>\operatorname{adj}(\mathbf{A}) = \frac{1}{2}\mathbf{I}_3\!\left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) - \mathbf{A}(\operatorname{tr}\mathbf{A}) + \mathbf{A}^2.</math>
For the {{math|4 × 4}} case, this gives
:<math>\operatorname{adj}(\mathbf{A}) =
\frac{1}{6}\mathbf{I}_4\!\left(
(\operatorname{tr}\mathbf{A})^3 - 3\operatorname{tr}\mathbf{A}\operatorname{tr}\mathbf{A}^2 + 2\operatorname{tr}\mathbf{A}^3 \right) - \frac{1}{2}\mathbf{A}\!\left( (\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right)
+ \mathbf{A}^2 (\operatorname{tr}\mathbf{A})
- \mathbf{A}^3.</math>
The same formula follows directly from the terminating step of the [[Faddeev–LeVerrier algorithm]], which efficiently determines the [[characteristic polynomial]] of {{math|'''A'''}}.
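The {{math|3 × 3}} trace formula, for instance, can be verified directly; a minimal sketch (with an arbitrarily chosen invertible test matrix, and {{math|adj('''A''')}} computed as determinant times inverse for comparison):

```python
import numpy as np

A = np.array([[1., 2, 3], [0, 1, 4], [5, 6, 0]])
I = np.eye(3)

t1 = np.trace(A)
t2 = np.trace(A @ A)
# adj(A) = (1/2) I_3 ((tr A)^2 - tr A^2) - A (tr A) + A^2
lhs = 0.5 * I * (t1**2 - t2) - A * t1 + A @ A

rhs = np.linalg.det(A) * np.linalg.inv(A)   # adj(A), valid since A is invertible
print(np.allclose(lhs, rhs))
```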
In general, the adjugate of a matrix of arbitrary dimension {{mvar|N}} can be computed using the [[Einstein notation|Einstein summation convention]]:
:<math>(\operatorname{adj}(\mathbf{A}))_{i_N}^{j_N} = \frac{1}{(N-1)!} \epsilon_{i_1 i_2 \ldots i_N} \epsilon^{j_1 j_2 \ldots j_N} A_{j_1}^{i_1} A_{j_2}^{i_2} \ldots A_{j_{N-1}}^{i_{N-1}}.</math>
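A deliberately naive implementation of this tensor formula, summing only over permutations (which is all the {{math|ε}} symbols pick out), can serve as a correctness check for small {{mvar|N}}. The helpers below are illustrative, not library functions, and the cost is {{math|O((''N''!)<sup>2</sup>)}}:

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of 0..N-1."""
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def adj_levi_civita(A):
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    out = np.zeros((N, N))
    # Only permutations contribute to the epsilon sums
    for I in permutations(range(N)):        # lower indices i_1..i_N
        sI = perm_sign(I)
        for J in permutations(range(N)):    # upper indices j_1..j_N
            prod = sI * perm_sign(J)
            for k in range(N - 1):
                prod *= A[I[k], J[k]]       # A^{i_k}_{j_k} = A[row i_k, col j_k]
            out[J[-1], I[-1]] += prod       # (adj A)_{i_N}^{j_N} sits at row j_N, col i_N
    return out / math.factorial(N - 1)

print(adj_levi_civita([[1., 2], [3, 4]]))   # expect [[4, -2], [-3, 1]]
```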
== Relation to exterior algebras ==
The adjugate can be viewed in abstract terms using [[exterior algebra]]s. Let {{math|''V''}} be an {{math|''n''}}-dimensional [[vector space]]. The [[exterior product]] defines a bilinear pairing
:<math>V \times \wedge^{n-1} V \to \wedge^n V.</math>
Abstractly, <math>\wedge^n V</math> is [[isomorphic]] to {{math|'''R'''}}, and under any such isomorphism the exterior product is a [[perfect pairing]]. Therefore, it yields an isomorphism
:<math>\phi \colon V\ \xrightarrow{\cong}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V),</math>
which sends {{math|'''v''' ∈ ''V''}} to <math>\phi_\mathbf{v}</math>, where <math>\phi_\mathbf{v}(\alpha) = \mathbf{v} \wedge \alpha</math>.
Suppose that {{math|''T'' : ''V'' → ''V''}} is a [[linear transformation]].
If {{math|1=''V'' = '''R'''<sup>''n''</sup>}} is endowed with its canonical basis {{math|'''e'''<sub>1</sub>, …, '''e'''<sub>''n''</sub>}}, and if the matrix of {{math|''T''}} in this basis is {{math|'''A'''}}, then the adjugate of {{math|''T''}} is the adjugate of {{math|'''A'''}}. To see why, give <math>\wedge^{n-1} \mathbf{R}^n</math> the basis
:<math>\{\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n\}_{k=1}^n.</math>
Fix a basis vector {{math|'''e'''<sub>''i''</sub>}} of {{math|'''R'''<sup>''n''</sup>}}. The image of {{math|'''e'''<sub>''i''</sub>}} under <math>\phi</math> is determined by where it sends basis vectors:
:<math>\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n)
= \begin{cases} (-1)^{i-1} \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n, &\text{if}\ k = i, \\ 0 &\text{otherwise.} \end{cases}</math>
On basis vectors, the {{math|(''n'' − 1)}}st exterior power of {{math|''T''}} is
:<math>\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_j \wedge \dots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^n (\det \mathbf{A}_{jk}) \mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n,</math>
where {{math|det '''A'''<sub>''jk''</sub>}} is the {{math|(''j'', ''k'')}} minor of {{math|'''A'''}}.
Each of these terms maps to zero under <math>\phi_{\mathbf{e}_i}</math> except the {{math|1=''k'' = ''i''}} term.
Applying the inverse of <math>\phi</math> shows that the adjugate of {{math|''T''}} is the linear transformation for which
:<math>\mathbf{e}_i \mapsto \sum_{j=1}^n (\det \mathbf{A}_{ji}) \mathbf{e}_j.</math>
Consequently, its matrix representation is the adjugate of {{math|'''A'''}}.
If {{math|''V''}} is endowed with an [[inner product]] and a volume form, then the map {{math|''φ''}} can be decomposed further.
This induces an isomorphism
:<math>\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.</math>
A vector {{math|'''v'''}} in {{math|'''R'''<sup>''n''</sup>}} corresponds to the linear functional
:<math>\alpha \mapsto \mathbf{v} \wedge \alpha \in \wedge^n \mathbf{R}^n \cong \mathbf{R}.</math>
By the definition of the Hodge star operator, this linear functional is dual to {{math|*'''v'''}}.
== Higher adjugates ==
Let {{math|'''A'''}} be an {{math|''n'' × ''n''}} matrix, and fix {{math|''r'' ≥ 0}}. The '''{{mvar|r}}th higher adjugate''' of {{math|'''A'''}} is an <math display="inline">\binom{n}{r} \times \binom{n}{r}</math> matrix, denoted {{math|adj<sub>''r''</sub> '''A'''}}, whose entries are indexed by size {{mvar|r}} [[subset]]s {{mvar|I}} and {{mvar|J}} of <math>\{1, \dots, n\}</math>. The entry corresponding to {{mvar|I}} and {{mvar|J}} is
:<math>(-1)^{\sigma(I) + \sigma(J)}\det \mathbf{A}_{J^c, I^c},</math>
where {{math|σ(''I'')}} and {{math|σ(''J'')}} are the sum of the elements of {{math|''I''}} and {{math|''J''}}, respectively.
Basic properties of higher adjugates include {{Citation needed|date=November 2023}}:
* {{math|1=adj<sub>0</sub>('''A''') = det '''A'''}}.
* {{math|1=adj<sub>1</sub>('''A''') = adj '''A'''}}.
== Iterated adjugates ==
[[Iterated function|Iteratively]] taking the adjugate of an invertible {{math|''n'' × ''n''}} matrix {{math|'''A'''}} {{mvar|k}} times yields
:<math>\overbrace{\operatorname{adj}\dotsm\operatorname{adj}}^k(\mathbf{A}) = \det(\mathbf{A})^{\frac{(n-1)^k-(-1)^k}{n}}\mathbf{A}^{(-1)^k},</math>
:<math>\det(\overbrace{\operatorname{adj}\dotsm\operatorname{adj}}^k(\mathbf{A})) = \det(\mathbf{A})^{(n-1)^k}.</math>
For example,
:<math>\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-2} \mathbf{A}.</math>
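These iterated identities are easy to confirm numerically; a minimal sketch (with an arbitrarily chosen invertible {{math|3 × 3}} test matrix, using {{math|1=adj('''A''') = (det '''A''') '''A'''<sup>−1</sup>}}):

```python
import numpy as np

def adj(M):
    # Valid for invertible M: adj(M) = det(M) * M^{-1}
    return np.linalg.det(M) * np.linalg.inv(M)

A = np.array([[1., 2, 3], [0, 1, 4], [5, 6, 0]])
n = A.shape[0]

# adj(adj(A)) = det(A)^(n-2) * A
assert np.allclose(adj(adj(A)), np.linalg.det(A) ** (n - 2) * A)
# det(adj(A)) = det(A)^(n-1)
assert np.allclose(np.linalg.det(adj(A)), np.linalg.det(A) ** (n - 1))
print("iterated adjugate identities verified")
```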
== See also ==
* [[Jacobi's formula]]
* [[Faddeev–LeVerrier algorithm]]
* [[Compound matrix]]
== References ==
{{Reflist}}
== Bibliography ==
* Roger A. Horn and Charles R. Johnson (2013), ''Matrix Analysis'', Second Edition. Cambridge University Press, {{ISBN|978-0-521-54823-6}}
== External links ==
* [http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/property.html#adjoint Matrix Reference Manual]
* [http://www.elektro-energetika.cz/calculations/matreg.php?language=english Online matrix calculator (determinant, trace, inverse, adjoint, transpose)] – computes the adjugate matrix up to order 8
{{Matrix classes}}