Adjugate matrix: Difference between revisions

{{Short description|For a square matrix, the transpose of the cofactor matrix}}
In [[linear algebra]], the '''adjugate''' or '''classical adjoint''' of a [[square matrix]] {{math|'''A'''}} is the [[transpose]] of its [[cofactor matrix]] and is denoted by {{math|adj('''A''')}}.<ref>{{cite book |first=F. R. |last=Gantmacher |author-link=Felix Gantmacher |title=The Theory of Matrices |volume=1 |publisher=Chelsea |___location=New York |year=1960 |isbn=0-8218-1376-5 |pages=76–89 |url=https://books.google.com/books?id=ePFtMw9v92sC&pg=PA76 }}</ref><ref>{{cite book |last=Strang |first=Gilbert |title=Linear Algebra and its Applications |publisher=Harcourt Brace Jovanovich |year=1988 |isbn=0-15-551005-3 |edition=3rd |pages=[https://archive.org/details/linearalgebraits00stra/page/231 231–232] |chapter=Section 4.4: Applications of determinants |author-link=Gilbert Strang |chapter-url=https://archive.org/details/linearalgebraits00stra/page/231 |chapter-url-access=registration}}</ref> It is also occasionally known as '''adjunct matrix''',<ref>{{cite journal|author1=Claeyssen, J.C.R.|year=1990|title=On predicting the response of non-conservative linear vibrating systems by using dynamical matrix solutions|journal=Journal of Sound and Vibration|volume=140|issue=1|pages=73–84|doi=10.1016/0022-460X(90)90907-H|bibcode=1990JSV...140...73C }}</ref><ref>{{cite journal|author1=Chen, W.|author2=Chen, W.|author3=Chen, Y.J.|year=2004|title=A characteristic matrix approach for analyzing resonant ring lattice devices|journal=IEEE Photonics Technology Letters|volume=16|issue=2|pages=458–460|doi=10.1109/LPT.2003.823104|bibcode=2004IPTL...16..458C }}</ref> or "adjoint",<ref>{{cite book|first=Alston S.|last=Householder|title=The Theory of Matrices in Numerical Analysis |publisher=Dover Books on Mathematics|year=2006|author-link=Alston Scott Householder | isbn=0-486-44972-6 |pages=166–168 }}</ref> though the latter today normally refers to a different concept, the [[Hermitian adjoint|adjoint operator]], which for a matrix is the [[conjugate transpose]] of the matrix.
 
The product of a matrix with its adjugate gives a [[diagonal matrix]] (entries not on the main diagonal are zero) whose diagonal entries are the [[determinant]] of the original matrix:
:<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A}) \mathbf{I}.</math>

== Definition ==
The adjugate of {{math|'''A'''}} is the transpose of the [[cofactor matrix]] {{math|'''C'''}} of {{math|'''A'''}},
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.</math>
 
In more detail, suppose {{math|''R''}} is a ([[Unital algebra|unital]]) [[commutative ring]] and {{math|'''A'''}} is an {{math|''n''&thinsp; ×&thinsp; ''n''}} matrix with entries from {{math|''R''}}. The {{math|(''i'', ''j'')}}-''[[minor (linear algebra)|minor]]'' of {{math|'''A'''}}, denoted {{math|'''M'''<sub>''ij''</sub>}}, is the [[determinant]] of the {{math|(''n''&nbsp;−&nbsp;1)&thinsp; ×&thinsp; (''n''&nbsp;−&nbsp;1)}} matrix that results from deleting row {{mvar|i}} and column {{mvar|j}} of {{math|'''A'''}}. The [[Cofactor (linear algebra)#Inverse of a matrix|cofactor matrix]] of {{math|'''A'''}} is the {{math|''n''&thinsp; ×&thinsp; ''n''}} matrix {{math|'''C'''}} whose {{math|(''i'', ''j'')}} entry is the {{math|(''i'', ''j'')}} ''[[cofactor (linear algebra)|cofactor]]'' of {{math|'''A'''}}, which is the {{math|(''i'', ''j'')}}-minor times a sign factor:
:<math>\mathbf{C} = \left((-1)^{i+j} \mathbf{M}_{ij}\right)_{1 \le i, j \le n}.</math>
The adjugate of {{math|'''A'''}} is the transpose of {{math|'''C'''}}, that is, the {{math|''n''&thinsp; ×&thinsp; ''n''}} matrix whose {{math|(''i'', ''j'')}} entry is the {{math|(''j'',&hairsp;''i'')}} cofactor of {{math|'''A'''}},
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} \mathbf{M}_{ji}\right)_{1 \le i, j \le n}.</math>
 
The adjugate is defined so that the product of {{math|'''A'''}} with its adjugate yields a [[diagonal matrix]] whose diagonal entries are the determinant {{math|det('''A''')}}. That is,
:<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A}) \mathbf{I},</math>
where {{math|'''I'''}} is the {{math|''n''&thinsp; ×&thinsp; ''n''}} [[identity matrix]]. This is a consequence of the [[Laplace expansion]] of the determinant.
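This defining identity is easy to spot-check numerically. The following sketch is illustrative only; the helper name <code>adjugate</code> and the test matrix are our own. It builds the adjugate entry by entry from the cofactor definition and verifies both products:

```python
import numpy as np

def adjugate(a):
    """Adjugate as the transpose of the cofactor matrix:
    adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

A = np.array([[3.0, 1.0, 4.0],
              [1.0, 5.0, 9.0],
              [2.0, 6.0, 5.0]])
# Laplace expansion makes both products equal det(A) times the identity.
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))
```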
 
The above formula implies one of the fundamental results in matrix algebra, that {{math|'''A'''}} is [[invertible matrix|invertible]] [[if and only if]] {{math|det('''A''')}} is an [[unit (ring theory)|invertible element]] of {{math|''R''}}. When this holds, the equation above yields
:<math>\operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{A}^{-1}.</math>
== Examples ==
 
=== 1&thinsp;×&thinsp;1 generic matrix ===
Since the determinant of a 0&thinsp;×&thinsp;0 matrix is 1, the adjugate of any 1&thinsp;×&thinsp;1 matrix ([[complex number|complex]] scalar) is <math>\mathbf{I} = \begin{bmatrix} 1 \end{bmatrix}</math>. Observe that <math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A})\mathbf{A} = (\det \mathbf{A}) \mathbf{I}.</math>
 
=== 2&thinsp;×&thinsp;2 generic matrix ===
The adjugate of the 2&thinsp; ×&thinsp; 2 matrix
:<math>\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}</math>
is
:<math>\operatorname{adj}(\mathbf{A}) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.</math>
<!-- PLEASE DO NOT "CORRECT" WHAT IS NOT BROKEN. CHECK THE INVERSE FIRST. -->
 
=== 3&thinsp;×&thinsp;3 generic matrix ===
Consider a 3&thinsp; ×&thinsp; 3 matrix
:<math>\mathbf{A} = \begin{bmatrix}
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
c_1 & c_2 & c_3
\end{bmatrix}.</math>
Its cofactor matrix is
:<math>\mathbf{C} = \begin{bmatrix}
+\begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix} &
-\begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix} &
+\begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \end{vmatrix} \\
\\
-\begin{vmatrix} a_2 & a_3 \\ c_2 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_3 \\ c_1 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix} \\
\\
+\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}
\end{bmatrix},</math>
where
:<math>\begin{vmatrix} a & b \\ c & d \end{vmatrix}
= \det\!\begin{bmatrix} a & b \\ c & d \end{bmatrix} .</math>
 
Its adjugate is the transpose of its cofactor matrix,
:<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{bmatrix}
+\begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_2 & a_3 \\ c_2 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \\
& & \\
-\begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_3 \\ c_1 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} \\
& & \\
+\begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}
\end{bmatrix}.</math>
 
=== 3&thinsp;×&thinsp;3 numeric matrix ===
As a specific example, we have
:<math>\operatorname{adj}\!\begin{bmatrix}
-3 & 2 & -5 \\
-1 & 0 & -2 \\
3 & -4 & 1
\end{bmatrix} = \begin{bmatrix}
-8 & 18 & -4 \\
-5 & 12 & -1 \\
4 & -6 & 2
\end{bmatrix}.</math>
 
== Properties ==
For any {{math|''n''&thinsp; ×&thinsp; ''n''}} matrix {{math|'''A'''}}, elementary computations show that adjugates have the following properties:
* <math>\operatorname{adj}(\mathbf{I}) = \mathbf{I}</math>, where <math>\mathbf{I}</math> is the [[identity matrix]].
* <math>\operatorname{adj}(\mathbf{0}) = \mathbf{0}</math>, where <math>\mathbf{0}</math> is the [[zero matrix]], except that if <math>n=1</math> then <math>\operatorname{adj}(\mathbf{0}) = \mathbf{I}</math>.
* <math>\operatorname{adj}(c \mathbf{A}) = c^{n - 1}\operatorname{adj}(\mathbf{A})</math> for any scalar {{mvar|c}}.
* <math>\operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}</math>.
* <math>\operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*</math>, where the asterisk denotes [[conjugate transpose]].
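The elementary properties above can be verified numerically. A minimal sketch, reusing a cofactor-based <code>adjugate</code> helper of our own (not from any library) and an arbitrary test matrix:

```python
import numpy as np

def adjugate(a):
    """adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 5.0],
              [6.0, 0.0, 7.0]])
n, c = 3, 2.5
assert np.allclose(adjugate(np.eye(n)), np.eye(n))                 # adj(I) = I
assert np.allclose(adjugate(np.zeros((n, n))), np.zeros((n, n)))   # adj(0) = 0 for n > 1
assert np.allclose(adjugate(c * A), c ** (n - 1) * adjugate(A))    # scalar rule
assert np.allclose(adjugate(A.T), adjugate(A).T)                   # transpose rule
```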
 
Suppose that {{math|'''B'''}} is another {{math|''n''&thinsp; ×&thinsp; ''n''}} matrix. Then
:<math>\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}).</math>
This can be [[mathematical proof|proved]] in three ways. One way, valid for any commutative ring, is a direct computation using the [[Cauchy–Binet formula]]. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices {{math|'''A'''}} and {{math|'''B'''}},
:<math>\operatorname{adj}(\mathbf{AB}) = \det(\mathbf{AB})(\mathbf{AB})^{-1} = \det(\mathbf{A})\det(\mathbf{B})\mathbf{B}^{-1}\mathbf{A}^{-1} = \operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}).</math>
If {{math|'''A'''}} is invertible and commutes with {{math|'''B'''}}, this implies that {{math|adj('''A''')}} also commutes with {{math|'''B'''}}. Over the real or complex numbers, continuity implies that {{math|adj('''A''')}} commutes with {{math|'''B'''}} even when {{math|'''A'''}} is not invertible.
 
Finally, there is a more general proof than the second proof, which only requires that an ''n''&thinsp;×&thinsp;''n'' matrix has entries over a [[field (mathematics)|field]] with at least 2''n''&nbsp;+&nbsp;1 elements (e.g. a 5&thinsp;×&thinsp;5 matrix over the integers [[modular arithmetic|modulo]] 11). {{math|det('''A''' + ''t''&hairsp;'''I''')}} is a polynomial in ''t'' with [[degree of a polynomial|degree]] at most ''n'', so it has at most ''n'' [[root of a polynomial|roots]]. Note that the ''ij''&hairsp;th entry of {{math|adj(('''A''' + ''t''&hairsp;'''I''')('''B'''))}} is a polynomial in ''t'' of degree at most ''n'', and likewise for {{math|adj('''A''' + ''t''&hairsp;'''I''')&hairsp;adj('''B''')}}. These two polynomials at the ''ij''&hairsp;th entry agree on at least ''n''&nbsp;+&nbsp;1 points, as we have at least ''n''&nbsp;+&nbsp;1 elements of the field where {{math|'''A''' + ''t''&hairsp;'''I'''}} is invertible, and we have proven the identity for invertible matrices. Polynomials of degree ''n'' which agree on ''n''&nbsp;+&nbsp;1 points must be identical (subtract them from each other and you have ''n''&nbsp;+&nbsp;1 roots for a polynomial of degree at most ''n'' – a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of ''t''. In particular, they take the same value at {{math|1=''t'' = 0}}.
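The identity {{math|1=adj('''AB''') = adj('''B''')&hairsp;adj('''A''')}} can be spot-checked numerically, including for a singular factor, which is exactly the case settled by the continuity and polynomial arguments above. An illustrative sketch (the helper and matrices are our own):

```python
import numpy as np

def adjugate(a):
    """adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

# Invertible case.
A = np.array([[3.0, 1.0, 4.0],
              [1.0, 5.0, 9.0],
              [2.0, 6.0, 5.0]])
assert np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))

# Singular case (det S = 0): the identity still holds.
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
assert np.allclose(adjugate(S @ B), adjugate(B) @ adjugate(S))
```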
 
Using the above properties and other elementary computations, it is straightforward to show that if {{math|'''A'''}} has one of the following properties, then {{math|adj&hairsp;'''A'''}} does as well:
* [[Upper triangular matrix|upper triangular]],
* [[Lower triangular matrix|lower triangular]],
* [[Diagonal matrix|diagonal]],
* [[Orthogonal matrix|orthogonal]],
* [[Unitary matrix|unitary]],
* [[Symmetric matrix|symmetric]],
* [[Hermitian matrix|Hermitian]],
* [[Normal matrix|normal]].

If {{math|'''A'''}} is [[Skew-symmetric matrix|skew-symmetric]], then {{math|adj('''A''')}} is skew-symmetric for even ''n'' and symmetric for odd ''n''. Similarly, if {{math|'''A'''}} is [[Skew-Hermitian matrix|skew-Hermitian]], then {{math|adj('''A''')}} is skew-Hermitian for even ''n'' and Hermitian for odd ''n''.
 
If {{math|'''A'''}} is invertible, then, as noted above, there is a formula for {{math|adj('''A''')}} in terms of the determinant and inverse of {{math|'''A'''}}. When {{math|'''A'''}} is not invertible, the adjugate satisfies different but closely related formulas.
* If {{math|1=rk('''A''') ≤ ''n'' − 2}}, then {{math|1=adj('''A''') = '''0'''}}.
* If {{math|1=rk('''A''') = ''n''&nbsp;−&nbsp;1}}, then {{math|1=rk(adj('''A''')) = 1}}. (Some minor is non-zero, so {{math|adj('''A''')}} is non-zero and hence has [[rank (linear algebra)|rank]] at least one; the identity {{math|1=adj('''A''')&hairsp;'''A''' = '''0'''}} implies that the [[dimension (vector space)|dimension]] of the [[nullspace]] of {{math|adj('''A''')}} is at least {{math|''n''&nbsp;−&nbsp;1}}, so its rank is at most one.) It follows that {{math|1=adj('''A''') = ''α'''''xy'''<sup>T</sup>}}, where {{math|''α''}} is a scalar and {{math|'''x'''}} and {{math|'''y'''}} are vectors such that {{math|1='''Ax''' = '''0'''}} and {{math|1='''A'''<sup>T</sup>'''y''' = '''0'''}}.
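Both rank statements can be illustrated numerically. A sketch with hand-picked 3&thinsp;×&thinsp;3 matrices of rank {{math|''n''&nbsp;−&nbsp;1}} and of rank at most {{math|''n''&nbsp;−&nbsp;2}} (the choices are ours):

```python
import numpy as np

def adjugate(a):
    """adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

# rk(A) = n - 1 = 2: the adjugate is non-zero but has rank exactly 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # row 2 = 2 * row 1, so rk(A) = 2
              [1.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(adjugate(A)) == 1

# rk(B) = 1 <= n - 2: every (n-1) x (n-1) minor vanishes, so adj(B) = 0.
B = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
assert np.allclose(adjugate(B), 0.0)
```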
 
=== Column substitution and Cramer's rule ===
Partition {{math|'''A'''}} into [[column vector]]s:
:<math>\mathbf{A} = \begin{bmatrix}\mathbf{a}_1 & \cdots & \mathbf{a}_n\end{bmatrix}.</math>
Let {{math|'''b'''}} be a column vector of size {{math|''n''}}. Fix {{math|1 ≤ ''i'' ≤ ''n''}} and consider the matrix formed by replacing column {{math|''i''}} of {{math|'''A'''}} by {{math|'''b'''}}:
:<math>(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{bmatrix}.</math>
Laplace expand the determinant of this matrix along column {{mvar|i}}. The result is entry {{mvar|i}} of the product {{math|adj('''A''')'''b'''}}. Collecting these determinants for the different possible {{mvar|i}} yields an equality of column vectors
:<math>\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^{n} = \operatorname{adj}(\mathbf{A})\mathbf{b}.</math>
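The Laplace-expansion argument above says that entry {{mvar|i}} of {{math|adj('''A''')'''b'''}} equals the determinant of {{math|'''A'''}} with column {{mvar|i}} replaced by {{math|'''b'''}}. A numeric spot-check (the matrix, vector, and helper are chosen by us for illustration):

```python
import numpy as np

def adjugate(a):
    """adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

v = adjugate(A) @ b
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b          # replace column i of A by b
    assert np.isclose(v[i], np.linalg.det(Ai))
```

Dividing each entry by {{math|det('''A''')}} recovers Cramer's rule for the system {{math|1='''Ax''' = '''b'''}}.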
Let the [[characteristic polynomial]] of {{math|'''A'''}} be
:<math>p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i s^i \in R[s].</math>
The first [[divided difference]] of {{math|''p''}} is a [[symmetric polynomial]] of degree {{math|''n''&nbsp;−&nbsp;1}},
:<math>\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s, t].</math>
Multiply {{math|''s'''''I''' − '''A'''}} by its adjugate. Since {{math|1=''p''('''A''') = '''0'''}} by the [[Cayley–Hamilton theorem]], some elementary manipulations reveal
:<math>\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).</math>
:<math>s+\sum_{\ell=1}^{n-1}\ell k_\ell = n - 1.</math>
 
For the 2&thinsp;×&thinsp;2 case, this gives
:<math>\operatorname{adj}(\mathbf{A})=\mathbf{I}_2(\operatorname{tr}\mathbf{A}) - \mathbf{A}.</math>
For the 3&thinsp;×&thinsp;3 case, this gives
:<math>\operatorname{adj}(\mathbf{A})=\frac{1}{2}\mathbf{I}_3\!\left( (\operatorname{tr}\mathbf{A})^2-\operatorname{tr}\mathbf{A}^2\right) - \mathbf{A}(\operatorname{tr}\mathbf{A}) + \mathbf{A}^2 .</math>
For the 4&thinsp;×&thinsp;4 case, this gives
:<math>\operatorname{adj}(\mathbf{A})=
\frac{1}{6}\mathbf{I}_4\!\left(
(\operatorname{tr}\mathbf{A})^3 - 3\operatorname{tr}\mathbf{A}\operatorname{tr}\mathbf{A}^2 + 2\operatorname{tr}\mathbf{A}^3\right)
- \frac{1}{2}\mathbf{A}\!\left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right)
+ \mathbf{A}^2(\operatorname{tr}\mathbf{A}) - \mathbf{A}^3 .</math>
 
The same formula follows directly from the terminating step of the [[Faddeev–LeVerrier algorithm]], which efficiently determines the [[characteristic polynomial]] of {{math|'''A'''}}.
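The low-dimensional trace formulas can be checked against the cofactor definition on random matrices. A sketch (the <code>adjugate</code> helper and the random seed are our own):

```python
import numpy as np

def adjugate(a):
    """adj(A)[i, j] = (-1)**(i + j) * det(A with row j and column i deleted)."""
    n = a.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
            out[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return out

rng = np.random.default_rng(7)

# 2 x 2: adj(A) = (tr A) I - A
A = rng.standard_normal((2, 2))
assert np.allclose(adjugate(A), np.trace(A) * np.eye(2) - A)

# 3 x 3: adj(A) = 1/2 ((tr A)^2 - tr A^2) I - (tr A) A + A^2
A = rng.standard_normal((3, 3))
t1, t2 = np.trace(A), np.trace(A @ A)
assert np.allclose(adjugate(A), 0.5 * (t1 ** 2 - t2) * np.eye(3) - t1 * A + A @ A)

# 4 x 4: adj(A) = 1/6 ((tr A)^3 - 3 tr A tr A^2 + 2 tr A^3) I
#                 - 1/2 ((tr A)^2 - tr A^2) A + (tr A) A^2 - A^3
A = rng.standard_normal((4, 4))
t1, t2, t3 = np.trace(A), np.trace(A @ A), np.trace(A @ A @ A)
assert np.allclose(
    adjugate(A),
    (t1 ** 3 - 3 * t1 * t2 + 2 * t3) / 6 * np.eye(4)
    - 0.5 * (t1 ** 2 - t2) * A
    + t1 * (A @ A) - A @ A @ A)
```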
 
In general, the adjugate of an arbitrary ''N''&thinsp;×&thinsp;''N'' matrix can be computed using the [[Einstein notation|Einstein summation convention]]:
:<math>(\operatorname{adj}(\mathbf{A}))_{i_N}^{j_N} = \frac{1}{(N-1)!} \epsilon_{i_1 i_2 \ldots i_N} \epsilon^{j_1 j_2 \ldots j_N} A_{j_1}^{i_1} A_{j_2}^{i_2} \ldots A_{j_{N-1}}^{i_{N-1}}
</math>
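The Levi-Civita contraction can be implemented directly with <code>numpy.einsum</code>. This is an illustrative sketch, not an efficient method, and the helper names are our own. Note that reading the mixed-index entries as {{math|A[''i'', ''j'']}} in NumPy's row-major convention, the raw contraction comes out transposed relative to the usual matrix layout, so we transpose at the end:

```python
import numpy as np
from itertools import permutations
from math import factorial

def levi_civita(n):
    """Dense rank-n Levi-Civita tensor: +1/-1 on even/odd permutations, else 0."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        eps[p] = (-1) ** inversions
    return eps

def adjugate_einsum(a):
    n = a.shape[0]
    eps = levi_civita(n)
    i_idx = [chr(ord('a') + k) for k in range(n)]
    j_idx = [chr(ord('a') + n + k) for k in range(n)]
    # eps_{i1..iN} eps_{j1..jN} A[i1, j1] ... A[i(N-1), j(N-1)],
    # with iN and jN left free.
    spec = ','.join([''.join(i_idx), ''.join(j_idx)]
                    + [i_idx[k] + j_idx[k] for k in range(n - 1)])
    t = np.einsum(spec + '->' + i_idx[-1] + j_idx[-1],
                  eps, eps, *([a] * (n - 1))) / factorial(n - 1)
    # Reading A[i, j] row-major, the contraction yields adj(A) transposed.
    return t.T

A = np.array([[3.0, 1.0, 4.0],
              [1.0, 5.0, 9.0],
              [2.0, 6.0, 5.0]])
# Cross-check against det(A) * inv(A), valid because this A is invertible.
assert np.allclose(adjugate_einsum(A), np.linalg.det(A) * np.linalg.inv(A))
```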
 
== Relation to exterior algebras ==
The adjugate can be viewed in abstract terms using [[exterior algebra]]s. Let {{math|''V''}} be an {{math|''n''}}-dimensional [[vector space]]. The [[exterior product]] defines a bilinear pairing
:<math display=block>V \times \wedge^{n-1} V \to \wedge^n V.</math>
Abstractly, <math>\wedge^n V</math> is [[isomorphic]] to {{math|'''R'''}}, and under any such isomorphism the exterior product is a [[perfect pairing]]. Therefore, it yields an isomorphism
:<math display=block>\phi \colon V\ \xrightarrow{\cong}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V).</math>
Explicitly, this isomorphism sends each {{math|'''v''' ∈ ''V''}} to the map <math>\phi_{\mathbf{v}}</math> defined by
:<math display=block>\phi_\mathbf{v}(\alpha) = \mathbf{v} \wedge \alpha.</math>
Suppose that {{math|''T'' : ''V'' &rarr; ''V''}} is a [[linear transformation]]. [[Pullback]] by the {{math|(''n''&nbsp;−&nbsp;1)}}st exterior power of {{math|''T''}} induces a morphism of {{math|Hom}} spaces. The '''adjugate''' of {{math|''T''}} is the composite
:<math display=block>V\ \xrightarrow{\phi}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{(\wedge^{n-1} T)^*}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{\phi^{-1}}\ V.</math>
 
If {{math|1=''V'' = '''R'''<sup>''n''</sup>}} is endowed with its [[canonical basis]] {{math|'''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>}}, and if the matrix of {{math|''T''}} in this [[basis (linear algebra)|basis]] is {{math|'''A'''}}, then the adjugate of {{math|''T''}} is the adjugate of {{math|'''A'''}}. To see why, give <math>\wedge^{n-1} \mathbf{R}^n</math> the basis
:<math display=block>\{\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n\}_{k=1}^n.</math>
Fix a basis vector {{math|'''e'''<sub>''i''</sub>}} of {{math|'''R'''<sup>''n''</sup>}}. The image of {{math|'''e'''<sub>''i''</sub>}} under <math>\phi</math> is determined by where it sends basis vectors:
:<math display=block>\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n)
= \begin{cases} (-1)^{i-1} \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n, &\text{if}\ k = i, \\ 0 &\text{otherwise.} \end{cases}</math>
On basis vectors, the {{math|(''n''&nbsp;−&nbsp;1)}}st exterior power of {{math|''T''}} is
:<math display=block>\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_j \wedge \dots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^n (\det A_{jk}) \mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n.</math>
Each of these terms maps to zero under <math>\phi_{\mathbf{e}_i}</math> except the {{math|1=''k'' = ''i''}} term. Therefore, the pullback of <math>\phi_{\mathbf{e}_i}</math> is the linear transformation for which
:<math display=block>\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_j \wedge \dots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} (\det A_{ji}) \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n.</math>
That is, it equals
:<math display=block>\sum_{j=1}^n (-1)^{i+j} (\det A_{ji})\phi_{\mathbf{e}_j}.</math>
Applying the inverse of <math>\phi</math> shows that the adjugate of {{math|''T''}} is the linear transformation for which
:<math display=block>\mathbf{e}_i \mapsto \sum_{j=1}^n (-1)^{i+j}(\det A_{ji})\mathbf{e}_j.</math>
Consequently, its matrix representation is the adjugate of {{math|'''A'''}}.
 
If {{math|''V''}} is endowed with an [[inner product]] and a volume form, then the map {{math|''φ''}} can be decomposed further. In this case, {{math|''φ''}} can be understood as the composite of the [[Hodge star operator]] and dualization. Specifically, if {{math|ω}} is the volume form, then it, together with the inner product, determines an isomorphism
:<math display=block>\omega^\vee \colon \wedge^n V \to \mathbf{R}.</math>
This induces an isomorphism
:<math display=block>\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.</math>
A vector {{math|'''v'''}} in {{math|'''R'''<sup>''n''</sup>}} corresponds to the linear functional
:<math display=block>(\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha)) \in \wedge^{n-1} (\mathbf{R}^n)^\vee.</math>
By the definition of the Hodge star operator, this linear functional is dual to {{math|*'''v'''}}. That is, {{math|ω<sup>∨</sup> ∘ φ}} equals {{math|'''v''' ↦ *'''v'''<sup>∨</sup>}}.
 
== Higher adjugates ==
Let {{math|'''A'''}} be an {{math|''n''&thinsp;×&thinsp;''n''}} matrix, and fix {{math|''r'' &ge; 0}}. The '''{{math|''r''}}th higher adjugate''' of {{math|'''A'''}} is an <math display="inline">\binom{n}{r} \!\times\! \binom{n}{r}</math> matrix, denoted {{math|adj<sub>''r''</sub>&thinsp;'''A'''}}, whose entries are indexed by size {{math|''r''}} [[subset]]s {{math|''I''}} and {{math|''J''}} of {{math|{1, ..., ''n''<nowiki>}</nowiki>}}.{{Citation needed|date=November 2023}} Let {{math|''I''{{i sup|c}}}} and {{math|''J''{{i sup|c}}}} denote the [[complement (set theory)|complements]] of {{math|''I''}} and {{math|''J''}}, respectively. Also let <math>\mathbf{A}_{I^c, J^c}</math> denote the submatrix of {{math|'''A'''}} containing those rows and columns whose indices are in {{math|''I''{{i sup|c}}}} and {{math|''J''{{i sup|c}}}}, respectively. Then the {{math|(''I'', ''J'')}} entry of {{math|adj<sub>''r''</sub> '''A'''}} is
:<math>(-1)^{\sigma(I) + \sigma(J)}\det \mathbf{A}_{J^c, I^c},</math>
where {{math|σ(''I'')}} and {{math|σ(''J'')}} are the sum of the elements of {{math|''I''}} and {{math|''J''}}, respectively.
 
Basic properties of higher adjugates include {{Citation needed|date=November 2023}}:
* {{math|1=adj<sub>0</sub>('''A''') = det&thinsp; '''A'''}}.
* {{math|1=adj<sub>1</sub>('''A''') = adj&thinsp; '''A'''}}.
* {{math|1=adj<sub>''n''</sub>('''A''') = 1}}.
* {{math|1=adj<sub>''r''</sub>('''BA''') = adj<sub>''r''</sub>('''A''')&thinsp; adj<sub>''r''</sub>('''B''')}}.
* <math>\operatorname{adj}_r(\mathbf{A})C_r(\mathbf{A}) = C_r(\mathbf{A})\operatorname{adj}_r(\mathbf{A}) = (\det \mathbf{A})I_{\binom{n}{r}}</math>, where {{math|''C''<sub>''r''</sub>('''A''')}} denotes the {{math|''r''}}&hairsp;th [[compound matrix]].
 
Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting <math>\wedge^r V</math> and <math>\wedge^{n-r} V</math> for <math>V</math> and <math>\wedge^{n-1} V</math>, respectively.
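The definition and the compound-matrix identity can be checked directly for a small matrix. A sketch (the helper names are ours, and the {{math|0&thinsp;×&thinsp;0}} determinant is taken to be 1 by hand):

```python
import numpy as np
from itertools import combinations
from math import comb

def _det(m):
    # Convention: the determinant of the empty (0 x 0) matrix is 1.
    return 1.0 if m.shape[0] == 0 else np.linalg.det(m)

def higher_adjugate(a, r):
    """adj_r(A): entry (I, J) is (-1)**(sigma(I)+sigma(J)) * det A_{J^c, I^c}."""
    n = a.shape[0]
    subs = list(combinations(range(n), r))
    out = np.empty((len(subs), len(subs)))
    for p, I in enumerate(subs):
        for q, J in enumerate(subs):
            Ic = [k for k in range(n) if k not in I]
            Jc = [k for k in range(n) if k not in J]
            out[p, q] = (-1) ** (sum(I) + sum(J)) * _det(a[np.ix_(Jc, Ic)])
    return out

def compound(a, r):
    """r-th compound matrix C_r(A): entry (I, J) is det A_{I, J}."""
    n = a.shape[0]
    subs = list(combinations(range(n), r))
    return np.array([[_det(a[np.ix_(I, J)]) for J in subs] for I in subs])

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
d = np.linalg.det(A)

# adj_1 is the classical adjugate (here checked via adj(A) = det(A) inv(A)).
assert np.allclose(higher_adjugate(A, 1), d * np.linalg.inv(A))
# adj_n is the 1 x 1 matrix [1].
assert np.allclose(higher_adjugate(A, 3), np.eye(1))
# adj_r(A) C_r(A) = det(A) I for every r.
for r in range(4):
    assert np.allclose(higher_adjugate(A, r) @ compound(A, r),
                       d * np.eye(comb(3, r)))
```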
== See also ==
* [[Jacobi's formula]]
* [[Faddeev–LeVerrier algorithm]]
* [[Compound matrix]]
 
== References ==
{{reflist}}

== External links ==
* [http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/property.html#adjoint Matrix Reference Manual]
* [http://www.elektro-energetika.cz/calculations/matreg.php?language=english Online matrix calculator (determinant, trace, inverse, adjoint, transpose)] Computes the adjugate of matrices up to order 8.
* {{cite web|url=http://www.wolframalpha.com/input/?i=adjugate+of+{+{+a%2C+b%2C+c+}%2C+{+d%2C+e%2C+f+}%2C+{+g%2C+h%2C+i+}+}|url-status=live|archive-url=|last=|first=|date=|title=<nowiki>Adjugate of { { a, b, c }, { d, e, f }, { g, h, i } }</nowiki>|archive-date=|access-date=|work=[[Wolfram Alpha]]}}
 
{{Matrix classes}}