{{hatnote|Not to be confused with [[Positive matrix]] and [[Totally positive matrix]].}}
 
In [[mathematics]], a symmetric matrix <math>M</math> with [[real number|real]] entries is '''positive-definite''' if the real number <math>z^\operatorname{T} Mz</math> is positive for every nonzero real [[column vector]] <math>z,</math> where <math>z^\operatorname{T}</math> is the [[transpose]] of {{nowrap|<math>z</math>.}}<ref>{{cite journal|doi=10.1002/9780470173862.app3 | title=Appendix C: Positive Semidefinite and Positive Definite Matrices | journal=Parameter Estimation for Scientists and Engineers | pages=259–263}}</ref> More generally, a [[Hermitian matrix]] (that is, a [[complex matrix]] equal to its [[conjugate transpose]]) is
'''positive-definite''' if the real number <math>z^* Mz</math> is positive for every nonzero complex column vector <math>z,</math> where <math>z^*</math> denotes the conjugate transpose of <math>z.</math>
 
'''Positive semi-definite''' matrices are defined similarly, except that the scalars <math>z^\operatorname{T} Mz</math> and <math>z^* Mz</math> are required to be positive ''or zero'' (that is, nonnegative). '''Negative-definite''' and '''negative semi-definite''' matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called '''indefinite'''.
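These classes can be tested numerically via the equivalent characterization that a symmetric real matrix is positive-definite exactly when all of its eigenvalues are positive, positive semi-definite when they are all nonnegative, and so on. A minimal sketch, assuming NumPy (the helper name <code>classify</code> is illustrative only, not a library routine):

<syntaxhighlight lang="python">
import numpy as np

def classify(M, tol=1e-12):
    """Classify a real symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(M)        # eigenvalues of a symmetric matrix are real
    if np.all(w > tol):
        return "positive-definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative-definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

print(classify(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # positive-definite
print(classify(np.array([[1.0, 2.0], [2.0, 1.0]])))    # indefinite
</syntaxhighlight>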
 
A matrix is thus positive-definite if and only if it is the matrix of a [[positive-definite quadratic form]] or [[Hermitian form]]. In other words, a matrix is positive-definite if and only if it defines an [[inner product]].
 
== Definitions ==
In the following definitions, <math>\mathbf{x}^\operatorname{T}</math> is the transpose of <math>\mathbf x</math>, <math>\mathbf{x}^*</math> is the [[conjugate transpose]] of <math>\mathbf x</math> and <math>\mathbf{0}</math> denotes the ''n''-dimensional zero-vector.
 
=== Definitions for real matrices ===
An <math>n \times n</math> symmetric real matrix <math>M</math> is said to be '''positive-definite''' if <math>\mathbf{x}^\operatorname{T} M\mathbf{x} > 0</math> for all non-zero <math>\mathbf{x}</math> in <math>\R^n</math>. Formally,
 
{{Equation box 1
|indent =
|title=
|equation = <math>M \text{ positive-definite} \quad \iff \quad \mathbf{x}^\operatorname{T} M\mathbf{x} > 0 \text{ for all } \mathbf{x} \in \R^n \setminus \{\mathbf{0}\}</math>
|cellpadding= 6
|border
|background colour=#F5FFFA}}
 
An <math>n \times n</math> symmetric real matrix <math>M</math> is said to be '''positive-semidefinite''' or '''non-negative-definite''' if <math>\mathbf{x}^\operatorname{T} M\mathbf{x} \geq 0</math> for all <math>\mathbf{x}</math> in <math>\R^n</math>. Formally,
 
{{Equation box 1
|indent =
|title=
|equation = <math>M \text{ positive semi-definite} \quad \iff \quad \mathbf{x}^\operatorname{T} M\mathbf{x} \geq 0 \text{ for all } \mathbf{x} \in \R^n </math>
|cellpadding= 6
|border
|background colour=#F5FFFA}}
 
An <math>n \times n</math> symmetric real matrix <math>M</math> is said to be '''negative-definite''' if <math>\mathbf{x}^\operatorname{T} M\mathbf{x} < 0</math> for all non-zero <math>\mathbf{x}</math> in <math>\R^n</math>. Formally,
 
{{Equation box 1
|indent =
|title=
|equation = <math>M \text{ negative-definite} \quad \iff \quad \mathbf{x}^\operatorname{T} M\mathbf{x} < 0 \text{ for all } \mathbf{x} \in \R^n \setminus \{\mathbf{0}\}</math>
|cellpadding= 6
|border
|background colour=#F5FFFA}}
 
An <math>n \times n</math> symmetric real matrix <math>M</math> is said to be '''negative-semidefinite''' or '''non-positive-definite''' if <math>\mathbf{x}^\operatorname{T} M\mathbf{x} \leq 0</math> for all <math>\mathbf{x}</math> in <math>\R^n</math>. Formally,
 
{{Equation box 1
|indent =
|title=
|equation = <math>M \text{ negative semi-definite} \quad \iff \quad \mathbf{x}^\operatorname{T} M\mathbf{x} \leq 0 \text{ for all } \mathbf{x} \in \R^n</math>
|cellpadding= 6
|border
|background colour=#F5FFFA}}

=== Definitions for complex matrices ===
For complex matrices, the most common definition says that <math>M</math> is positive-definite if and only if <math>\mathbf{z}^* M\mathbf{z}</math> is real and positive for every non-zero complex column vector <math>\mathbf z</math>. This condition implies that <math>M</math> is Hermitian (i.e. its transpose equals its conjugate), since <math>\mathbf{z}^* M\mathbf{z}</math>, being real, equals its conjugate transpose <math>\mathbf{z}^* M^*\mathbf{z}</math> for every <math>\mathbf{z},</math> which implies <math>M=M^*</math>.
 
By this definition, a positive-definite ''real'' matrix <math>M</math> is Hermitian, hence symmetric; and <math>\mathbf{z}^\operatorname{T} M\mathbf{z}</math> is positive for all non-zero ''real'' column vectors <math>\mathbf{z}</math>. However, the last condition alone is not sufficient for <math>M</math> to be positive-definite. For example, if
<math display="block">M = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},</math>
 
then for any real vector <math>\mathbf{z}</math> with entries <math>a</math> and <math>b</math> we have <math>\mathbf{z}^\operatorname{T} M\mathbf{z} = \left(a + b\right)a + \left(-a + b\right)b = a^2 + b^2</math>, which is always positive if <math>\mathbf z</math> is not zero. However, if <math>\mathbf z</math> is the complex vector with entries <math>1</math> and <math>i</math>, one gets
 
<math display="block">\mathbf{z}^* M \mathbf{z} = \begin{bmatrix}1 & -i\end{bmatrix} M \begin{bmatrix}1 \\ i\end{bmatrix} = \begin{bmatrix}1 + i & 1 - i\end{bmatrix} \begin{bmatrix}1 \\ i\end{bmatrix} = 2+2i</math>
which is not real. Therefore, <math>M</math> is not positive-definite.
 
On the other hand, for a ''symmetric'' real matrix <math>M</math>, the condition "<math>\mathbf{z}^\operatorname{T} M\mathbf{z} > 0</math> for all nonzero real vectors <math>\mathbf z</math>" ''does'' imply that <math>M</math> is positive-definite in the complex sense.
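The example above is easy to reproduce numerically. A minimal sketch, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

M = np.array([[1.0, 1.0], [-1.0, 1.0]])  # real, but not symmetric

# The real quadratic form is positive for every non-zero real vector ...
z = np.array([3.0, -2.0])
print(z @ M @ z)                 # 13.0, equal to 3**2 + (-2)**2

# ... yet the complex form is not even real, so M is not positive-definite.
w = np.array([1.0, 1.0j])
print(np.conj(w) @ M @ w)        # (2+2j)
</syntaxhighlight>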
 
===Notation===

== Examples ==
{{unordered list
| The [[identity matrix]] <math>I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}</math> is positive-definite (and as such also positive semi-definite). It is a real symmetric matrix, and, for any non-zero column vector '''z''' with real entries ''a'' and ''b'', one has
<math display="block"> \mathbf{z}^\operatorname{T} I\mathbf{z} = \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = a^2 + b^2.</math>
Seen as a complex matrix, for any non-zero column vector ''z'' with complex entries ''a'' and ''b'' one has
<math display="block">\mathbf{z}^*I\mathbf{z} = \begin{bmatrix} \overline{a} & \overline{b} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ b\end{bmatrix} = \overline{a}a + \overline{b}b = |a|^2 + |b|^2.</math>
| The real symmetric matrix
<math display="block">M = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}</math>
is positive-definite since for any non-zero column vector '''z''' with entries ''a'', ''b'' and ''c'', we have
<math display="block">\begin{align}
\mathbf{z}^\operatorname{T} M\mathbf{z} = \left(\mathbf{z}^\operatorname{T} M\right) \mathbf{z}
&= \begin{bmatrix} (2a - b) & (-a + 2b - c) & (-b + 2c) \end{bmatrix}
\begin{bmatrix} a \\ b \\ c \end{bmatrix} \\
&= 2a^2 - 2ab + 2b^2 - 2bc + 2c^2 \\
&= a^2 + (a - b)^2 + (b - c)^2 + c^2 .
\end{align}</math>
 
This result is a sum of squares, and therefore non-negative; and is zero only if <math>a = b = c = 0</math>, that is, when '''z''' is the zero vector.
| For any real [[invertible matrix]] <math>A</math>, the product <math>A^\operatorname{T} A</math> is a positive definite matrix (if the means of the columns of <math>A</math> are 0, then this is also called the [[covariance matrix]]). A simple proof is that for any non-zero vector <math>\mathbf z</math>, we have <math>\mathbf{z}^\operatorname{T} A^\operatorname{T} A\mathbf{z} = (A\mathbf{z})^\operatorname{T}(A\mathbf{z}) = \|A\mathbf{z}\|^2 > 0,</math> since the invertibility of <math>A</math> means that <math>A\mathbf{z} \neq \mathbf{0}.</math>
| The example <math>M</math> above shows that a matrix in which some elements are negative may still be positive definite. Conversely, a matrix whose entries are all positive is not necessarily positive definite, as for example
<math display="block">N = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix},</math>
for which <math>\begin{bmatrix} -1 & 1 \end{bmatrix}N\begin{bmatrix} -1 & 1 \end{bmatrix}^\operatorname{T} = -2 < 0.</math> (Both of the last two facts are verified numerically in the sketch after this list.)
}}
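A minimal sketch of the last two points, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))             # a random matrix is almost surely invertible
z = rng.normal(size=3)
print(z @ A.T @ A @ z)                  # equals ||A z||^2, hence positive
print(np.linalg.norm(A @ z) ** 2)       # the same value

N = np.array([[1.0, 2.0], [2.0, 1.0]])  # all entries positive ...
x = np.array([-1.0, 1.0])
print(x @ N @ x)                        # -2.0, so N is not positive definite
</syntaxhighlight>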
 
== Decomposition ==
A Hermitian matrix <math>M</math> is positive semidefinite if and only if it can be decomposed as a product
<math display="block">M = B^* B</math>
of a matrix <math>B</math> with its [[conjugate transpose]].
 
When <math>M</math> is real, <math>B</math> can be real as well and the decomposition can be written as
<math display="block">M = B^\operatorname{T} B.</math>
 
<math>M</math> is positive definite if and only if such a decomposition exists with <math>B</math> [[Invertible matrix|invertible]].
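In floating-point practice, attempting a [[Cholesky decomposition]] is a standard way to test for such a factorization, since it succeeds (up to round-off) exactly when <math>M</math> is positive definite. A minimal sketch, assuming NumPy (the helper name <code>try_factor</code> is illustrative only):

<syntaxhighlight lang="python">
import numpy as np

def try_factor(M):
    """Return B with M = B.T @ B if M is positive definite, else None."""
    try:
        L = np.linalg.cholesky(M)    # M = L @ L.T, with L lower-triangular
        return L.T                   # so B = L.T is invertible and M = B.T @ B
    except np.linalg.LinAlgError:    # raised when M is not positive definite
        return None

M = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = try_factor(M)
print(np.allclose(B.T @ B, M))                          # True
print(try_factor(np.array([[1.0, 2.0], [2.0, 1.0]])))   # None (indefinite)
</syntaxhighlight>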
 
== Other characterizations ==
Let <math>M</math> be an <math>n \times n</math> [[Hermitian matrix|real symmetric matrix]], and let <math>B_1(M) := \{x\in \R^n : x^\operatorname{T} M x \leq 1\}</math> be the "unit ball" defined by <math>M</math>. Then we have the following:
 
* <math>B_1(vv^\operatorname{T})</math> is a solid slab sandwiched between <math>\pm \{w: \langle w, v\rangle = 1\}</math>.
* <math>M\succeq 0</math> if and only if <math>B_1(M)</math> is an ellipsoid, or an ellipsoidal cylinder.
* <math>M\succ 0</math> if and only if <math>B_1(M)</math> is bounded, that is, it is an ellipsoid.
* If <math>N\succ 0</math>, then <math>M \succeq N</math> if and only if <math>B_1(M) \subseteq B_1(N)</math>; <math>M \succ N</math> if and only if <math>B_1(M) \subseteq \operatorname{int}(B_1(N))</math>.
* If <math>N \succ 0</math>, then <math>M \succeq \frac{vv^\operatorname{T}}{v^\operatorname{T} N v}</math> for all <math>v \neq 0</math> if and only if <math display="inline">B_1(M) \subset \bigcap_{v^\operatorname{T} N v = 1} B_1(vv^\operatorname{T})</math>. So, since the polar dual of an ellipsoid is also an ellipsoid with the same principal axes, with inverse lengths, we have <math display="block">B_1(N^{-1}) = \bigcap_{v^\operatorname{T} N v = 1} B_1(vv^\operatorname{T}) = \bigcap_{v^\operatorname{T} N v = 1}\{w: |\langle w, v\rangle| \leq 1\}.</math> That is, if <math>N</math> is positive-definite, then <math>M \succeq \frac{vv^\operatorname{T}}{v^\operatorname{T} N v}</math> for all <math>v\neq 0</math> if and only if <math>M \succeq N^{-1}</math>.
Let <math>M</math> be an <math>n \times n</math> [[Hermitian matrix]]. The following properties are equivalent to <math>M</math> being positive definite:
; The associated sesquilinear form is an inner product: The [[sesquilinear form]] defined by <math>M</math> is the function <math>\langle \cdot, \cdot\rangle</math> from <math>\Complex^n \times \Complex^n</math> to <math>\Complex</math> such that <math>\langle x, y \rangle := y^*M x</math> for all <math>x</math> and <math>y</math> in <math>\Complex^n</math>, where <math>y^*</math> is the conjugate transpose of <math>y</math>. For any complex matrix <math>M</math>, this form is linear in <math>x</math> and semilinear in <math>y</math>. Therefore, the form is an [[inner product]] on <math>\Complex^n</math> if and only if <math>\langle z, z \rangle</math> is real and positive for all nonzero <math>z</math>; that is, if and only if <math>M</math> is positive definite. (In fact, every inner product on <math>\Complex^n</math> arises in this fashion from a Hermitian positive definite matrix.)
== Quadratic forms ==
{{Main|Definite quadratic form}}
The (purely) [[quadratic form]] associated with a real <math>n \times n</math> matrix <math>M</math> is the function <math>Q : \mathbb{R}^n \to \mathbb{R}</math> such that <math>Q(x) = x^\operatorname{T} Mx</math> for all <math>x</math>. <math>M</math> can be assumed symmetric by replacing it with <math>\tfrac{1}{2} \left(M + M^\operatorname{T}\right)</math>, which leaves the quadratic form unchanged.
 
A symmetric matrix <math>M</math> is positive definite if and only if its quadratic form is a [[strictly convex function]].
 
More generally, any [[quadratic function]] from <math>\mathbb{R}^n</math> to <math>\mathbb{R}</math> can be written as <math>x^\operatorname{T} Mx + x^\operatorname{T} b + c</math> where <math>M</math> is a symmetric <math>n \times n</math> matrix, <math>b</math> is a real <math>n</math>-vector, and <math>c</math> a real constant. In the <math>n = 1</math> case this is a parabola, and, just as in that case, we have
 
'''Theorem:''' This quadratic function is strictly convex, and hence has a unique finite global minimum, if and only if <math>M</math> is positive definite.
 
'''Proof:''' If <math>M</math> is positive definite, then the function is strictly convex. Its gradient <math>2Mx + b</math> is zero at the unique point <math>-\tfrac{1}{2}M^{-1}b</math>, which must be the global minimum since the function is strictly convex. If <math>M</math> is not positive definite, then there exists some vector <math>v</math> such that <math>v^\operatorname{T} M v \leq 0</math>, so the function <math>f(t) := (vt)^\operatorname{T} M (vt) + b^\operatorname{T} (vt) + c</math> is a line or a downward parabola, thus not strictly convex and not having a unique global minimum.
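The closed-form minimizer from the proof is straightforward to verify numerically. A minimal sketch, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

M = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
b = np.array([1.0, -2.0])
c = 3.0

def f(x):
    return x @ M @ x + b @ x + c

x_star = np.linalg.solve(M, -b / 2)      # solves the gradient equation 2 M x + b = 0

# Every perturbation increases f, consistent with a strict global minimum.
rng = np.random.default_rng(0)
for _ in range(5):
    d = rng.normal(size=2)
    assert f(x_star + d) > f(x_star)
</syntaxhighlight>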
 
For this reason, positive definite matrices play an important role in [[optimization (mathematics)|optimization]] problems.
== Simultaneous diagonalization ==
One symmetric matrix and another matrix that is both symmetric and positive definite can be [[diagonalizable matrix#Simultaneous diagonalization|simultaneously diagonalized]], although the simultaneous diagonalization is not necessarily performed with a [[Matrix similarity|similarity transformation]]. This result does not extend to the case of three or more matrices. In this section we treat the real case; the extension to the complex case is immediate.
 
Let <math>M</math> be a symmetric and <math>N</math> a symmetric and positive definite matrix. Write the generalized eigenvalue equation as <math>\left(M - \lambda N\right)\mathbf{x} = 0</math> where we impose that <math>\mathbf{x}</math> be normalized, i.e. <math>\mathbf{x}^\operatorname{T} N\mathbf{x} = 1</math>. Now we use [[Cholesky decomposition]] to write the inverse of <math>N</math> as <math>Q^\operatorname{T} Q</math>. Multiplying by <math>Q</math> and letting <math>\mathbf{x} = Q^\operatorname{T} \mathbf{y}</math>, we get <math>Q\left(M - \lambda N\right)Q^\operatorname{T} \mathbf{y} = 0</math>, which can be rewritten as <math>\left(QMQ^\operatorname{T}\right)\mathbf{y} = \lambda \mathbf{y}</math> where <math>\mathbf{y}^\operatorname{T} \mathbf{y} = 1</math>. Manipulation now yields <math>MX = NX\Lambda</math> where <math>X</math> is a matrix having as columns the generalized eigenvectors and <math>\Lambda</math> is a diagonal matrix of the generalized eigenvalues. Now premultiplication with <math>X^\operatorname{T}</math> gives the final result: <math>X^\operatorname{T} MX = \Lambda</math> and <math>X^\operatorname{T} NX = I</math>, but note that this is no longer an orthogonal diagonalization with respect to the inner product where <math>\mathbf{y}^\operatorname{T} \mathbf{y} = 1</math>. In fact, we diagonalized <math>M</math> with respect to the inner product induced by <math>N</math>.<ref>{{harvtxt|Horn|Johnson|2013}}, p. 485, Theorem 7.6.1</ref>
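The procedure above translates directly into code. A minimal sketch, assuming NumPy (the function name <code>simultaneous_diagonalize</code> is illustrative, not a library routine); it takes <math>Q = L^{-1}</math> for the Cholesky factor <math>L</math> of <math>N</math>, so that <math>N^{-1} = Q^\operatorname{T} Q</math>, and reduces the problem to a standard symmetric eigenproblem:

<syntaxhighlight lang="python">
import numpy as np

def simultaneous_diagonalize(M, N):
    """Given symmetric M and symmetric positive definite N, return
    (lam, X) with X.T @ M @ X = diag(lam) and X.T @ N @ X = I."""
    L = np.linalg.cholesky(N)             # N = L @ L.T
    Q = np.linalg.inv(L)                  # so N^{-1} = Q.T @ Q
    lam, Y = np.linalg.eigh(Q @ M @ Q.T)  # standard symmetric eigenproblem
    X = Q.T @ Y                           # back-transform x = Q.T @ y
    return lam, X

M = np.array([[1.0, 2.0], [2.0, -1.0]])  # symmetric (definiteness not required)
N = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
lam, X = simultaneous_diagonalize(M, N)
print(np.allclose(X.T @ N @ X, np.eye(2)))     # True
print(np.allclose(X.T @ M @ X, np.diag(lam)))  # True
</syntaxhighlight>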
 
Note that this result does not contradict what is said on simultaneous diagonalization in the article [[diagonalizable matrix#Simultaneous diagonalization|Diagonalizable matrix]], which refers to simultaneous diagonalization by a similarity transformation. Our result here is more akin to a simultaneous diagonalization of two quadratic forms, and is useful for optimization of one form under conditions on the other.
== Properties ==
===Convexity===
The set of positive semidefinite symmetric matrices is [[convex set|convex]]. That is, if <math>M</math> and <math>N</math> are positive semidefinite, then for any <math>\alpha</math> between 0 and 1, <math>\alpha M + \left(1 - \alpha\right) N</math> is also positive semidefinite. For any vector <math>\mathbf x</math>:
<math display="block">\mathbf{x}^\operatorname{T} \left(\alpha M + \left(1 - \alpha\right)N\right)\mathbf{x} = \alpha \mathbf{x}^\operatorname{T} M\mathbf{x} + (1 - \alpha) \mathbf{x}^\operatorname{T} N\mathbf{x} \geq 0.</math>
 
This convexity is what makes [[semidefinite programming]] a convex optimization problem, in which any local optimum is a global optimum.
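A quick numerical illustration of the convexity property, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
B1, B2 = rng.normal(size=(2, 4, 4))
M = B1.T @ B1                    # positive semidefinite by construction
N = B2.T @ B2                    # positive semidefinite by construction

for alpha in np.linspace(0.0, 1.0, 11):
    w = np.linalg.eigvalsh(alpha * M + (1 - alpha) * N)
    assert w.min() >= -1e-10     # the convex combination stays positive semidefinite
</syntaxhighlight>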
# If <math>M_k</math> denotes the leading <math>k \times k</math> submatrix, <math>\det\left(M_k\right)/\det\left(M_{k-1}\right)</math> is the ''k''th pivot during [[LU decomposition]].
# A Hermitian matrix is negative definite if and only if its ''k''-th order leading [[principal minor]] is negative when <math>k</math> is odd, and positive when <math>k</math> is even.
# If <math>M</math> is a real positive definite matrix, then there exists a positive real number <math>m</math> such that for every vector <math>\mathbf{v}</math>, <math>\mathbf{v}^\operatorname{T} M \mathbf{v}\geq m\|\mathbf{v}\|_2^2</math>.
# A Hermitian matrix is positive semidefinite if and only if all of its principal minors are nonnegative. It is not enough to consider the leading principal minors only, as can be seen from the diagonal matrix with entries 0 and −1.
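A minimal sketch of the leading-principal-minor tests, assuming NumPy; the helper names are illustrative only, and determinant-based tests like this are simple but not numerically robust for large matrices:

<syntaxhighlight lang="python">
import numpy as np

def leading_minors(M):
    return [np.linalg.det(M[:k, :k]) for k in range(1, len(M) + 1)]

def is_positive_definite(M):
    return all(d > 0 for d in leading_minors(M))

def is_negative_definite(M):
    # the k-th leading minor is negative for odd k and positive for even k
    return all((d < 0) if k % 2 else (d > 0)
               for k, d in enumerate(leading_minors(M), start=1))

print(is_positive_definite(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True
print(is_negative_definite(np.array([[-2.0, 1.0], [1.0, -2.0]])))  # True

# Caveat from the last item: diag(0, -1) has nonnegative *leading* minors
# (0 and 0), yet it is not positive semidefinite.
print(leading_minors(np.diag([0.0, -1.0])))
</syntaxhighlight>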
 
=== Block matrices ===
A positive semi-definite <math>2n \times 2n</math> matrix may also be defined by [[Block matrix|blocks]]:
<math display="block">M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}</math>
where each block is <math>n \times n</math>. By applying the positivity condition, it immediately follows that <math>A</math> and <math>D</math> are Hermitian, and <math>C = B^*</math>.
 
We have that <math>\mathbf{z}^* M\mathbf{z} \ge 0</math> for all complex <math>\mathbf z</math>, and in particular for <math>\mathbf{z} = [\mathbf{v}, 0]^\operatorname{T}</math>. Then
<math display="block">\begin{bmatrix} \mathbf{v}^* & 0 \end{bmatrix} \begin{bmatrix} A & B \\ B^* & D \end{bmatrix} \begin{bmatrix} \mathbf{v} \\ 0 \end{bmatrix} = \mathbf{v}^* A\mathbf{v} \ge 0.</math>
 
 
=== Local extrema ===
A general [[quadratic form]] <math>f(\mathbf{x})</math> on <math>n</math> real variables <math>x_1, \ldots, x_n</math> can always be written as <math>\mathbf{x}^\operatorname{T} M \mathbf{x}</math> where <math>\mathbf{x}</math> is the column vector with those variables, and <math>M</math> is a symmetric real matrix. Therefore, the matrix being positive definite means that <math>f</math> has a unique minimum (zero) when <math>\mathbf{x}</math> is zero, and is strictly positive for any other <math>\mathbf{x}</math>.
 
More generally, a twice-differentiable real function <math>f</math> on <math>n</math> real variables has a local minimum at arguments <math>x_1, \ldots, x_n</math> if its [[gradient]] is zero and its [[Hessian matrix|Hessian]] (the matrix of all second derivatives) is positive definite at that point; positive semi-definiteness of the Hessian at a critical point is necessary but not sufficient. Similar statements can be made for negative definite and semi-definite matrices.
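A small sketch of the second-derivative test, assuming NumPy (the example functions are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# f(x, y) = x**2 + x*y + 2*y**2 has zero gradient at the origin.
H_min = np.array([[2.0, 1.0],
                  [1.0, 4.0]])       # Hessian of f (constant for a quadratic)
print(np.linalg.eigvalsh(H_min))     # both positive: a local (here global) minimum

# g(x, y) = x**2 - y**2 also has zero gradient at the origin.
H_saddle = np.diag([2.0, -2.0])      # indefinite Hessian: a saddle point
print(np.linalg.eigvalsh(H_saddle))
</syntaxhighlight>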
 
== Extension for non-Hermitian square matrices ==
The definition of positive definite can be generalized by designating any complex matrix <math>M</math> (e.g. real non-symmetric) as positive definite if <math>\Re\left(\mathbf{z}^* M\mathbf{z}\right) > 0</math> for all non-zero complex vectors <math>\mathbf z</math>, where <math>\Re(c)</math> denotes the real part of a complex number <math>c</math>.<ref name="mathw">Weisstein, Eric W. ''[http://mathworld.wolfram.com/PositiveDefiniteMatrix.html Positive Definite Matrix.]'' From ''MathWorld--A Wolfram Web Resource''. Accessed on 2012-07-26</ref> Only the Hermitian part <math display="inline">\frac{1}{2}\left(M + M^*\right)</math> determines whether the matrix is positive definite, and it is assessed in the narrower sense above. Similarly, if <math>\mathbf x</math> and <math>M</math> are real, we have <math>\mathbf{x}^\operatorname{T} M \mathbf{x} > 0</math> for all real nonzero vectors <math>\mathbf x</math> if and only if the symmetric part <math display="inline">\frac{1}{2}\left(M + M^\operatorname{T}\right)</math> is positive definite in the narrower sense. It is immediately clear that <math display="inline">\mathbf{x}^\operatorname{T} M \mathbf{x} = \sum_{ij} x_i M_{ij} x_j</math> is insensitive to transposition of <math>M</math>.
 
Consequently, a non-symmetric real matrix with only positive eigenvalues does not need to be positive definite. For example, the matrix <math>M = \left[\begin{smallmatrix} 4 & 9 \\ 1 & 4 \end{smallmatrix}\right]</math> has positive eigenvalues yet is not positive definite; in particular a negative value of <math>\mathbf{x}^\operatorname{T} M\mathbf{x}</math> is obtained with the choice <math>\mathbf{x} = \left[\begin{smallmatrix} -1 \\ 1 \end{smallmatrix}\right] </math> (which is the eigenvector associated with the negative eigenvalue of the symmetric part of {{nowrap|<math>M</math>).}}
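This counterexample is easy to check numerically. A minimal sketch, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

M = np.array([[4.0, 9.0], [1.0, 4.0]])
print(np.linalg.eigvals(M))          # eigenvalues 7 and 1, both positive

x = np.array([-1.0, 1.0])
print(x @ M @ x)                     # -2.0, so M is not positive definite

S = (M + M.T) / 2                    # only the symmetric part matters
print(np.linalg.eigvalsh(S))         # [-1. 9.]: one negative eigenvalue
</syntaxhighlight>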
 
In summary, the distinguishing feature between the real and complex case is that a [[Bounded operator|bounded]] positive operator on a complex Hilbert space is necessarily Hermitian, or self-adjoint. The general claim can be argued using the [[polarization identity]]. That is no longer true in the real case.
== Applications ==
=== Heat conductivity matrix ===
Fourier's law of heat conduction, giving heat flux <math>\mathbf q</math> in terms of the temperature gradient <math>\mathbf g = \nabla T</math>, is written for anisotropic media as <math>\mathbf{q} = -K\mathbf{g}</math>, in which <math>K</math> is the symmetric [[thermal conductivity]] matrix. The negative sign is inserted in Fourier's law to reflect the expectation that heat will always flow from hot to cold. In other words, since the temperature gradient <math>\mathbf g</math> always points from cold to hot, the heat flux <math>\mathbf q</math> is expected to have a negative inner product with <math>\mathbf g</math>, so that <math>\mathbf{q}^\operatorname{T}\mathbf{g} < 0</math>. Substituting Fourier's law then gives this expectation as <math>\mathbf{g}^\operatorname{T} K\mathbf{g} > 0</math>, implying that the conductivity matrix should be positive definite.
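A small numerical illustration, assuming NumPy (the conductivity values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite conductivity matrix

g = np.array([1.0, -3.0])    # a temperature gradient, pointing from cold to hot
q = -K @ g                   # Fourier's law for anisotropic media

print(q @ g)                 # -8.0: heat flows against the gradient, hot to cold
print(g @ K @ g)             #  8.0: the quadratic form is positive
</syntaxhighlight>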
 
== See also ==