With this in mind, the one-to-one change of variable <math>\mathbf{y} = P\mathbf{z}</math> shows that <math>\mathbf{z}^* M\mathbf{z}</math> is real and positive for every complex vector <math>\mathbf{z}</math> if and only if <math>\mathbf{y}^* D\mathbf{y}</math> is real and positive for every <math>\mathbf{y}</math>; in other words, if and only if <math>D</math> is positive definite. For a diagonal matrix, this is true only if each element of the main diagonal (that is, every eigenvalue of <math>M</math>) is positive. Since the [[spectral theorem]] guarantees all eigenvalues of a Hermitian matrix to be real, the positivity of the eigenvalues can be checked using [[Descartes' rule of signs|Descartes' rule of alternating signs]] when the [[characteristic polynomial]] of a real, symmetric matrix <math>M</math> is available.
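The eigenvalue test can be sketched numerically (in Python with NumPy; the matrix below is an assumed example, not taken from the text):

```python
import numpy as np

# An assumed example of a real symmetric matrix (illustrative only).
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# For Hermitian/symmetric input, eigvalsh returns real eigenvalues,
# consistent with the spectral theorem.
eigenvalues = np.linalg.eigvalsh(M)

# M is positive definite exactly when every eigenvalue is strictly positive.
is_positive_definite = bool(np.all(eigenvalues > 0))
```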
==Which matrices are covariance matrices?==
From the identity just above, let <math>\mathbf{b}</math> be a <math>(p \times 1)</math> real-valued vector; then
:<math>\operatorname{var}(\mathbf{b}^{\rm T}\mathbf{X}) = \mathbf{b}^{\rm T} \operatorname{var}(\mathbf{X}) \mathbf{b},\,</math>
which must always be nonnegative, since it is the [[variance#Properties|variance]] of a real-valued random variable, so a covariance matrix is always a [[positive-semidefinite matrix]].
The above argument can be expanded as follows:<math display="block">
\begin{align}
\mathbf{b}^{\rm T} \operatorname{E} \left[(\mathbf{X} - \operatorname{E}[\mathbf{X}]) (\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\rm T}\right] \mathbf{b}
&= \operatorname{E} \left[\mathbf{b}^{\rm T}(\mathbf{X} - \operatorname{E}[\mathbf{X}]) (\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\rm T}\mathbf{b}\right] \\
&= \operatorname{E} \big[\big( \mathbf{b}^{\rm T}(\mathbf{X} - \operatorname{E}[\mathbf{X}]) \big)^2 \big] \geq 0,
\end{align}
</math>where the second equality follows from the observation that <math>\mathbf{b}^{\rm T}(\mathbf{X} - \operatorname{E}[\mathbf{X}])</math> is a scalar, and the inequality holds because the expectation of a squared real quantity is nonnegative.
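The nonnegativity of this quadratic form can be illustrated with a sample covariance matrix (a Python sketch with NumPy; the data are simulated, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated samples of a 3-dimensional random vector X (rows = observations).
X = rng.normal(size=(1000, 3))
cov = np.cov(X, rowvar=False)          # sample covariance matrix (ddof=1)

# For any real vector b, b^T cov b equals the sample variance of b^T X,
# which, being a variance of a real-valued quantity, is nonnegative.
b = rng.normal(size=3)
quadratic_form = b @ cov @ b
variance_of_projection = np.var(X @ b, ddof=1)
```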
Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose <math>M</math> is a <math>p \times p</math> symmetric positive-semidefinite matrix. From the finite-dimensional case of the [[spectral theorem]], it follows that <math>M</math> has a nonnegative symmetric [[Square root of a matrix|square root]], which can be denoted by <math>M^{1/2}</math>. Let <math>\mathbf{X}</math> be any <math>p \times 1</math> column-vector-valued random variable whose covariance matrix is the <math>p \times p</math> identity matrix. Then
:<math>\operatorname{var}(M^{1/2} \mathbf{X}) = M^{1/2} \, \operatorname{var}(\mathbf{X}) \, M^{1/2} = M.</math>
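This construction can be sketched numerically: take the symmetric square root of an assumed positive-semidefinite <math>M</math> and transform a random vector with identity covariance (Python with NumPy; all specific values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# An assumed symmetric positive-semidefinite matrix M (illustrative only).
M = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Symmetric square root via the spectral theorem: M = V diag(vals) V^T,
# so M^{1/2} = V diag(sqrt(vals)) V^T.
vals, vecs = np.linalg.eigh(M)
M_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T

# X has identity covariance (up to sampling error); Y = M^{1/2} X then
# has covariance M^{1/2} I M^{1/2} = M.
X = rng.normal(size=(200_000, 2))
Y = X @ M_half.T
sample_cov = np.cov(Y, rowvar=False)
```

The sample covariance of <code>Y</code> approaches <math>M</math> as the number of samples grows.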
==Decomposition==
<math>M</math> is positive definite if and only if such a decomposition exists with <math>B</math> [[Invertible matrix|invertible]].
More generally, <math>M</math> is positive semidefinite with rank <math>k</math> if and only if a decomposition exists with a <math>k \times n</math> matrix <math>B</math> of full row rank (i.e. of rank <math>k</math>).
Moreover, for any decomposition <math>M = B^* B</math>, <math>\operatorname{rank}(M) = \operatorname{rank}(B)</math>.
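The rank statement can be checked on a small example (Python with NumPy; the matrix <math>B</math> below is an assumed illustration):

```python
import numpy as np

# An assumed 2x4 real matrix B of full row rank (rank 2).
B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

# M = B^* B is 4x4 positive semidefinite; since B is real, the
# conjugate transpose is just the transpose.
M = B.T @ B

# rank(M) = rank(B), here equal to 2, not the full dimension 4.
rank_M = np.linalg.matrix_rank(M)
rank_B = np.linalg.matrix_rank(B)
```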
In other words, a Hermitian matrix <math>M</math> is positive semidefinite if and only if it is the [[Gram matrix]] of some vectors <math>b_1,\dots,b_n</math>.
It is positive definite if and only if it is the Gram matrix of some [[linearly independent]] vectors.
In general, the rank of the Gram matrix of vectors <math>b_1,\dots,b_n</math> equals the dimension of the space [[Linear span|spanned]] by these vectors.
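The Gram-matrix characterization can likewise be illustrated with three vectors having one linear dependence: their Gram matrix is positive semidefinite but singular, and its rank equals the dimension of their span (a Python sketch with NumPy; the vectors are an assumed example):

```python
import numpy as np

# Three assumed vectors in R^3; b3 = b1 + b2, so the span is 2-dimensional.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
b3 = b1 + b2
B = np.column_stack([b1, b2, b3])

# Gram matrix of pairwise inner products <b_i, b_j>.
G = B.T @ B

# G is positive semidefinite (eigenvalues >= 0) but not positive definite,
# since the vectors are linearly dependent; rank(G) = dim of the span = 2.
eigenvalues = np.linalg.eigvalsh(G)
rank_G = np.linalg.matrix_rank(G)
```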
== Other characterizations ==