Projection matrix

Suppose that we wish to estimate a linear model using linear least squares. The model can be written as
:<math>\mathbf{y} = X \boldsymbol \beta + \boldsymbol \varepsilon,</math>
where ''X'' is a matrix of explanatory variables (the [[design matrix]]), '''''β''''' is a vector of unknown parameters to be estimated, and '''''ε''''' is the error vector.
== Uncorrelated residuals ==
For uncorrelated residuals, the estimated parameters are
:<math>\hat{\boldsymbol{\beta}} = \left(X^\top X \right)^{-1} X^\top \mathbf{y},</math>
so the fitted values are
:<math>\mathbf{\hat{y}} = X \hat{\boldsymbol{\beta}} = X \left(X^\top X \right)^{-1} X^\top \mathbf{y} = H \mathbf{y},</math>
where the hat matrix is <math>H = X \left(X^\top X \right)^{-1} X^\top</math>.
The formula for the vector of residuals '''r''' can be expressed compactly using the hat matrix:
:<math>\mathbf{r} = \mathbf{y} - \mathbf{\hat{y}} = \mathbf{y} - H \mathbf{y} = (I - H) \mathbf{y}.</math>
The [[covariance matrix]] of the residuals is therefore, by [[error propagation]], equal to <math>\left(I-H \right)^\top \Sigma \left(I-H \right)</math>, where &Sigma; is the covariance matrix of the errors (and by extension, the observations as well).
For the case of linear models with [[independent and identically distributed]] errors in which &Sigma; = ''σ''<sup>2</sup>''I'', this reduces to (''I''&nbsp;&minus;&nbsp;''H'')''σ''<sup>2</sup>.<ref name="Hoaglin1977"/>
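The relations above can be checked numerically. The following is a minimal sketch using NumPy; the design matrix, observations, and ''σ''<sup>2</sup> are arbitrary illustrative choices, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.normal(size=(n, p))          # design matrix (arbitrary example)
y = rng.normal(size=n)               # observations (arbitrary example)

# Hat matrix H = X (X^T X)^{-1} X^T
H = X @ np.linalg.inv(X.T @ X) @ X.T

y_hat = H @ y                        # fitted values
r = (np.eye(n) - H) @ y              # residuals r = (I - H) y

# H is symmetric and idempotent: H^2 = H
assert np.allclose(H, H.T)
assert np.allclose(H @ H, H)

# With Sigma = sigma^2 I, the residual covariance (I-H)^T Sigma (I-H)
# reduces to (I - H) sigma^2
sigma2 = 2.5
cov_r = (np.eye(n) - H).T @ (sigma2 * np.eye(n)) @ (np.eye(n) - H)
assert np.allclose(cov_r, sigma2 * (np.eye(n) - H))
```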
 
For [[linear models]], the [[trace (linear algebra)|trace]] of the hat matrix is equal to the [[rank (linear algebra)|rank]] of ''X'', which is the number of independent parameters of the linear model.
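The trace identity can likewise be verified numerically; this sketch uses an arbitrary NumPy matrix that has full column rank almost surely:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))   # random 10x4 matrix; rank 4 almost surely
H = X @ np.linalg.inv(X.T @ X) @ X.T

# The trace of the hat matrix equals the rank of X,
# i.e. the number of independent parameters of the model.
assert abs(np.trace(H) - np.linalg.matrix_rank(X)) < 1e-8
```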
 
== Correlated residuals ==
The above may be generalized to the case of correlated residuals. Suppose that the [[covariance matrix]] of the errors is &Sigma;. Then since
 
:<math> \hat{\boldsymbol{\beta}} = \left(X^\top \Sigma^{-1} X \right)^{-1} X^\top \Sigma^{-1}\,\mathbf{y}, </math>
 
the hat matrix is thus
 
:<math> H = X \left(X^\top \Sigma^{-1} X\right)^{-1} X^\top \Sigma^{-1}, </math>
 
and again it may be seen that ''H''<sup>2</sup> = ''H''.
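The correlated case can be sketched the same way; here &Sigma; is an arbitrary symmetric positive-definite matrix constructed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 2
X = rng.normal(size=(n, p))

# Construct an arbitrary symmetric positive-definite error covariance Sigma
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)
Sigma_inv = np.linalg.inv(Sigma)

# Generalized hat matrix H = X (X^T Sigma^{-1} X)^{-1} X^T Sigma^{-1}
H = X @ np.linalg.inv(X.T @ Sigma_inv @ X) @ X.T @ Sigma_inv

# H is still idempotent (H^2 = H), though in general no longer symmetric
assert np.allclose(H @ H, H)

# H still projects onto the column space of X: H X = X
assert np.allclose(H @ X, X)
```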