:<math>\mathbf{y} = X \boldsymbol \beta + \boldsymbol \varepsilon,</math>
where ''X'' is a matrix of explanatory variables (the [[design matrix]]), '''''β''''' is a vector of unknown parameters to be estimated, and '''''ε''''' is the error vector.
== Uncorrelated residuals ==
The [[ordinary least squares]] estimate of '''''β''''' is
:<math>\hat{\boldsymbol\beta} = \left(X^\top X \right)^{-1} X^\top \mathbf{y}</math>
(note that <math>\left(X^\top X \right)^{-1} X^\top</math> is the pseudoinverse of ''X''), so the fitted values are
:<math>\mathbf{\hat{y}} = X \hat{\boldsymbol\beta} = X \left(X^\top X \right)^{-1} X^\top \mathbf{y} = H \mathbf{y},</math>
where <math>H = X \left(X^\top X \right)^{-1} X^\top</math> is the hat matrix.
The formula for the vector of residuals '''r''' can be expressed compactly using the hat matrix:
:<math>\mathbf{r} = \mathbf{y} - \mathbf{\hat{y}} = \mathbf{y} - H \mathbf{y} = (I - H) \mathbf{y}.</math>
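As a concrete check (a minimal NumPy sketch, not part of the article; the small random design matrix is illustrative), the hat matrix can be formed directly and the identities above verified numerically:

```python
import numpy as np

# Build a small design matrix (intercept plus one regressor) and a response,
# form the hat matrix H = X (X^T X)^{-1} X^T, and verify that H y gives the
# fitted values and (I - H) y gives the least-squares residuals.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.standard_normal(5)])
y = rng.standard_normal(5)

H = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = H @ y                        # fitted values
r = (np.eye(5) - H) @ y              # residuals

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(H @ H, H)              # H is idempotent
assert np.allclose(H, H.T)                # H is symmetric
assert np.allclose(y_hat, X @ beta_hat)   # H y equals the fitted values
assert np.allclose(r, y - X @ beta_hat)   # (I - H) y equals the residuals
```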
The [[covariance matrix]] of the residuals is, by [[propagation of uncertainty|error propagation]],
:<math>\Sigma_\mathbf{r} = (I - H)\, \Sigma\, (I - H)^\top,</math>
where Σ is the covariance matrix of the errors. For the case of linear models with [[independent and identically distributed]] errors in which <math>\Sigma = \sigma^2 I</math>, this reduces to
:<math>\Sigma_\mathbf{r} = \sigma^2 (I - H),</math>
using the fact that ''I''&nbsp;−&nbsp;''H'' is symmetric and [[idempotent matrix|idempotent]].
For [[linear models]], the [[trace (linear algebra)|trace]] of the hat matrix is equal to the [[rank (linear algebra)|rank]] of ''X'', which is the number of independent parameters of the linear model.
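The trace–rank identity can be checked numerically (an assumed illustrative example, not from the article; the pseudoinverse is used so the check also covers a rank-deficient design):

```python
import numpy as np

# The trace of the hat matrix equals the rank of X, even when X has
# linearly dependent columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
X = np.column_stack([A, A[:, 0] + A[:, 1]])   # third column is a linear combination

# With a rank-deficient X, H = X X^+ (pseudoinverse) still projects onto col(X).
H = X @ np.linalg.pinv(X)

assert np.isclose(np.trace(H), np.linalg.matrix_rank(X))  # trace(H) == rank(X) == 2
```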
== Correlated residuals ==
The above may be generalized to the case of correlated residuals. Suppose that the [[covariance matrix]] of the errors is Σ. Then since the [[generalized least squares]] estimate of '''''β''''' is
:<math> \hat{\boldsymbol{\beta}} = \left(X^\top \Sigma^{-1} X \right)^{-1} X^\top \Sigma^{-1} \mathbf{y},</math>
the hat matrix is thus
:<math> H = X \left(X^\top \Sigma^{-1} X \right)^{-1} X^\top \Sigma^{-1},</math>
and again it may be seen that ''H''<sup>2</sup> = ''H'', though now ''H'' is no longer symmetric in general.
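Both properties of the generalized hat matrix can be verified on a small example (a hedged NumPy sketch; the random positive-definite Σ is an assumption chosen for illustration):

```python
import numpy as np

# Correlated-errors case: with error covariance Sigma, the GLS hat matrix
# H = X (X^T Sigma^{-1} X)^{-1} X^T Sigma^{-1} is still idempotent,
# but (unlike the OLS case) generally not symmetric.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(5), rng.standard_normal(5)])
L = rng.standard_normal((5, 5))
Sigma = L @ L.T + 5 * np.eye(5)     # an arbitrary positive-definite covariance
Si = np.linalg.inv(Sigma)

H = X @ np.linalg.inv(X.T @ Si @ X) @ X.T @ Si

assert np.allclose(H @ H, H)        # idempotent: H^2 = H
assert not np.allclose(H, H.T)      # no longer symmetric for this Sigma
```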