Therefore the hat matrix is given by
:<math>H = X \left(X^\top X \right)^{-1} X^\top.</math>
In the language of [[linear algebra]], the hat matrix is the [[orthogonal projection]] onto the [[column space]] of the design matrix ''X''. (Note that <math>\left(X^\top X \right)^{-1} X^\top</math> is the [[Moore–Penrose_pseudoinverse#Full_rank|pseudoinverse of X]].)
The hat matrix corresponding to a [[linear model]] is [[symmetric matrix|symmetric]] and [[idempotent]], that is, ''H''<sup>2</sup> = ''H''. However, this is not always the case; in [[local regression|locally weighted scatterplot smoothing (LOESS)]], for example, the hat matrix is in general neither symmetric nor idempotent.
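The formula and properties above can be checked numerically. The sketch below uses a small, made-up design matrix (the data are illustrative, not from the article) and verifies that ''H'' is symmetric and idempotent:

```python
import numpy as np

# Hypothetical design matrix: 5 observations, an intercept column
# plus one predictor (illustrative data only).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])

# Hat matrix H = X (X^T X)^{-1} X^T
H = X @ np.linalg.inv(X.T @ X) @ X.T

# For a linear model, H is symmetric and idempotent (H @ H == H).
print(np.allclose(H, H.T))    # symmetric: True
print(np.allclose(H @ H, H))  # idempotent: True
```

Applying ''H'' to the response vector ''y'' yields the fitted values, which is why it "puts a hat on" ''y''.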