In the language of [[linear algebra]], the hat matrix is the [[orthogonal projection]] onto the [[column space]] of the design matrix '''X'''.
The hat matrix corresponding to a [[linear model]] is [[symmetric matrix|symmetric]] and [[idempotent]], that is, '''H'''<sup>2</sup> = '''H'''. However, this is not always the case; in [[local regression|locally weighted scatterplot smoothing (LOESS)]], for example, the hat matrix is in general neither symmetric nor idempotent.
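Both properties can be verified directly for the least-squares case, assuming the usual form '''H''' = '''X'''('''X'''<sup>T</sup>'''X''')<sup>−1</sup>'''X'''<sup>T</sup> with '''X''' of full column rank:
:<math>\mathbf{H}^2 = \mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T} = \mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T} = \mathbf{H}, \qquad \mathbf{H}^\mathsf{T} = \left(\mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\right)^\mathsf{T} = \mathbf{X}\left((\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\right)^\mathsf{T}\mathbf{X}^\mathsf{T} = \mathbf{H},</math>
where the second identity uses the symmetry of <math>(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}</math>.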
The formula for the vector of residuals '''r''' can be expressed compactly using the hat matrix:
:<math>\mathbf{r} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - \mathbf{H}\mathbf{y} = (\mathbf{I} - \mathbf{H})\mathbf{y}.</math>
The covariance matrix of the residuals is therefore, by [[propagation of uncertainty|error propagation]], equal to ('''I''' − '''H''')'''V'''('''I''' − '''H''')<sup>T</sup>, where '''V''' is the covariance matrix of the error vector.
For the case of linear models with [[independent and identically distributed]] errors in which '''V''' = σ<sup>2</sup>'''I''', this reduces to ('''I''' − '''H''')σ<sup>2</sup>.<ref name="Hoaglin1977"/>
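This reduction is a one-line consequence of the symmetry and idempotence of '''H''' for linear models:
:<math>(\mathbf{I}-\mathbf{H})\,\sigma^2\mathbf{I}\,(\mathbf{I}-\mathbf{H})^\mathsf{T} = \sigma^2(\mathbf{I}-\mathbf{H})(\mathbf{I}-\mathbf{H}) = \sigma^2(\mathbf{I}-\mathbf{H}),</math>
since '''I''' − '''H''' inherits both properties from '''H'''.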
For [[linear models]], the [[trace (linear algebra)|trace]] of the hat matrix is equal to the rank of '''X''', which is the number of independent parameters of the linear model.
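A short verification, again assuming the least-squares form of '''H''' with '''X''' an ''n'' × ''p'' matrix of full column rank ''p'', uses the cyclic property of the trace:
:<math>\operatorname{tr}(\mathbf{H}) = \operatorname{tr}\!\left(\mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\right) = \operatorname{tr}\!\left((\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{X}\right) = \operatorname{tr}(\mathbf{I}_p) = p.</math>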
For other models such as LOESS that are still linear in the observations '''y''',
the hat matrix can be used to define the [[degrees of freedom (statistics)#Effective degrees of freedom|''effective degrees of freedom'']] of the model.
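For such a linear smoother, with fitted values <math>\hat{\mathbf{y}} = \mathbf{H}\mathbf{y}</math>, the effective degrees of freedom is commonly defined as the trace of the hat matrix,
:<math>\operatorname{df}_{\text{eff}} = \operatorname{tr}(\mathbf{H}),</math>
which reduces to the number of parameters ''p'' when '''H''' is an exact projection as above.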
Some other properties of the hat matrix are summarized in Gans (1992).<ref>P. Gans, ''Data Fitting in the Chemical Sciences'', Wiley, 1992.</ref>
== See also ==