In [[statistics]], the '''hat matrix''', '''''H''''', relates the [[fitted value]]s to the observed values: it maps the vector of observed values to the vector of fitted values.
{{Citation| title = The Hat Matrix in Regression and ANOVA
| first1= David C. | last1= Hoaglin |first2= Roy E. | last2=Welsch}}
Suppose that we wish to solve a [[linear model]] using [[linear least squares]]. The model can be written as
:<math>\mathbf{y} = X \boldsymbol \beta + \boldsymbol \varepsilon,</math>
where ''X'' is a matrix of [[explanatory variable]]s (the [[design matrix]]), '''''&beta;''''' is a vector of unknown parameters to be estimated, and '''''&epsilon;''''' is the error vector.
== Uncorrelated errors ==
For uncorrelated [[errors and residuals in statistics|errors]], the estimated parameters are
:<math>\hat{\boldsymbol \beta} = (X^\top X)^{-1} X^\top \mathbf{y},</math>
so the fitted values are
:<math>\mathbf{\hat{y}} = X \hat{\boldsymbol \beta} = X (X^\top X)^{-1} X^\top \mathbf{y},</math>
and the hat matrix is
:<math>H = X (X^\top X)^{-1} X^\top.</math>
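The ordinary least squares fit described above can be sketched numerically. The following is a minimal illustration using NumPy, with a small hypothetical data set (the design matrix, coefficients, and noise scale are assumptions for the example, not from the text); it forms the hat matrix and checks that applying it to '''y''' reproduces the fitted values from a standard least-squares solve.

```python
import numpy as np

# Hypothetical data: intercept plus one explanatory variable.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
X = np.column_stack([np.ones(20), x])       # design matrix
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=20)

# Hat matrix H = X (X^T X)^{-1} X^T
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Fitted values via the hat matrix: y_hat = H y
y_hat = H @ y

# Same fit obtained directly by least squares, for comparison
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(y_hat, X @ beta_hat))
```

For a full-rank design matrix, the trace of ''H'' equals the number of estimated parameters (here 2), which is one way the hat matrix is used to count degrees of freedom.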
The hat matrix corresponding to a [[linear model]] is [[symmetric matrix|symmetric]] and [[idempotent]], that is, ''H''<sup>2</sup> = ''H''. However, this is not always the case; in [[local regression|locally weighted scatterplot smoothing (LOESS)]], for example, the hat matrix is in general neither symmetric nor idempotent.
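The symmetry and idempotency of the linear-model hat matrix are easy to verify numerically. A short sketch (the random full-rank design matrix is an assumption chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 3))                # hypothetical full-rank design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix

print(np.allclose(H, H.T))                  # symmetric: H = H^T
print(np.allclose(H @ H, H))                # idempotent: H^2 = H
```

Both checks print <code>True</code> up to floating-point tolerance, consistent with ''H'' being an orthogonal projection onto the column space of ''X''.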
The formula for the vector of [[errors and residuals in statistics|residuals]] '''r''' can also be expressed compactly using the hat matrix:
:<math>\mathbf{r} = \mathbf{y} - \mathbf{\hat{y}} = \mathbf{y} - H \mathbf{y} = (I - H) \mathbf{y}.</math>
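The residual formula above can likewise be checked numerically. In this sketch (again with hypothetical data), the residuals computed as (''I'' &minus; ''H'')'''y''' are verified to be orthogonal to the columns of ''X'', as expected for a least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=12)
X = np.column_stack([np.ones(12), x])       # hypothetical design matrix
y = 1.0 - 0.5 * x + rng.normal(scale=0.2, size=12)

H = X @ np.linalg.inv(X.T @ X) @ X.T
r = (np.eye(12) - H) @ y                    # residuals r = (I - H) y

# Residuals lie in the orthogonal complement of the column space of X
print(np.allclose(X.T @ r, 0))
```

The matrix ''I'' &minus; ''H'' is itself symmetric and idempotent, projecting onto the space orthogonal to the columns of ''X''.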