Projection matrix

In [[statistics]], the '''hat matrix''', '''''H''''', relates the [[fitted value]]s to the [[observed value]]s. It describes the influence each observed value has on each fitted value.<ref name="Hoaglin1977">
{{Citation | title = The Hat Matrix in Regression and ANOVA
| first1 = David C. | last1 = Hoaglin | first2 = Roy E. | last2 = Welsch
| journal = The American Statistician | volume = 32 | issue = 1 | year = 1978 | pages = 17–22}}</ref>

== Linear model ==
Suppose that we wish to solve a [[linear model]] using [[linear least squares]]. The model can be written as
:<math>\mathbf{y} = X \boldsymbol \beta + \boldsymbol \varepsilon,</math>
where ''X'' is a matrix of [[explanatory variable]]s (the [[design matrix]]), '''''β''''' is a vector of unknown parameters to be estimated, and '''''ε''''' is the error vector.

== Uncorrelated errors ==
For uncorrelated [[errors and residuals in statistics|errors]], the estimated parameters are
:<math>\hat{\boldsymbol \beta} = \left( X^\mathsf{T} X \right)^{-1} X^\mathsf{T} \mathbf{y},</math>
so the fitted values are
:<math>\mathbf{\hat{y}} = X \hat{\boldsymbol \beta} = X \left( X^\mathsf{T} X \right)^{-1} X^\mathsf{T} \mathbf{y}.</math>
Therefore, the hat matrix is given by
:<math>H = X \left( X^\mathsf{T} X \right)^{-1} X^\mathsf{T}.</math>
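
As a numerical illustration, these formulas can be evaluated directly with NumPy. This is a minimal sketch; the design matrix and response below are hypothetical values chosen purely for demonstration:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical example data: an intercept column plus one regressor.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0, 4.0])

# Estimated parameters: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Hat matrix: H = X (X^T X)^{-1} X^T, so that y_hat = H y.
H = X @ np.linalg.solve(X.T @ X, X.T)

print(np.allclose(H @ y, X @ beta_hat))  # True: H maps y to the fitted values
</syntaxhighlight>
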
The hat matrix corresponding to a [[linear model]] is [[symmetric matrix|symmetric]] and [[idempotent]], that is, ''H''<sup>2</sup> = ''H''. However, this is not the case for other smoothing methods; in [[local regression|locally weighted scatterplot smoothing (LOESS)]], for example, the hat matrix is in general neither symmetric nor idempotent.
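
Both properties can be verified numerically. The sketch below uses an arbitrary, hypothetical full-rank design matrix, since the properties hold for any linear model:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))        # arbitrary full-rank design matrix
H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix H = X (X^T X)^{-1} X^T

print(np.allclose(H, H.T))    # True: H is symmetric
print(np.allclose(H @ H, H))  # True: H is idempotent, H^2 = H
</syntaxhighlight>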
 
The vector of [[residual]]s '''r''' can be expressed compactly using the hat matrix:
 
:<math>\mathbf{r} = \mathbf{y} - \mathbf{\hat{y}} = \mathbf{y} - H \mathbf{y} = (I - H) \mathbf{y}.</math>
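
As a sketch of this identity (again with an arbitrary, hypothetical design matrix and response), the residuals obtained from (''I'' − ''H'')'''y''' agree with those of a direct least-squares fit:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2))
y = rng.standard_normal(8)

H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix
r = (np.eye(len(y)) - H) @ y           # residuals via r = (I - H) y

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(r, y - X @ beta_hat))  # True: matches direct fit residuals
</syntaxhighlight>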