The '''hat matrix''', '''H''', is used in [[statistics]] to relate the [[errors and residuals in statistics|residuals]] of a fitted model to the [[observational error|experimental errors]]. It maps the vector of observed values onto the vector of fitted values, and so is said to put a "hat" on '''y'''. Suppose that a [[linear least squares]] problem is being addressed. The model can be written as
:<math>\mathbf{y^{calc}=Jp},</math>
where '''J''' is a matrix of coefficients and '''p''' is the vector of parameters to be estimated. The solution to the unweighted least-squares equations is given by
:<math>\mathbf{p=\left(J^\top J \right)^{-1} J^\top y^{obs}}.</math>
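This expression follows from minimizing the sum of squared residuals: setting the gradient of <math>\mathbf{\left(y^{obs}-Jp\right)^\top \left(y^{obs}-Jp\right)}</math> with respect to '''p''' to zero gives the normal equations
:<math>\mathbf{J^\top J p = J^\top y^{obs}},</math>
which have the solution above whenever <math>\mathbf{J^\top J}</math> is invertible.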
The vector of unweighted residuals, '''r''', is given by
:<math>\mathbf {r=y^{obs}-y^{calc}=y^{obs}-J \left(J^\top J \right)^{-1} J^\top y^{obs}}.</math>
The matrix <math>\mathbf {H = J \left(J^\top J \right)^{-1} J^\top }</math> is known as the hat matrix. Thus, the residuals can be expressed simply as
:<math>\mathbf{r=\left(I-H \right) y^{obs}}.</math>
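As a concrete illustration, the following sketch forms '''H''' numerically and confirms that the two expressions for the residuals coincide. The straight-line data and the use of NumPy are assumptions of this sketch, not part of the definition above.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative data (an assumption of this sketch): a straight line
# y = 2 + 3x observed with a little noise at six points.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)
y_obs = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=x.size)

# Design matrix J: one column of coefficients per parameter
# (intercept and slope).
J = np.column_stack([np.ones_like(x), x])

# p = (J^T J)^{-1} J^T y_obs, solved without forming the inverse.
p = np.linalg.solve(J.T @ J, J.T @ y_obs)

# Hat matrix H = J (J^T J)^{-1} J^T.
H = J @ np.linalg.solve(J.T @ J, J.T)

# r = y_obs - y_calc and r = (I - H) y_obs give the same residuals.
r = y_obs - J @ p
assert np.allclose(r, (np.eye(x.size) - H) @ y_obs)
</syntaxhighlight>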
The hat matrix corresponding to a [[linear model]] is [[symmetric]] and [[idempotent]], that is, <math>\mathbf {HH=H}</math>; a direct verification is given below. This does not carry over to all smoothers, however; for example, the [[local regression|LOESS]] hat matrix is generally neither symmetric nor idempotent.
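Idempotency follows directly from the definition, because the inner factors cancel:
:<math>\mathbf{HH = J \left(J^\top J \right)^{-1} J^\top J \left(J^\top J \right)^{-1} J^\top = J \left(J^\top J \right)^{-1} J^\top = H}.</math>
Symmetry follows because <math>\mathbf{\left(J^\top J \right)^{-1}}</math> is itself symmetric, so <math>\mathbf{H^\top = H}</math>.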