Projection matrix

This is an old revision of this page, as edited by Petergans (talk | contribs) at 08:22, 4 February 2008 (Added reference to eigenvalues).

The hat matrix, H, is used in statistics to relate errors in residuals to experimental errors. Suppose that a linear least squares problem is being addressed. The model can be written as

y = Jp,

where J is a matrix of coefficients, p is a vector of parameters and y is the vector of observations. The solution to the un-weighted least-squares equations is given by

\hat{p} = \left(J^T J\right)^{-1} J^T y.

The vector of un-weighted residuals, r, is given by

r = y - J\hat{p} = \left(I - J \left(J^T J\right)^{-1} J^T\right) y.

The matrix H = J \left(J^T J\right)^{-1} J^T is known as the hat matrix. Thus, the residuals can be expressed simply as

r = (I - H) y.
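As a concrete illustration, the hat matrix and residuals can be computed with NumPy. This is only a sketch: the coefficient matrix J (a straight-line fit design) and the observations y are invented example data, not taken from the text.

```python
# Sketch of the hat matrix for a small un-weighted least-squares problem.
# J and y are invented example data (a straight-line fit with 4 points).
import numpy as np

J = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # matrix of coefficients
y = np.array([1.0, 2.2, 2.9, 4.1])  # vector of observations

# Hat matrix H = J (J^T J)^{-1} J^T
H = J @ np.linalg.inv(J.T @ J) @ J.T

# Least-squares parameters and residuals
p_hat = np.linalg.solve(J.T @ J, J.T @ y)
r = y - J @ p_hat

# The residuals are also given by r = (I - H) y
assert np.allclose(r, (np.eye(4) - H) @ y)
```

The name "hat matrix" comes from the fact that H puts the hat on y: the fitted values are \hat{y} = J\hat{p} = Hy.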

The hat matrix corresponding to a linear model is symmetric and idempotent, that is, H^2 = H. However, this is not always the case; for example, the LOESS hat matrix is in general neither symmetric nor idempotent.
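The symmetry and idempotence of a linear-model hat matrix can be checked numerically; in this sketch J is an arbitrary full-column-rank matrix chosen for the example.

```python
# Sketch: checking that a linear-model hat matrix is symmetric and
# idempotent.  J is an arbitrary example matrix (assumed full column rank).
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 3))        # example coefficient matrix
H = J @ np.linalg.inv(J.T @ J) @ J.T   # hat matrix

assert np.allclose(H, H.T)    # symmetric: H^T = H
assert np.allclose(H @ H, H)  # idempotent: H^2 = H
```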

The variance-covariance matrix of the residuals is, by error propagation, equal to (I - H) M (I - H)^T, where M is the variance-covariance matrix of the errors (and, by extension, of the observations as well). Thus, the residual sum of squares r^T r = y^T (I - H) y is a quadratic form in the observations.
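The quadratic-form identity follows because I - H is itself symmetric and idempotent, so r^T r = y^T (I - H)^T (I - H) y = y^T (I - H) y. A numerical check, again with invented example data:

```python
# Sketch: the residual sum of squares as a quadratic form in the
# observations, r^T r = y^T (I - H) y.  J and y are invented example data.
import numpy as np

J = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.2, 2.9, 4.1])

H = J @ np.linalg.inv(J.T @ J) @ J.T
I = np.eye(len(y))
r = (I - H) @ y

# I - H is symmetric and idempotent, so r^T r = y^T (I - H) y
assert np.isclose(r @ r, y @ (I - H) @ y)
```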

The eigenvalues of an idempotent matrix are equal to 1 or 0.[1] Some other useful properties of the hat matrix are summarized in [2].
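The eigenvalue property can also be verified numerically. In this sketch, J is an arbitrary 5×2 example matrix, so H has rank 2 and its eigenvalues should be 1 (twice) and 0 (three times).

```python
# Sketch: the eigenvalues of an idempotent hat matrix are 0 or 1.
# J is an arbitrary example matrix (assumed full column rank, rank 2).
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((5, 2))
H = J @ np.linalg.inv(J.T @ J) @ J.T

eig = np.linalg.eigvalsh(H)  # H is symmetric, so eigvalsh applies
# Eigenvalues are (numerically) 0 or 1; the number of 1s equals rank(J)
assert np.allclose(np.sort(eig), [0, 0, 0, 1, 1])
```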

See also

Studentized residuals

References

  1. ^ C. B. Read, Encyclopedia of Statistical Sciences, Idempotent Matrices, Wiley, 2006
  2. ^ P. Gans, Data Fitting in the Chemical Sciences, Wiley, 1992.