Projection matrix: Difference between revisions
where, e.g., <math>\mathbf{P}[\mathbf{A}] = \mathbf{A} \left(\mathbf{A}^\textsf{T} \mathbf{A} \right)^{-1} \mathbf{A}^\textsf{T}</math> and <math>\mathbf{M}[\mathbf{A}] = \mathbf{I} - \mathbf{P}[\mathbf{A}]</math>.
There are a number of applications of such a decomposition. In the classical application <math>\mathbf{A}</math> is a column of all ones, which allows one to analyze the effects of adding an intercept term to a regression. Another use is in the [[fixed effects model]], where <math>\mathbf{A}</math> is a large [[sparse matrix]] of the dummy variables for the fixed effect terms. One can use this partition to compute the hat matrix of <math>\mathbf{X}</math> without explicitly forming the matrix <math>\mathbf{X}</math>, which might be too large to fit into computer memory.
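The blockwise identity above can be checked numerically. The following is a minimal NumPy sketch (function and variable names are illustrative, not from the text): for <math>\mathbf{X} = [\mathbf{A} \ \mathbf{B}]</math>, it verifies that <math>\mathbf{P}[\mathbf{X}] = \mathbf{P}[\mathbf{A}] + \mathbf{P}[\mathbf{M}[\mathbf{A}]\mathbf{B}]</math>, using the classical case where <math>\mathbf{A}</math> is a column of ones (an intercept term).

```python
import numpy as np

def proj(A):
    """Orthogonal projection onto the column space of A: A (A^T A)^{-1} A^T."""
    return A @ np.linalg.solve(A.T @ A, A.T)

rng = np.random.default_rng(0)
A = np.ones((6, 1))              # classical case: a column of all ones
B = rng.standard_normal((6, 2))  # remaining regressors
X = np.hstack([A, B])

# Hat matrix formed directly from X.
P_direct = proj(X)

# Blockwise decomposition: P[X] = P[A] + P[M[A] B].
M_A = np.eye(6) - proj(A)        # annihilator of A
P_block = proj(A) + proj(M_A @ B)

assert np.allclose(P_direct, P_block)
```

In the fixed-effects application, one would never form <math>\mathbf{X}</math> or the dense projections explicitly; the point of the decomposition is that applying <math>\mathbf{M}[\mathbf{A}]</math> for a sparse dummy-variable matrix <math>\mathbf{A}</math> reduces to cheap group-wise demeaning.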
==History==
The hat matrix was introduced by John Wilder Tukey in 1972. An article by Hoaglin, D.C. and Welsch, R.E. (1978) gives the properties of the matrix and many examples of its application.
 
== See also ==