The projection matrix corresponding to a [[linear model]] is [[symmetric matrix|symmetric]] (<math>\mathbf{P}^\mathsf{T} = \mathbf{P}</math>) and [[idempotent matrix|idempotent]] (<math>\mathbf{P}^2 = \mathbf{P}</math>). However, this is not always the case; in [[local regression|locally weighted scatterplot smoothing (LOESS)]], for example, the hat matrix is in general neither symmetric nor idempotent.
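The two properties above can be checked numerically. The sketch below uses a small, hypothetical design matrix (not from the article) and forms the hat matrix of an ordinary linear model directly:

```python
import numpy as np

# Hypothetical design matrix for illustration: n = 5 observations,
# an intercept column and one regressor.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])

# Hat matrix of the linear model: P = X (X^T X)^{-1} X^T.
P = X @ np.linalg.inv(X.T @ X) @ X.T

# Symmetric and idempotent (P^2 = P), up to floating-point error.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)
```

For a LOESS smoother the analogous matrix of smoothing weights would generally fail both assertions.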
For [[linear models]], the [[trace (linear algebra)|trace]] of the projection matrix is equal to the [[rank (linear algebra)|rank]] of <math>\mathbf{X}</math>, which is the number of independent parameters of the linear model.<ref>[https://math.stackexchange.com/questions/1582567/proof-that-trace-of-hat-matrix-in-linear-regression-is-rank-of-x Proof that trace of 'hat' matrix in linear regression is rank of X], Mathematics Stack Exchange.</ref> For other models such as LOESS that are still linear in the observations <math>\mathbf{y}</math>, the projection matrix can be used to define the [[degrees of freedom (statistics)#Effective degrees of freedom|effective degrees of freedom]] of the model.
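The trace identity follows from the cyclic property of the trace: <math>\operatorname{tr}(\mathbf{X}(\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}) = \operatorname{tr}((\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{X}) = \operatorname{tr}(\mathbf{I}_p) = p</math>. A minimal numerical check, again with a hypothetical full-rank design matrix:

```python
import numpy as np

# Hypothetical design matrix with p = 2 independent columns.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

P = X @ np.linalg.inv(X.T @ X) @ X.T

# trace(P) equals rank(X) = 2, the number of model parameters.
assert np.isclose(np.trace(P), np.linalg.matrix_rank(X))
```

For a LOESS-type smoother matrix the trace is generally not an integer, which is why it is read as an ''effective'' number of degrees of freedom.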
Practical applications of the projection matrix in regression analysis include [[Leverage (statistics)|leverage]] and [[Cook's distance]], which are concerned with identifying [[influential observation]]s, i.e., observations that have a large effect on the results of a regression.
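The leverage of observation <math>i</math> is the diagonal entry <math>p_{ii}</math> of the projection matrix. A short sketch with made-up data, in which the last observation lies far from the others in the regressor:

```python
import numpy as np

# Hypothetical data: the last observation's x-value (10) is far from the rest.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 10.0]])

P = X @ np.linalg.inv(X.T @ X) @ X.T

# Leverage of each observation is the corresponding diagonal entry of P.
leverage = np.diag(P)

# The outlying x-value has the highest leverage...
assert leverage.argmax() == 3
# ...and, for a linear model, leverages sum to trace(P) = number of parameters.
assert np.isclose(leverage.sum(), 2.0)
```

Cook's distance combines these leverages with the residuals to flag observations that are both outlying and influential.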