Mean squared prediction error

{{Multiple issues|
{{Unreferenced|date=December 2009}}
{{Expert needed|statistics|reason=no source, and notation/definition problems regarding ''L''|date=October 2019}}
}}
In [[statistics]], the '''mean squared prediction error''' or '''mean squared error of the predictions''' of a [[smoothing]] or [[curve fitting]] procedure is the expected value of the squared difference between the fitted values implied by the predictive function <math>\widehat{g}</math> and the values of the (unobservable) function ''g''. It is an inverse measure of the explanatory power of <math>\widehat{g},</math> and can be used in the process of [[cross-validation (statistics)|cross-validation]] of an estimated model.
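A short numerical sketch can make the definition concrete. The example below is illustrative only: it assumes a true function ''g''(''x'') = sin(2π''x''), equally spaced design points, Gaussian noise, and a cubic least-squares fit as the smoothing procedure, and it approximates the MSPE at the design points by Monte Carlo simulation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # True (in practice unobservable) regression function -- an assumption of this sketch.
    return np.sin(2 * np.pi * x)

n = 50
x = np.linspace(0.0, 1.0, n)   # equally spaced design points (assumed)
sigma = 0.3                    # noise standard deviation (assumed)
n_sims = 2000                  # Monte Carlo replications

sq_err = np.zeros(n)
for _ in range(n_sims):
    y = g(x) + rng.normal(0.0, sigma, size=n)   # observed data
    coef = np.polyfit(x, y, deg=3)              # cubic least-squares fit, giving g-hat
    g_hat = np.polyval(coef, x)                 # fitted values at the design points
    sq_err += (g_hat - g(x)) ** 2               # squared deviation from the true g

mspe = sq_err.sum() / (n * n_sims)              # Monte Carlo approximation of the MSPE
print(f"Approximate MSPE of the cubic fit: {mspe:.4f}")
</syntaxhighlight>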
:<math>n\cdot\operatorname{MSPE}(L)=g^{\text{T}}(I-L)^{\text{T}}(I-L)g+\sigma^2\operatorname{tr}\left[L^{\text{T}} L\right].</math>
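This identity can be checked numerically. The sketch below uses illustrative choices only: the linear smoother ''L'' is taken to be the hat matrix of a cubic least-squares fit, the vector ''g'' holds the values of sin(2π''x'') at equally spaced design points, and the noise is Gaussian with standard deviation σ = 0.25; the Monte Carlo average of <math>\|Ly-g\|^2</math> should then agree with the closed-form right-hand side.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Illustrative design points, true values g, and noise level (assumptions of this sketch).
n = 40
x = np.linspace(0.0, 1.0, n)
g_vec = np.sin(2 * np.pi * x)
sigma = 0.25

# Hat matrix L of a cubic polynomial least-squares fit, so that g_hat = L y.
X = np.vander(x, N=4, increasing=True)          # columns 1, x, x^2, x^3
L = X @ np.linalg.solve(X.T @ X, X.T)

I = np.eye(n)
closed_form = g_vec @ (I - L).T @ (I - L) @ g_vec + sigma**2 * np.trace(L.T @ L)

# Monte Carlo estimate of n * MSPE(L) = E ||L y - g||^2.
n_sims = 5000
total = 0.0
for _ in range(n_sims):
    y = g_vec + rng.normal(0.0, sigma, size=n)
    total += np.sum((L @ y - g_vec) ** 2)
monte_carlo = total / n_sims

print(f"closed form : {closed_form:.4f}")
print(f"Monte Carlo : {monte_carlo:.4f}")
</syntaxhighlight>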
 
In terms of the in-sample data values, the first term on the right-hand side is equivalent to
 
:<math>\sum_{i=1}^n\left(\operatorname{E}\left[g(x_i)-\widehat{g}(x_i)\right]\right)^2
=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\operatorname{tr}\left[\left(I-L\right)^{\text{T}}\left(I-L\right)\right].</math>
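This equivalence can also be verified numerically. Continuing the illustrative setup above (the same assumed cubic-fit hat matrix ''L'', true values ''g'', and Gaussian noise), the Monte Carlo average of the residual sum of squares minus <math>\sigma^2\operatorname{tr}\left[(I-L)^{\text{T}}(I-L)\right]</math> should agree with <math>g^{\text{T}}(I-L)^{\text{T}}(I-L)g</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Same illustrative setup as in the previous sketch.
n = 40
x = np.linspace(0.0, 1.0, n)
g_vec = np.sin(2 * np.pi * x)
sigma = 0.25

X = np.vander(x, N=4, increasing=True)
L = X @ np.linalg.solve(X.T @ X, X.T)           # hat matrix of the cubic fit
I = np.eye(n)

bias_term = g_vec @ (I - L).T @ (I - L) @ g_vec        # first term on the right-hand side
correction = sigma**2 * np.trace((I - L).T @ (I - L))

# Monte Carlo average of RSS - sigma^2 tr[(I-L)^T (I-L)].
n_sims = 5000
total = 0.0
for _ in range(n_sims):
    y = g_vec + rng.normal(0.0, sigma, size=n)
    rss = np.sum((y - L @ y) ** 2)              # in-sample residual sum of squares
    total += rss - correction

print(f"bias term            : {bias_term:.4f}")
print(f"E[RSS] - correction  : {total / n_sims:.4f}")
</syntaxhighlight>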
 
Thus,