{{Short description|Statistics concept}}
In [[statistics]] the '''mean squared prediction error''' ('''MSPE'''), also known as '''mean squared error of the predictions''', of a [[smoothing]], [[curve fitting]], or [[regression analysis|regression]] procedure is the expected value of the squared difference between the fitted values implied by the predictive function <math>\widehat{g}</math> and the values of the (unobservable) true function ''g''. It is an inverse measure of the explanatory power of <math>\widehat{g},</math> and can be used in the process of [[cross-validation (statistics)|cross-validation]] of an estimated model. Knowledge of ''g'' would be required in order to calculate the MSPE exactly; in practice, MSPE is estimated.<ref>{{cite book |first1=Robert S. |last1=Pindyck |authorlink=Robert Pindyck |first2=Daniel L. |last2=Rubinfeld |authorlink2=Daniel L. Rubinfeld |title=Econometric Models & Economic Forecasts |___location=New York |publisher=McGraw-Hill |edition=3rd |year=1991 |isbn=0-07-050098-3 |chapter=Forecasting with Time-Series Models |pages=[https://archive.org/details/econometricmodel00pind/page/516 516–535] |chapter-url=https://archive.org/details/econometricmodel00pind/page/516 }}</ref>
==Formulation==
If the smoothing or fitting procedure has [[projection matrix]] (i.e., hat matrix) ''L'', which maps the observed values vector <math>y</math> to the predicted values vector <math>\hat{y}</math> via <math>\hat{y}=Ly,</math> then the prediction error (PE) and the MSPE are formulated as:
:<math>\operatorname{PE}_i=g(x_i)-\widehat{g}(x_i),</math>
:<math>\operatorname{MSPE}=\frac{1}{n}\sum_{i=1}^n\operatorname{E}\left[\operatorname{PE}_i^2\right].</math>
The MSPE can be decomposed into two terms, just as [[mean squared error]] is decomposed into [[Bias of an estimator|bias]] and [[variance]]: the squared mean error (ME) of the fitted values and the variance (VAR) of the fitted values:
:<math>\operatorname{MSPE}=\operatorname{ME}^2 + \operatorname{VAR},</math>
:<math>\operatorname{ME}=\operatorname{E}\left[ \widehat{g}(x_i) - g(x_i)\right]</math>
:<math>\operatorname{VAR}=\operatorname{E}\left[\left(\widehat{g}(x_i) - \operatorname{E}\left[\widehat{g}(x_i)\right]\right)^2\right].</math>
The quantity {{math|SSPE{{=}}''n''MSPE}} is called '''sum squared prediction error'''.
The '''root mean squared prediction error''' is the square root of MSPE: {{math|RMSPE{{=}}{{sqrt|MSPE}}}}.
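The decomposition above can be checked by Monte Carlo simulation. The following Python sketch (the sine true function, noise level, and cubic-polynomial smoother are illustrative assumptions, not part of the definition) verifies that the empirical MSPE equals ME<sup>2</sup> + VAR at every design point:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # True (in practice unobservable) function.
    return np.sin(x)

x = np.linspace(0, np.pi, 20)
n_reps = 2000
fits = np.empty((n_reps, x.size))

for r in range(n_reps):
    y = g(x) + 0.3 * rng.standard_normal(x.size)
    # Illustrative smoother: least-squares fit of a cubic polynomial.
    coeffs = np.polyfit(x, y, deg=3)
    fits[r] = np.polyval(coeffs, x)

# Pointwise decomposition over the replications.
me = fits.mean(axis=0) - g(x)             # mean error (bias) at each x_i
var = fits.var(axis=0)                    # variance of the fit at each x_i
mspe = ((fits - g(x)) ** 2).mean(axis=0)  # mean squared prediction error

# MSPE_i = ME_i^2 + VAR_i is an exact algebraic identity for these averages.
assert np.allclose(mspe, me**2 + var)
```

The identity holds exactly for the empirical averages (with the population-style variance, NumPy's default `ddof=0`), not just in the limit of many replications.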
==Computation of MSPE over out-of-sample data==
{{Further|Cross-validation (statistics)}}
The mean squared prediction error can be computed exactly in two contexts. First, with a [[sample (statistics)|data sample]] of length ''n'', the [[data analyst]] may run the [[regression analysis|regression]] over only ''q'' of the data points (with ''q'' < ''n''), holding back the other ''n – q'' data points with the specific purpose of using them to compute the estimated model’s MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to the ''q'' in-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over the ''n – q'' held-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, that results in the model being viewed favorably. And if two models are to be compared, the one with the lower MSPE over the ''n – q'' out-of-sample data points is viewed more favorably, regardless of the models’ relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model’s MSPE for the mostly unobserved population from which the data were drawn.
Second, as time goes on more data may become available to the data analyst, and then the MSPE can be computed over these new data.
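The holdout computation described above can be sketched in Python; the simulated data, the 70/30 split, and the straight-line model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sample of length n; q points are used to fit the model and the
# remaining n - q held-back points give the out-of-sample MSPE.
n, q = 100, 70
x = rng.uniform(0.0, 1.0, n)
y = 2.0 + 3.0 * x + 0.5 * rng.standard_normal(n)

x_in, y_in = x[:q], y[:q]
x_out, y_out = x[q:], y[q:]

# Regression run over only the q in-sample points.
slope, intercept = np.polyfit(x_in, y_in, deg=1)

mspe_in = np.mean((y_in - (intercept + slope * x_in)) ** 2)     # in sample
mspe_out = np.mean((y_out - (intercept + slope * x_out)) ** 2)  # out of sample
```

Typically `mspe_in` is the smaller of the two, since the fit is tailored to the in-sample points; comparing `mspe_out` across candidate models is the model-comparison criterion described above.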
==Estimation of MSPE over the population==
{{disputed|section|date=May 2018|reason=this needs to be checked against a source}}
When the model has been estimated over all available data with none held back, the MSPE of the model over the entire [[statistical population|population]] of mostly unobserved data can be estimated as follows.
For the model <math>y_i=g(x_i)+\sigma\varepsilon_i</math> where <math>\varepsilon_i\sim\mathcal{N}(0,1)</math>, one may write
:<math>n\cdot\operatorname{MSPE}(L)=g^\mathsf{T}\left(I-L\right)^\mathsf{T}\left(I-L\right)g+\sigma^2\operatorname{tr}\left[L^\mathsf{T}L\right],</math>
where <math>g=\left(g(x_1),\dots,g(x_n)\right)^\mathsf{T}</math>. Using in-sample data values, the first term on the right side is equivalent to
:<math>\sum_{i=1}^n\left(\operatorname{E}\left[g(x_i)-\widehat{g}(x_i)\right]\right)^2
=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\operatorname{tr}\left[\left(I-L\right)^\mathsf{T}\left(I-L\right)\right].</math>
Thus,
:<math>n\cdot\operatorname{MSPE}(L)=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\left(n-2\operatorname{tr}\left[L\right]\right).</math>
If <math>\sigma^2</math> is known or well-estimated by <math>\widehat{\sigma}^2</math>, it becomes possible to estimate MSPE by
:<math>n\cdot\operatorname{\widehat{MSPE}}(L)=\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2-\widehat{\sigma}^2\left(n-2\operatorname{tr}\left[L\right]\right).</math>
[[Colin Mallows]] advocated this method in the construction of his model selection statistic [[Mallows's Cp|''C<sub>p</sub>'']], which is a normalized version of the estimated MSPE:
:<math>C_p=\frac{\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2}{\widehat{\sigma}^2}-n+2p,</math>
where ''p'' is the number of estimated parameters of the model and <math>\widehat{\sigma}^2</math> is computed from the version of the model that includes all candidate regressors.
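The estimator above can be sketched in Python for an ordinary least-squares fit, whose hat matrix ''L'' is an explicit projection. The simulated linear true function and the treatment of <math>\sigma^2</math> as known are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate y_i = g(x_i) + sigma * eps_i with a known sigma.
n, sigma = 50, 0.4
x = np.linspace(0.0, 1.0, n)
g = 1.0 + 2.0 * x                      # true mean function (illustrative)
y = g + sigma * rng.standard_normal(n)

# Straight-line least squares: L = X (X'X)^{-1} X' is the hat matrix.
X = np.column_stack([np.ones(n), x])
L = X @ np.linalg.solve(X.T @ X, X.T)
g_hat = L @ y

rss = np.sum((y - g_hat) ** 2)
tr_L = np.trace(L)                     # equals p = 2 for this projection

# Estimated n * MSPE and the corresponding Mallows-style statistic.
n_mspe_hat = rss - sigma**2 * (n - 2.0 * tr_L)
c_p = rss / sigma**2 - n + 2.0 * tr_L
```

For a projection onto a ''p''-dimensional column space, <math>\operatorname{tr}\left[L\right]=p</math>, so `c_p` here coincides with the normalized estimated MSPE, `n_mspe_hat / sigma**2`.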
==See also==
* [[Akaike information criterion]]
* [[Bias-variance tradeoff]]
* [[Mean squared error]]
* [[Errors and residuals in statistics]]
* [[Law of total variance]]
* [[Mallows's Cp|Mallows's ''C<sub>p</sub>'']]
* [[Model selection]]
== References ==
{{reflist}}
{{Machine learning evaluation metrics}}
{{DEFAULTSORT:Mean Squared Prediction Error}}