Conditional variance

In [[probability theory]] and [[statistics]], a '''conditional variance''' is the [[variance]] of a [[random variable]] given the value(s) of one or more other variables.
Particularly in [[econometrics]], the conditional variance is also known as the '''scedastic function''' or '''skedastic function'''.<ref>{{cite book |first=Aris |last=Spanos |chapter=Conditioning and regression |title=Probability Theory and Statistical Inference |___location=New York |publisher=Cambridge University Press |year=1999 |isbn=0-521-42408-9 |pages=339–356 [p. 342] |url=https://books.google.com/books?id=G0_HxBubGAwC&pg=PA342 }}</ref> Conditional variances are important parts of [[autoregressive conditional heteroskedasticity]] (ARCH) models.
 
==Definition==
The conditional variance of a [[random variable]] ''Y'' given another random variable ''X'' is
 
:<math>\operatorname{Var}(Y|X) = \operatorname{E}((Y - \operatorname{E}(Y\mid X))^{2}\mid X).</math>
 
The conditional variance tells us how much variance is left if we use <math>\operatorname{E}(Y\mid X)</math> to "predict" ''Y''.
Here, as usual, <math>\operatorname{E}(Y\mid X)</math> stands for the [[conditional expectation]] of ''Y'' given ''X'',
which, as we may recall, is itself a random variable (a function of ''X'', determined only up to sets of probability zero).
As a result, <math>\operatorname{Var}(Y|X)</math> itself is a random variable (and is a function of ''X'').
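
For illustration, consider the following toy model (constructed here only to make the computation explicit): suppose <math>Y = XZ</math>, where ''Z'' is independent of ''X'' with <math>\operatorname{E}(Z)=0</math> and <math>\operatorname{Var}(Z)=\sigma^2</math>. Then <math>\operatorname{E}(Y\mid X) = X\operatorname{E}(Z) = 0</math>, and

:<math>\operatorname{Var}(Y|X) = \operatorname{E}((XZ)^2\mid X) = X^2\operatorname{E}(Z^2\mid X) = X^2\sigma^2,</math>

which is indeed a random variable: the variability of ''Y'' grows with the magnitude of ''X''. Multiplicative models of this form are the building blocks of the ARCH models mentioned above.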
 
==Explanation, relation to [[least-squares]]==
Recall that variance is the expected squared deviation between a random variable (say, ''Y'') and its expected value.
The expected value can be thought of as a reasonable prediction of the outcomes of the random experiment (in particular, the expected value is the best constant prediction when predictions are assessed by expected squared prediction error). Thus, one interpretation of variance is that it gives the smallest possible expected squared prediction error. If we know the value of another random variable ''X'' that we can use to predict ''Y'', we can potentially use this knowledge to reduce the expected squared error. As it turns out, the best prediction of ''Y'' given ''X'' is the conditional expectation. In particular, for any measurable <math>f: \mathbb{R} \to \mathbb{R}</math>,
 
:<math>
\begin{align}
\operatorname{E}[ (Y-f(X))^2 ]
&= \operatorname{E}[ (Y-\operatorname{E}(Y|X)\,\,+\,\, \operatorname{E}(Y|X)-f(X) )^2 ] \\
&= \operatorname{E}[ \operatorname{E}\{ (Y-\operatorname{E}(Y|X)\,\,+\,\, \operatorname{E}(Y|X)-f(X) )^2|X\} ] \\
&= \operatorname{E}[\operatorname{Var}( Y| X )] + \operatorname{E}[(\operatorname{E}(Y|X)-f(X))^2]\,.
\end{align}
</math>
 
By selecting <math>f(X)=\operatorname{E}(Y|X)</math>, the second, nonnegative term becomes zero, showing the claim.
Here, the second equality used the [[Law of total expectation|law of total expectation]], and the third holds because the cross term vanishes conditionally on ''X'' (this step is spelled out below).
We also see that the expected conditional variance of ''Y'' given ''X'' shows up as the irreducible error of predicting ''Y'' given only the knowledge of ''X''.
 
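To spell out the vanishing cross term referred to above: writing <math>A = Y - \operatorname{E}(Y|X)</math> and <math>B = \operatorname{E}(Y|X) - f(X)</math> (shorthand introduced here only for this computation), ''B'' is a function of ''X'' and can be pulled out of the inner conditional expectation, so

:<math>\operatorname{E}(2AB\mid X) = 2B\operatorname{E}(A\mid X) = 2B\left(\operatorname{E}(Y\mid X) - \operatorname{E}(Y\mid X)\right) = 0.</math>
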
==Special cases, variations==
===Conditioning on discrete random variables===
When ''X'' takes on countably many values <math>S = \{x_1,x_2,\dots\}</math> with positive probability, i.e., it is a [[discrete random variable]], we can introduce <math>\operatorname{Var}(Y|X=x)</math>, the conditional variance of ''Y'' given that ''X=x'', for any ''x'' from ''S'', as follows:
 
:<math>\operatorname{Var}(Y|X=x) = \operatorname{E}((Y - \operatorname{E}(Y\mid X=x))^{2}\mid X=x),</math>
 
where <math>\operatorname{E}(Z\mid X=x)</math> denotes the [[Conditional_expectation#Conditional_expectation_with_respect_to_a_random_variable|conditional expectation]] of ''Z'' given that ''X'' takes the value ''x'', which is well-defined for <math>x\in S</math>.
An alternative notation for <math>\operatorname{Var}(Y|X=x)</math> is <math>\operatorname{Var}_{Y\mid X}(Y|x).</math>
 
Note that <math>\operatorname{Var}(Y|X=x)</math> defines a constant for each possible value of ''x''; in particular, <math>\operatorname{Var}(Y|X=x)</math> is ''not'' a random variable.
 
The connection of this definition to <math>\operatorname{Var}(Y|X)</math> is as follows:
Let ''S'' be as above and define the function <math>v: S \to \mathbb{R}</math> as <math>v(x) = \operatorname{Var}(Y|X=x)</math>. Then, <math>v(X) = \operatorname{Var}(Y|X)</math> [[almost surely]].
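
As a concrete illustration (a toy example constructed for this purpose), let ''X'' be a fair coin flip with <math>S = \{0,1\}</math>, and suppose that, given <math>X=0</math>, ''Y'' is uniform on <math>\{0,1\}</math>, while given <math>X=1</math>, ''Y'' is uniform on <math>\{0,1,2\}</math>. Then

:<math>\operatorname{Var}(Y|X=0) = \tfrac12 - \left(\tfrac12\right)^2 = \tfrac14, \qquad \operatorname{Var}(Y|X=1) = \tfrac53 - 1^2 = \tfrac23,</math>

so <math>v(0) = \tfrac14</math> and <math>v(1) = \tfrac23</math> are constants, while <math>\operatorname{Var}(Y|X) = v(X)</math> is a random variable taking the values <math>\tfrac14</math> and <math>\tfrac23</math>, each with probability <math>\tfrac12</math>.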
 
===Definition using conditional distributions===
The conditional expectation of ''Y'' given ''X=x'', and hence also the conditional variance, can be defined more generally
using the [[conditional distribution]] of ''Y'' given ''X'' (which exists in this case, as both ''X'' and ''Y'' are real-valued).
 
In particular, letting <math>P_{Y|X}</math> be the (regular) [[conditional distribution]] of ''Y'' given ''X'', i.e., <math>P_{Y|X}:\Omega \times \mathbb{R}\to [0,1]</math> (the intention is that <math>P_{Y|X}(U,x) = P(Y\in U|X=x)</math> almost surely over the support of ''X''), we can define
 
:<math> \operatorname{Var}(Y|X=x) = \int \left(y- \int y' P_{Y|X}(dy'|x)\right)^2 P_{Y|X}(dy|x). </math>
 
This can, of course, be specialized to the case when ''Y'' is discrete itself (replacing the integrals with sums), and also to the case when the [[conditional density]] of ''Y'' given ''X=x'' with respect to some underlying distribution exists.
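
For example, in the standard special case where <math>(X,Y)</math> has a [[bivariate normal distribution]] with correlation coefficient <math>\rho</math> and <math>\operatorname{Var}(Y) = \sigma_Y^2</math>, the conditional distribution of ''Y'' given ''X=x'' is again normal, and the integral above evaluates to

:<math>\operatorname{Var}(Y|X=x) = \sigma_Y^2\left(1-\rho^2\right),</math>

which in this case does not depend on ''x'': the conditional variance is constant ([[homoscedasticity]]).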
 
==Components of variance==
The [[law of total variance]] says
 
:<math>\operatorname{Var}(Y) = \operatorname{E}(\operatorname{Var}(Y\mid X))+\operatorname{Var}(\operatorname{E}(Y\mid X)).</math>
 
In words: the variance of ''Y'' is the sum of the expected conditional variance of ''Y'' given ''X'' and the variance of the conditional expectation of ''Y'' given ''X''. In this "law", the inner expectation or variance is taken with respect to ''Y'' conditional on ''X'', while the outer expectation or variance is taken with respect to ''X''. The expression thus decomposes the overall variance of ''Y'' into two components, built around the least-mean-squares prediction of ''Y'' based on ''X'', namely the [[conditional expectation]] of ''Y'' given ''X'' (a worked example follows the list):
:*the average of the variance of ''Y'' about the prediction based on ''X'', as ''X'' varies (the variation left after "using ''X'' to predict ''Y''");
:*the variance of the prediction based on ''X'', as ''X'' varies (the variation in the prediction of ''Y'' induced by the randomness of ''X'').
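
For a concrete check, we can continue the illustrative coin-flip example from the section on discrete conditioning: there, <math>\operatorname{Var}(Y\mid X)</math> equals <math>\tfrac14</math> or <math>\tfrac23</math>, and <math>\operatorname{E}(Y\mid X)</math> equals <math>\tfrac12</math> or <math>1</math>, each with probability <math>\tfrac12</math>. Hence

:<math>\operatorname{E}(\operatorname{Var}(Y\mid X)) = \tfrac12\cdot\tfrac14 + \tfrac12\cdot\tfrac23 = \tfrac{11}{24}, \qquad \operatorname{Var}(\operatorname{E}(Y\mid X)) = \tfrac12\cdot\tfrac14 + \tfrac12\cdot 1 - \left(\tfrac34\right)^2 = \tfrac{1}{16},</math>

so that <math>\operatorname{Var}(Y) = \tfrac{11}{24}+\tfrac{1}{16} = \tfrac{25}{48}</math>, which agrees with a direct computation from the marginal distribution of ''Y''.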
 
==References==
{{Reflist}}