Variance function

{{Short description|Smooth function in statistics}}
{{more citations needed|date=March 2014}}
{{for|variance as a function of space-time separation|Variogram}}
{{Regression bar}}
 
In [[statistics]], the '''variance function''' is a [[smooth function]] that depicts the [[variance]] of a [[random quantity]] as a function of its [[mean]]. The variance function is a measure of [[heteroscedasticity]] and plays a large role in many settings of statistical modelling. It is a main ingredient in the [[generalized linear model]] framework and a tool used in [[non-parametric regression]],<ref name="Muller1">{{cite journal|last=Muller and Zhao|title=On a semi parametric variance function model and a test for heteroscedasticity|journal=The Annals of Statistics|year=1995|volume=23|issue=3|pages=946–967|jstor=2242430|doi=10.1214/aos/1176324630|doi-access=free}}</ref> [[semiparametric regression]]<ref name="Muller1"/> and [[functional data analysis]].<ref>{{cite journal|last=Muller, Stadtmuller and Yao|title=Functional Variance Processes|journal=Journal of the American Statistical Association|year=2006|volume=101|issue=475|pages=1007–1018|jstor=27590778|doi=10.1198/016214506000000186|s2cid=13712496}}</ref> In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a [[smooth function]].
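The parametric forms mentioned above can be made concrete. The following sketch (an illustration, not part of the article) lists the standard variance functions <math>V(\mu)</math> for a few common exponential-family distributions used in generalized linear models:

```python
# Illustrative sketch: parametric variance functions V(mu) for some
# common exponential-family distributions in the GLM framework.

def variance_function(family, mu):
    """Return V(mu), the variance expressed as a function of the mean mu."""
    if family == "normal":
        return 1.0              # constant: Var(Y) = sigma^2 * V(mu)
    if family == "poisson":
        return mu               # variance equals the mean
    if family == "binomial":
        return mu * (1.0 - mu)  # mu is the success probability
    if family == "gamma":
        return mu ** 2          # quadratic in the mean
    raise ValueError(f"unknown family: {family}")

print(variance_function("poisson", 3.0))   # 3.0
print(variance_function("gamma", 2.0))     # 4.0
```

Note how the normal family is the special case where <math>V(\mu)</math> does not depend on the mean at all, which is exactly the homoscedasticity assumption of ordinary linear regression.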
 
== Intuition ==
 
In a regression model setting, the goal is to establish whether or not a relationship exists between a response variable and a set of predictor variables. Further, if a relationship does exist, the goal is then to describe this relationship as well as possible. A main assumption in [[linear regression]] is constant variance, or homoscedasticity, meaning that different response variables have the same variance in their errors at every predictor level. This assumption works well when the response variable and the predictor variable are jointly [[normal distribution|normal]]. As we will see later, the variance function in the normal setting is constant; however, we must find a way to quantify heteroscedasticity (non-constant variance) in the absence of joint normality.
 
When it is likely that the response follows a distribution that is a member of the exponential family, a [[generalized linear model]] may be more appropriate to use; moreover, when we do not wish to force a parametric model onto our data, a [[non-parametric regression]] approach can be useful. The importance of being able to model the variance as a function of the mean lies in improved inference (in a parametric setting), and estimation of the regression function in general, for any setting.
==== Example – normal ====
 
The [[normal distribution]] is a special case where the variance function is a constant. Let <math>y \sim N(\mu,\sigma^2)</math>; then we put the density function of '''y''' in the form of the exponential family described above:
 
:<math>f(y) = \exp\left(\frac{y\mu - \frac{\mu^2}{2}}{\sigma^2} - \frac{y^2}{2\sigma^2} - \frac{1}{2}\ln(2\pi\sigma^2)\right)</math>
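As a quick numerical illustration of this constancy (a simulation sketch, not part of the article's derivation), the variance of <math>y \sim N(\mu,\sigma^2)</math> does not change as the mean <math>\mu</math> varies:

```python
# Simulation sketch: for normal data the variance is the same at every
# mean, i.e. the normal variance function is constant.
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
for mu in (0.0, 5.0, 50.0):
    sample = rng.normal(mu, sigma, 100_000)
    print(mu, round(sample.var(), 2))   # close to sigma^2 = 4.0 at every mean
```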
:<math>g_v(x) = \operatorname{Var}(Y\mid X=x) =\operatorname{E}[y^2\mid X=x] - \left[\operatorname{E}[y\mid X=x]\right]^2 </math>
 
An example is detailed in the pictures to the right. The goal of the project was to determine (among other things) whether or not the predictor, '''number of years in the major leagues''' (baseball), had an effect on the response, '''salary''', that a player made. An initial scatter plot of the data indicates that there is heteroscedasticity in the data, as the variance is not constant at each level of the predictor. Because we can visually detect the non-constant variance, it is useful to plot <math>g_v(x) = \operatorname{Var}(Y\mid X=x) =\operatorname{E}[y^2\mid X=x] - \left[\operatorname{E}[y\mid X=x]\right]^2 </math> and look to see whether the shape is indicative of any known distribution. One can estimate <math>\operatorname{E}[y^2\mid X=x]</math> and <math>\left[\operatorname{E}[y\mid X=x]\right]^2 </math> using a general [[smoothing]] method. The plot of the non-parametric smoothed variance function can give the researcher an idea of the relationship between the variance and the mean. The picture to the right indicates a quadratic relationship between the mean and the variance. As we saw above, the gamma variance function is quadratic in the mean.
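The smoothing estimate described above can be sketched as follows. This is a minimal illustration on simulated heteroscedastic data (not the article's baseball dataset), using a Nadaraya–Watson kernel smoother as the general smoothing method:

```python
# Sketch: estimate g_v(x) = E[y^2 | X=x] - (E[y | X=x])^2 by smoothing
# y^2 and y separately with a Nadaraya-Watson (kernel-weighted) smoother.
import numpy as np

def nw_smooth(x_grid, x, y, bandwidth):
    """Nadaraya-Watson estimate of E[y | X = x0] at each point of x_grid."""
    out = np.empty(len(x_grid))
    for i, x0 in enumerate(x_grid):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
        out[i] = np.sum(w * y) / np.sum(w)
    return out

def variance_function_estimate(x_grid, x, y, bandwidth=1.0):
    """Estimate Var(Y | X = x0) as the smoothed second moment minus
    the squared smoothed mean."""
    m2 = nw_smooth(x_grid, x, y ** 2, bandwidth)  # estimate of E[y^2 | x]
    m1 = nw_smooth(x_grid, x, y, bandwidth)       # estimate of E[y | x]
    return m2 - m1 ** 2

# Simulated heteroscedastic data: the spread of y grows with the predictor.
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 500)
y = 2 * x + rng.normal(0, x, 500)     # error standard deviation proportional to x
grid = np.linspace(2, 9, 8)
print(variance_function_estimate(grid, x, y))
```

Because both moments are smoothed with the same kernel weights, the estimate is a weighted sample variance and is always non-negative; on data like the above it rises with the predictor, making the non-constant variance visible.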
 
== Notes ==