==Formulation and computation==
Consider the [[mathematical model|model]] function
: <math display="block"> y = \alpha + \beta x,</math>
which describes a line with slope {{mvar|β}} and {{mvar|y}}-intercept {{mvar|α}}. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the [[errors and residuals|errors]]. Suppose we observe {{mvar|n}} data pairs and call them {{math|{(''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), ''i'' {{=}} 1, ..., ''n''}}}. We can describe the underlying relationship between {{math|''y''<sub>''i''</sub>}} and {{math|''x''<sub>''i''</sub>}} involving this error term {{math|''ε''<sub>''i''</sub>}} by
 
: <math display="block"> y_i = \alpha + \beta x_i + \varepsilon_i.</math>
 
This relationship between the true (but unobserved) underlying parameters {{mvar|α}} and {{mvar|β}} and the data points is called a linear regression model.
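
As an illustration, the model can be simulated directly: the following Python sketch (with arbitrarily chosen parameter values, used only for demonstration) generates {{mvar|n}} data pairs from a line plus independent noise.

<syntaxhighlight lang="python">
import random

# Illustrative parameter choices (not taken from any real data set)
alpha, beta, n, noise_sd = 2.0, 0.5, 100, 1.0

random.seed(0)
x = [random.uniform(0, 10) for _ in range(n)]           # independent variable x_i
errors = [random.gauss(0, noise_sd) for _ in range(n)]  # error terms epsilon_i
y = [alpha + beta * xi + e for xi, e in zip(x, errors)] # y_i = alpha + beta*x_i + epsilon_i
</syntaxhighlight>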
The goal is to find estimated values <math>\widehat\alpha</math> and <math>\widehat\beta</math> for the parameters {{mvar|α}} and {{mvar|β}} that provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit is understood as in the [[Ordinary least squares|least-squares]] approach: a line that minimizes the [[residual sum of squares|sum of squared residuals]] <math>\widehat\varepsilon_i</math> (see also [[Errors and residuals]]), the differences between the actual and predicted values of the dependent variable {{mvar|y}}. For any candidate parameter values <math>\alpha</math> and <math>\beta</math>, each residual is given by
 
:<math display="block">\widehat\varepsilon_i =y_i-\alpha -\beta x_i.</math>
 
In other words, <math>\widehat\alpha</math> and <math>\widehat\beta</math> solve the following [[minimization problem]]:
 
: <math display="block">(\widehat\alpha,\, \widehat\beta) = \operatorname{argmin}_{\alpha,\,\beta}\, Q(\alpha, \beta),</math>
where the [[objective function]] {{mvar|Q}} is:
: <math display="block">Q(\alpha, \beta) = \sum_{i=1}^n\widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i -\alpha - \beta x_i)^2\ .</math>
 
By expanding to get a quadratic expression in <math>\alpha</math> and <math>\beta,</math> we can derive minimizing values of the function arguments, denoted <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math>:<ref>Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285</ref>
 
<math display="block">\begin{align}
\widehat\alpha & = \bar{y} - ( \widehat\beta\,\bar{x}), \\[5pt]
\widehat\beta &= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right) \left(y_i - \bar{y}\right) }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 }
= \frac{ \sum_{i=1}^n \Delta x_i \Delta y_i }{ \sum_{i=1}^n \Delta x_i^2 }
\end{align}</math>
where <math>\Delta x_i = x_i - \bar{x}</math> and <math>\Delta y_i = y_i - \bar{y}</math> denote the deviations of {{math|''x''<sub>''i''</sub>}} and {{math|''y''<sub>''i''</sub>}} from their respective sample means.
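
As an illustration, these closed-form estimates translate directly into code; the following Python sketch assumes `x` and `y` are equal-length lists of observations (the function name is illustrative).

<syntaxhighlight lang="python">
def ols_fit(x, y):
    """Least-squares estimates of the intercept and slope (illustrative sketch)."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    # beta_hat = sum (x_i - xbar)(y_i - ybar) / sum (x_i - xbar)^2
    beta_hat = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
                / sum((xi - x_bar) ** 2 for xi in x))
    alpha_hat = y_bar - beta_hat * x_bar  # alpha_hat = ybar - beta_hat * xbar
    return alpha_hat, beta_hat
</syntaxhighlight>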
The above equations are efficient to use if the means of the {{mvar|x}} and {{mvar|y}} variables (<math>\bar{x} \text{ and } \bar{y}</math>) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the <math>\widehat\alpha\text{ and }\widehat\beta</math> equations. These expanded equations may be derived from the more general [[polynomial regression]] equations<ref name=":1" /><ref>{{Cite web |title=Mathematics of Polynomial Regression |url=http://polynomialregression.drque.net/math.html |website=Polynomial Regression, A PHP regression class}}</ref> by defining the regression polynomial to be of order 1, as follows.
 
<math display="block">\begin{bmatrix}
n & \sum_{i=1}^n x_i \\[1ex]
\sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2
\end{bmatrix}
\begin{bmatrix}
\widehat\alpha \\[1ex]
\widehat\beta
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^n y_i \\[1ex]
\sum_{i=1}^n y_i x_i
\end{bmatrix}
</math>
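
Numerically, the same estimates may be obtained by solving this 2 × 2 system directly; the sketch below uses NumPy's linear solver and again assumes equal-length sequences `x` and `y`.

<syntaxhighlight lang="python">
import numpy as np

def ols_fit_normal_equations(x, y):
    """Solve the 2x2 normal equations above for (alpha_hat, beta_hat) -- illustrative sketch."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    A = np.array([[n, x.sum()],
                  [x.sum(), (x ** 2).sum()]])
    b = np.array([y.sum(), (x * y).sum()])
    alpha_hat, beta_hat = np.linalg.solve(A, b)
    return alpha_hat, beta_hat
</syntaxhighlight>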
The above [[system of linear equations]] may be solved directly, or stand-alone equations for <math>\widehat\alpha\text{ and }\widehat\beta</math> may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.<ref>{{Cite web |title=Numeracy, Maths and Statistics - Academic Skills Kit, Newcastle University |url=https://www.ncl.ac.uk/webtemplate/ask-assets/external/maths-resources/statistics/regression-and-correlation/simple-linear-regression.html |access-date=30 Jan 2024 |website=Simple Linear Regression}}</ref><ref name=":1">{{Cite web |last=Muthukrishnan |first=Gowri |date=17 Jun 2018 |title=Maths behind Polynomial regression, Muthukrishnan |url=https://muthu.co/maths-behind-polynomial-regression/ |access-date=30 Jan 2024 |website=Maths behind Polynomial regression}}</ref>
 
<math display="block">\begin{align}
\widehat\alpha &= \frac{ \sum\limits_{i=1}^n y_i \sum\limits_{i=1}^n x_i^2 - \sum\limits_{i=1}^n x_i \sum\limits_{i=1}^n x_i y_i }{ n \sum\limits_{i=1}^n x_i^2 - \left(\sum\limits_{i=1}^n x_i\right)^2 } \\[2ex]
\widehat\beta &= \frac{ n \sum\limits_{i=1}^n x_i y_i - \sum\limits_{i=1}^n x_i \sum\limits_{i=1}^n y_i }{ n \sum\limits_{i=1}^n x_i^2 - \left(\sum\limits_{i=1}^n x_i\right)^2 }
\end{align}</math>
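
Because these expanded forms depend only on the running sums of {{math|''x''<sub>''i''</sub>}}, {{math|''y''<sub>''i''</sub>}}, {{math|''x''<sub>''i''</sub><sup>2</sup>}} and {{math|''x''<sub>''i''</sub>''y''<sub>''i''</sub>}}, they can be evaluated in a single pass over the data without first computing the means, as in the following sketch (the function name is illustrative).

<syntaxhighlight lang="python">
def ols_fit_single_pass(pairs):
    """Accumulate running sums and apply the expanded formulas (illustrative sketch)."""
    n = s_x = s_y = s_xx = s_xy = 0.0
    for xi, yi in pairs:                 # pairs: any iterable of (x_i, y_i)
        n += 1
        s_x += xi
        s_y += yi
        s_xx += xi * xi
        s_xy += xi * yi
    d = n * s_xx - s_x ** 2              # shared denominator
    alpha_hat = (s_y * s_xx - s_x * s_xy) / d
    beta_hat = (n * s_xy - s_x * s_y) / d
    return alpha_hat, beta_hat
</syntaxhighlight>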
 
Substituting the above expressions for <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math> into the original solution yields
 
: <math display="block">\frac{ y - \bar{y}}{s_y} = r_{xy} \frac{ x - \bar{x}}{s_x} .</math>
 
This shows that {{math|''r''<sub>''xy''</sub>}} is the slope of the regression line of the [[Standard score|standardized]] data points (and that this line passes through the origin). Since <math>-1 \leq r_{xy} \leq 1</math>, it follows that if {{mvar|x}} is some measurement and {{mvar|y}} is a follow-up measurement from the same item, we expect {{mvar|y}} (on average) to be closer to the mean measurement than {{mvar|x}} was. This phenomenon is known as [[Regression_toward_the_mean#Definition_for_simple_linear_regression_of_data_points|regression toward the mean]].
Generalizing the <math>\bar x</math> notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:
 
:<math display="block">\overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i.</math>
 
This notation allows us a concise formula for {{math|''r''<sub>''xy''</sub>}}:
 
:<math display="block">r_{xy} = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \sqrt{ \left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)} } .</math>
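
As a sketch, this formula can be evaluated from the five bar-averages; the helper below assumes `x` and `y` are equal-length lists of observations.

<syntaxhighlight lang="python">
from math import sqrt

def correlation(x, y):
    """Sample correlation r_xy from the five bar-averages (illustrative sketch)."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    xy_bar = sum(a * b for a, b in zip(x, y)) / n
    xx_bar = sum(a * a for a in x) / n
    yy_bar = sum(b * b for b in y) / n
    return (xy_bar - x_bar * y_bar) / sqrt((xx_bar - x_bar ** 2) * (yy_bar - y_bar ** 2))
</syntaxhighlight>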
 
The [[coefficient of determination]] ("R squared") is equal to <math>r_{xy}^2</math> when the model is linear with a single independent variable. See [[Correlation#Pearson's product-moment coefficient|sample correlation coefficient]] for additional details.
 
=== Interpretation about the slope ===
Multiplying each term of the summation in the numerator by <math>\frac{x_i - \bar{x}}{x_i - \bar{x}} = 1</math> (which leaves its value unchanged) gives:
 
: <math display="block">\begin{align}
\widehat\beta &= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right) \left(y_i - \bar{y}\right) }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 } \\[1ex]
&= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 \frac{(y_i - \bar{y})}{(x_i - \bar{x})} }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 } \\[1ex]
&= \sum_{i=1}^n \frac{ \left(x_i - \bar{x}\right)^2}{ \sum_{j=1}^n \left(x_j - \bar{x}\right)^2 } \frac{(y_i - \bar{y})}{(x_i - \bar{x})} \\[6pt]
\end{align}</math>
 
This shows that the slope of the regression line is a weighted average of the quantities <math>\frac{y_i - \bar{y}}{x_i - \bar{x}}</math>, each of which is the slope of the line connecting the {{mvar|i}}-th point to the center of mass of all points, <math>(\bar x,\, \bar y)</math>. The weights <math>(x_i - \bar{x})^2</math> give more influence to points farther from <math>\bar x</math>: the slope of the line from such a point to the center is less sensitive to small errors in the point's position, so it carries more information about the overall slope.
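
This interpretation can be checked numerically; the sketch below computes the weighted average of the per-point slopes toward <math>(\bar x,\, \bar y)</math>, which equals <math>\widehat\beta</math> provided no {{math|''x''<sub>''i''</sub>}} coincides with <math>\bar x</math>.

<syntaxhighlight lang="python">
def slope_as_weighted_average(x, y):
    """beta_hat as the weighted average of per-point slopes toward the centroid (sketch)."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    weights = [(xi - x_bar) ** 2 for xi in x]
    slopes = [(yi - y_bar) / (xi - x_bar) for xi, yi in zip(x, y)]  # assumes x_i != x_bar
    return sum(w * s for w, s in zip(weights, slopes)) / sum(weights)
</syntaxhighlight>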
 
=== Interpretation about the intercept ===
 
: <math display="block">\begin{align}
\widehat\alpha & = \bar{y} - \widehat\beta\,\bar{x}, \\[5pt]
\end{align}</math>
 
Given <math>\widehat\beta = \tan(\theta) = \Delta y / \Delta x</math>, with <math>\theta</math> the angle the line makes with the positive {{mvar|x}} axis, moving along the line from the center of mass <math>(\bar x,\, \bar y)</math> back to <math>x = 0</math> changes {{mvar|x}} by <math>\Delta x = -\bar{x}</math> and hence {{mvar|y}} by <math>\Delta y = -\widehat\beta\,\bar{x}</math>, so the line crosses the {{mvar|y}} axis at <math>y_{\text{intercept}} = \bar{y} - \widehat\beta\,\bar{x} = \widehat\alpha.</math>
 
=== Interpretation about the correlation===
| The regression line goes through the ''center of mass'' point, <math>(\bar x,\, \bar y)</math>, if the model includes an intercept term (i.e., not forced through the origin).
| The sum of the residuals is zero if the model includes an intercept term:
: <math display="block">\sum_{i=1}^n \widehat{\varepsilon}_i = 0.</math>
| The residuals and {{mvar|x}} values are uncorrelated (whether or not there is an intercept term in the model), meaning:
: <math display="block">\sum_{i=1}^n x_i \widehat{\varepsilon}_i \;=\; 0</math>
| The relationship between <math>\rho_{xy}</math> (the [[Pearson_correlation_coefficient#For_a_population|correlation coefficient for the population]]) and the population variances of <math>y</math> (<math>\sigma_y^2</math>) and of the error term <math>\varepsilon</math> (<math>\sigma_\varepsilon^2</math>) is:<ref name = "Valliant2013">Valliant, Richard, Jill A. Dever, and Frauke Kreuter. Practical tools for designing and weighting survey samples. New York: Springer, 2013.</ref>{{rp|401}}
: <math display="block">\sigma_\varepsilon^2 = (1-\rho_{xy}^2)\sigma_y^2</math>
For the extreme values of <math>\rho_{xy}</math> this is self-evident: when <math>\rho_{xy} = 0</math> then <math>\sigma_\varepsilon^2 = \sigma_y^2</math>, and when <math>\rho_{xy} = 1</math> then <math>\sigma_\varepsilon^2 = 0</math>. More generally, since <math>\beta = \rho_{xy}\,\sigma_y/\sigma_x</math> and <math>\sigma_y^2 = \beta^2\sigma_x^2 + \sigma_\varepsilon^2</math> when the error is uncorrelated with {{mvar|x}}, the identity follows directly.
}}
 
Since the data in this context is defined to be (''x'', ''y'') pairs for every observation, the ''mean response'' at a given value of ''x'', say ''x<sub>d</sub>'', is an estimate of the mean of the ''y'' values in the population at the ''x'' value of ''x<sub>d</sub>'', that is <math>\hat{E}(y \mid x_d) \equiv\hat{y}_d\!</math>. The variance of the mean response is given by:<ref>{{cite book|title = Applied Regression Analysis|edition = 3rd|last1 = Draper |first1 = N. R. |last2 = Smith |first2 = H.|publisher = John Wiley|year = 1998|isbn = 0-471-17082-8}}</ref>
 
: <math display="block">\operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) = \operatorname{Var}\left(\hat{\alpha}\right) + \left(\operatorname{Var} \hat{\beta}\right)x_d^2 + 2 x_d \operatorname{Cov} \left(\hat{\alpha}, \hat{\beta} \right) .</math>
 
This expression can be simplified to
 
:<math display="block">\operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) =\sigma^2\left(\frac{1}{m} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right),</math>
 
where ''m'' is the number of data points.
To demonstrate this simplification, one can make use of the identity
 
: <math display="block">\sum_i (x_i - \bar{x})^2 = \sum_i x_i^2 - \frac 1 m \left(\sum_i x_i\right)^2 .</math>
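
For a given error variance, the simplified expression is straightforward to evaluate; in the sketch below the argument `sigma2` stands in for <math>\sigma^2</math>, which in practice must itself be estimated.

<syntaxhighlight lang="python">
def var_mean_response(x, x_d, sigma2):
    """Variance of the estimated mean response at x_d, given the error variance sigma2."""
    m = len(x)                                    # number of data points
    x_bar = sum(x) / m
    s_xx = sum((xi - x_bar) ** 2 for xi in x)     # sum of (x_i - xbar)^2
    return sigma2 * (1.0 / m + (x_d - x_bar) ** 2 / s_xx)
</syntaxhighlight>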
 
=== Variance of the predicted response ===
The ''predicted response'' distribution is the predicted distribution of the residuals at the given point ''x<sub>d</sub>''. So the variance is given by
 
: <math display="block">
\begin{align}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta} x_d \right] \right) &= \operatorname{Var} (y_d) + \operatorname{Var} \left(\hat{\alpha} + \hat{\beta}x_d\right) - 2\operatorname{Cov}\left(y_d,\left[\hat{\alpha} + \hat{\beta} x_d \right]\right)\\
&= \operatorname{Var} (y_d) + \operatorname{Var} \left(\hat{\alpha} + \hat{\beta}x_d\right),
\end{align}
</math>
where the covariance term vanishes because a new observation {{math|''y''<sub>''d''</sub>}} is independent of the data used to fit the model.
Since <math>\operatorname{Var}(y_d)=\sigma^2</math> (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by
 
: <math display="block">
\begin{align}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta} x_d \right] \right) & = \sigma^2 + \sigma^2\left(\frac 1 m + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right)\\[4pt]
& = \sigma^2\left(1 + \frac{1}{m} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right)
\end{align}
</math>
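
The predicted-response variance differs from the mean-response variance only by the additional <math>\sigma^2</math> term, as in this brief sketch (same assumptions as above, with `sigma2` standing in for <math>\sigma^2</math>).

<syntaxhighlight lang="python">
def var_predicted_response(x, x_d, sigma2):
    """Variance of a new observation predicted at x_d, given the error variance sigma2."""
    m = len(x)
    x_bar = sum(x) / m
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    return sigma2 * (1.0 + 1.0 / m + (x_d - x_bar) ** 2 / s_xx)  # extra sigma2 for the new error
</syntaxhighlight>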
 
 
===Confidence intervals===
 
====Normality assumption====
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean {{mvar|β}} and variance <math style="height:1.5em" display="inline">\sigma^2\left/\sum_i(x_i - \bar{x})^2\right.,</math> where {{math|''σ''<sup>2</sup>}} is the variance of the error terms (see [[Proofs involving ordinary least squares]]). At the same time the sum of squared residuals {{mvar|Q}} is distributed proportionally to {{math|[[chi-squared distribution|''χ''<sup>2</sup>]]}} with {{math|''n'' − 2}} degrees of freedom, and independently from <math>\widehat{\beta}</math>. This allows us to construct a {{mvar|t}}-value
 
: <math display="block">t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}}\ \sim\ t_{n - 2},</math>
 
where
 
: <math display="block"> s_\widehat{\beta} = \sqrt{ \frac{\frac{1}{n - 2}\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}} {\sum_{i=1}^n (x_i -\bar{x})^2} }</math>
 
is the unbiased ''standard error'' estimator of the estimator <math>\widehat{\beta}</math>.
Line 212 ⟶ 208:
This {{mvar|t}}-value has a [[Student's t-distribution|Student's {{mvar|t}}]]-distribution with {{math|''n'' − 2}} degrees of freedom. Using it we can construct a confidence interval for {{mvar|β}}:
 
: <math display="block"> \beta \in \left[\widehat\beta - s_{\widehat\beta} t^*_{n - 2},\ \widehat\beta + s_{\widehat\beta} t^*_{n - 2}\right],</math>
 
at confidence level {{math|(1 − ''γ'')}}, where <math>t^*_{n - 2}</math> is the <math>\scriptstyle \left(1 \;-\; \frac{\gamma}{2}\right)\text{-th}</math> quantile of the {{math|''t''<sub>''n''−2</sub>}} distribution. For example, if {{math|''γ'' {{=}} 0.05}} then the confidence level is 95%.
Line 218 ⟶ 214:
Similarly, the confidence interval for the intercept coefficient {{mvar|α}} is given by
 
: <math display="block">\alpha \in \left[ \widehat\alpha - s_{\widehat\alpha} t^*_{n - 2},\ \widehat\alpha + s_\widehat{\alpha} t^*_{n - 2}\right],</math>
 
at confidence level (1 − ''γ''), where
 
: <math display="block">s_{\widehat\alpha} = s_\widehat{\beta}\sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n - 2)} \left(\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2} \right) \frac{\sum_{i=1}^n x_i^2} {\sum_{i=1}^n (x_i - \bar{x})^2} }</math>
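
A sketch of these interval computations, using SciPy only for the Student's {{mvar|t}} quantile and assuming `x` and `y` are equal-length lists of observations (the function name and defaults are illustrative):

<syntaxhighlight lang="python">
from scipy.stats import t

def confidence_intervals(x, y, gamma=0.05):
    """Confidence intervals for alpha and beta at level 1 - gamma (illustrative sketch)."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    beta_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
    alpha_hat = y_bar - beta_hat * x_bar
    residuals = [yi - alpha_hat - beta_hat * xi for xi, yi in zip(x, y)]
    s2 = sum(e ** 2 for e in residuals) / (n - 2)        # unbiased estimate of sigma^2
    se_beta = (s2 / s_xx) ** 0.5
    se_alpha = se_beta * (sum(xi ** 2 for xi in x) / n) ** 0.5
    t_star = t.ppf(1 - gamma / 2, n - 2)                 # (1 - gamma/2)-quantile of t_{n-2}
    return ((alpha_hat - t_star * se_alpha, alpha_hat + t_star * se_alpha),
            (beta_hat - t_star * se_beta, beta_hat + t_star * se_beta))
</syntaxhighlight>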
 
[[Image:Okuns law with confidence bands.svg|thumb|300px|The US "changes in unemployment – GDP growth" regression with the 95% confidence bands.]]
The confidence intervals for {{mvar|α}} and {{mvar|β}} give us the general idea where these regression coefficients are most likely to be. For example, in the [[Okun's law]] regression shown here the point estimates are
 
: <math display="block">\widehat{\alpha} = 0.859, \qquad \widehat{\beta} = -1.817.</math>
 
The 95% confidence intervals for these estimates are
 
: <math display="block">\alpha \in \left[\,0.76, 0.96\right], \qquad \beta \in \left[-2.06, -1.58 \,\right].</math>
 
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown<ref>Casella, G. and Berger, R. L. (2002), "Statistical Inference" (2nd Edition), Cengage, {{ISBN|978-0-534-24312-8}}, pp. 558–559.</ref> that at confidence level (1&nbsp;−&nbsp;''γ'') the confidence band has hyperbolic form given by the equation
 
: <math display="block">(\alpha + \beta \xi) \in \left[ \,\widehat{\alpha} + \widehat{\beta} \xi \pm t^*_{n - 2} \sqrt{ \left(\frac{1}{n - 2} \sum\widehat{\varepsilon}_i^{\,2} \right) \cdot \left(\frac{1}{n} + \frac{(\xi - \bar{x})^2}{\sum(x_i - \bar{x})^2}\right)}\,\right].</math>
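
The band can be evaluated pointwise; the sketch below returns its half-width at an abscissa `xi`, with `t_star` supplied as the appropriate quantile of {{math|''t''<sub>''n''−2</sub>}} and `residuals` as the fitted residuals (names are illustrative).

<syntaxhighlight lang="python">
def band_half_width(x, residuals, xi, t_star):
    """Half-width of the confidence band at the abscissa xi (illustrative sketch)."""
    n = len(x)
    x_bar = sum(x) / n
    s_xx = sum((v - x_bar) ** 2 for v in x)
    s2 = sum(e ** 2 for e in residuals) / (n - 2)   # (1/(n-2)) * sum of squared residuals
    return t_star * (s2 * (1.0 / n + (xi - x_bar) ** 2 / s_xx)) ** 0.5
</syntaxhighlight>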
 
When the model assumes that the intercept is fixed and equal to 0 (<math>\alpha = 0</math>), the standard error of the slope becomes:
 
: <math display="block"> s_\widehat{\beta} = \sqrt{ \frac{1}{n - 1} \frac{\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}} {\sum_{i=1}^n x_i^2} }</math>
 
with <math> \hat{\varepsilon}_i = y_i - \hat y_i</math>.
This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the [[Ordinary least squares|OLS]] article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
 
:{|class="wikitable" style="text-align:right; margin-left:1.5em;"
|-
! style="text-align:left;" | Height (m), ''x<sub>i</sub>''
| 1.47 || 1.50 || 1.52 || 1.55 || 1.57 || 1.60 || 1.63 || 1.65 || 1.68 || 1.70 || 1.73 || 1.75 || 1.78 || 1.80 || 1.83
|-
! style="text-align:left;" | Mass (kg), ''y<sub>i</sub>''
| 52.21 || 53.12 || 54.48 || 55.84 || 57.20 || 58.57 || 59.93 || 61.29 || 63.11 || 64.47 || 66.28 || 68.10 || 69.92 || 72.19 || 74.46
|}
There are ''n'' = 15 points in this data set. Hand calculations would be started by finding the following five sums:
 
: <math display="block">\begin{align}
S_{x} &= \sum_i x_i \, = 24.76, &\qquad S_{y} &= \sum_i y_i \, = 931.17, \\[5pt]
S_{xx} &= \sum_i x_i^2 = 41.0532, &\;\;\, S_{yy} &= \sum_i y_i^2 = 58498.5439, \\[5pt]
S_{xy} &= \sum_i x_i y_i = 1548.2453 &
\end{align}</math>
 
These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
 
: <math display="block">\begin{align}
\widehat\beta &= \frac{nS_{xy} - S_xS_y}{nS_{xx} - S_x^2} = 61.272 \\[8pt]
\widehat\alpha &= \frac{1}{n}S_y - \widehat{\beta} \frac{1}{n}S_x = -39.062 \\[8pt]
The 0.975 quantile of Student's ''t''-distribution with 13 degrees of freedom is {{math|''t''{{sup|*}}<sub style{{=}}"position:relative; left:-0.3em;">13</sub> {{=}} 2.1604}}, and thus the 95% confidence intervals for {{mvar|α}} and {{mvar|β}} are
 
: <math display="block">\begin{align}
& \alpha \in [\,\widehat{\alpha} \mp t^*_{13} s_\widehat{\alpha} \,] = [\,{-45.4},\ {-32.7}\,] \\[5pt]
& \beta \in [\,\widehat{\beta} \mp t^*_{13} s_\widehat{\beta} \,] = [\, 57.4,\ 65.1 \,]
The [[Pearson product-moment correlation coefficient|product-moment correlation coefficient]] might also be calculated:
 
: <math display="block">\widehat{r} = \frac{nS_{xy} - S_xS_y}{\sqrt{(nS_{xx} - S_x^2)(nS_{yy} - S_y^2)}} = 0.9946</math>
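
These figures can be reproduced in a few lines; the sketch below recomputes the point estimates and the correlation from the five sums (comments show the rounded values quoted above).

<syntaxhighlight lang="python">
from math import sqrt

# Height (m) and mass (kg) data from the table above
x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
     1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
     63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

n = len(x)
s_x, s_y = sum(x), sum(y)
s_xx, s_yy = sum(v * v for v in x), sum(v * v for v in y)
s_xy = sum(a * b for a, b in zip(x, y))

beta_hat = (n * s_xy - s_x * s_y) / (n * s_xx - s_x ** 2)   # approx. 61.272
alpha_hat = s_y / n - beta_hat * s_x / n                    # approx. -39.062
r = (n * s_xy - s_x * s_y) / sqrt((n * s_xx - s_x ** 2) * (n * s_yy - s_y ** 2))  # approx. 0.9946
</syntaxhighlight>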
 
==Alternatives==
Sometimes it is appropriate to force the regression line to pass through the origin, because {{mvar|x}} and {{mvar|y}} are assumed to be proportional. For the model without the intercept term, {{math|''y'' {{=}} ''βx''}}, the OLS estimator for {{mvar|β}} simplifies to
 
: <math display="block">\widehat{\beta} = \frac{ \sum_{i=1}^n x_i y_i }{ \sum_{i=1}^n x_i^2 } = \frac{\overline{x y}}{\overline{x^2}} </math>
 
Substituting {{math|(''x'' − ''h'', ''y'' − ''k'')}} in place of {{math|(''x'', ''y'')}} gives the regression through {{math|(''h'', ''k'')}}:
 
: <math display="block">\begin{align}
\widehat\beta &= \frac{ \sum_{i=1}^n (x_i - h) (y_i - k) }{ \sum_{i=1}^n (x_i - h)^2 } = \frac{\overline{(x - h) (y - k)}}{\overline{(x - h)^2}} \\[6pt]
&= \frac{\overline{x y} - k \bar{x} - h \bar{y} + h k }{\overline{x^2} - 2 h \bar{x} + h^2} \\[6pt]
&= \frac{\overline{x y} - \bar{x}\bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} .
\end{align}</math>
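
A brief sketch of the no-intercept fit, and of a fit forced through an arbitrary point {{math|(''h'', ''k'')}} obtained by shifting the data (function names are illustrative):

<syntaxhighlight lang="python">
def slope_through_origin(x, y):
    """Least-squares slope for the no-intercept model y = beta * x (illustrative sketch)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def slope_through_point(x, y, h, k):
    """Least-squares slope of a line forced through the point (h, k) (illustrative sketch)."""
    return slope_through_origin([a - h for a in x], [b - k for b in y])
</syntaxhighlight>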
 
==See also==
* {{slink|Design matrix#Simple linear regression}}
* [[Linear trend estimation]]
* [[Segmented regression|Linear segmented regression]]
 
==References==
{{Reflist}}
 
==External links==