{{Regression bar}}
In [[statistics]], '''simple linear regression''' ('''SLR''') is a [[linear regression]] model with a single [[covariate|explanatory variable]].<ref>{{cite book |last=Seltman |first=Howard J. |date=2008-09-08 |title=Experimental Design and Analysis |url=http://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf |page=227}}</ref><ref name=":0">{{cite web |url=http://ci.columbia.edu/ci/premba_test/c0331/s7/s7_6.html |title=Statistical Sampling and Regression: Simple Linear Regression |publisher=Columbia University |access-date=2016-10-17 |quote=When one independent variable is used in a regression, it is called a simple regression;(...)}}</ref><ref>{{cite book |last=Lane |first=David M. |title=Introduction to Statistics |url=http://onlinestatbook.com/Online_Statistics_Education.pdf |page=462}}</ref><ref>{{Cite journal|last1=Zou KH|last2=Tuncali K|last3=Silverman SG|date=2003|title=Correlation and simple linear regression.|journal=Radiology|language=English|volume=227|issue=3|pages=617–22|issn=0033-8419|oclc=110941167|doi=10.1148/radiol.2273011499|pmid=12773666|url=https://repositorio.unal.edu.co/handle/unal/81200 }}</ref><ref>{{Cite journal|last1=Altman|first1=Naomi|last2=Krzywinski|first2=Martin|date=2015|title=Simple linear regression|journal=Nature Methods|language=English|volume=12|issue=11|pages=999–1000|issn=1548-7091|oclc=5912005539|doi=10.1038/nmeth.3627|pmid=26824102|s2cid=261269711 |doi-access=free}}</ref> That is, it concerns two-dimensional sample points with [[dependent and independent variables|one independent variable and one dependent variable]] (conventionally, the ''x'' and ''y'' coordinates in a [[Cartesian coordinate system]]) and finds a linear function (a non-vertical [[straight line]]) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable.
The adjective ''simple'' refers to the fact that the outcome variable is related to a single predictor.
It is common to make the additional stipulation that the [[ordinary least squares]] (OLS) method should be used: the accuracy of each predicted value is measured by its squared ''[[errors and residuals|residual]]'' (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible.
In this case, the slope of the fitted line is equal to the [[Pearson correlation coefficient|correlation]] between {{mvar|y}} and {{mvar|x}} corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass {{math|({{overline|''x''}}, {{overline|''y''}})}} of the data points.
==Formulation and computation==
Consider the [[mathematical model|model]] function
<math display="block">y = \alpha + \beta x,</math>
which describes a line with slope {{mvar|β}} and {{mvar|y}}-intercept {{mvar|α}}. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the [[errors and residuals|errors]]. Suppose we observe {{mvar|n}} data pairs and call them {{math|{(''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), ''i'' {{=}} 1, ..., ''n''}}}. We can describe the underlying relationship between {{math|''y''<sub>''i''</sub>}} and {{math|''x''<sub>''i''</sub>}} involving this error term {{math|''ε''<sub>''i''</sub>}} by
<math display="block">y_i = \alpha + \beta x_i + \varepsilon_i.</math>
This relationship between the true (but unobserved) underlying parameters {{mvar|α}} and {{mvar|β}} and the data points is called a linear regression model.
The goal is to find estimated values <math>\widehat\alpha</math> and <math>\widehat\beta</math> for the parameters {{mvar|α}} and {{mvar|β}} which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the [[Ordinary least squares|least-squares]] approach: a line that minimizes the [[residual sum of squares|sum of squared residuals]] (see also [[Errors and residuals]]) <math>\widehat\varepsilon_i</math> (differences between actual and predicted values of the dependent variable ''y''), each of which is given by, for any candidate parameter values <math>\alpha</math> and <math>\beta</math>,
<math display="block">\widehat\varepsilon_i = y_i - \alpha - \beta x_i.</math>
In other words, <math>\widehat\alpha</math> and <math>\widehat\beta</math> solve the following [[minimization problem]]:
<math display="block">(\widehat\alpha,\, \widehat\beta) = \underset{\alpha,\,\beta}{\operatorname{arg\,min}}\; Q(\alpha, \beta),</math>
where the [[objective function]] {{mvar|Q}} is:
<math display="block">Q(\alpha, \beta) = \sum_{i=1}^n\widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i -\alpha - \beta x_i)^2\ .</math>
By expanding to get a quadratic expression in <math>\alpha</math> and <math>\beta,</math> we can derive the minimizing values of the function arguments, denoted <math>\widehat\alpha</math> and <math>\widehat\beta</math>:
<math display="block">\begin{align}
\widehat\alpha & = \bar{y} - \widehat\beta\,\bar{x}, \\[5pt]
\widehat\beta &= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right) \left(y_i - \bar{y}\right) }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 } = \frac{ \sum_{i=1}^n \Delta x_i \,\Delta y_i }{ \sum_{i=1}^n \Delta x_i^2 }
\end{align}</math>
Here we have introduced
{{unordered list
|<math>\bar x</math> and <math>\bar y</math> as the average of the {{math|''x''<sub>''i''</sub>}} and {{math|''y''<sub>''i''</sub>}}, respectively
|<math>\Delta x_i</math> and <math>\Delta y_i</math> as the [[deviation (statistics)|deviations]] in {{math|''x''<sub>''i''</sub>}} and {{math|''y''<sub>''i''</sub>}} with respect to their respective means.
}}
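For illustration, the following is a minimal Python sketch of these closed-form estimates; the function name and the sample data are illustrative only and not part of any standard library.
<syntaxhighlight lang="python">
# Least-squares estimates from the deviation formulas above:
# beta_hat = sum(dx * dy) / sum(dx^2),  alpha_hat = y_bar - beta_hat * x_bar.
def fit_simple_linear_regression(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    beta = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
            / sum((xi - x_bar) ** 2 for xi in x))
    alpha = y_bar - beta * x_bar  # the fitted line passes through (x_bar, y_bar)
    return alpha, beta

# Example call with made-up data:
alpha_hat, beta_hat = fit_simple_linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
</syntaxhighlight>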
=== Expanded formulas ===
The above equations are efficient to use if the means of the x and y variables (<math>\bar{x} \text{ and } \bar{y}</math>) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the <math>\widehat\alpha\text{ and }\widehat\beta</math> equations. These expanded equations may be derived from the more general [[polynomial regression]] equations<ref name=":1" /><ref>{{Cite web |title=Mathematics of Polynomial Regression |url=http://polynomialregression.drque.net/math.html |website=Polynomial Regression, A PHP regression class}}</ref> by defining the regression polynomial to be of order 1, as follows.
<math display="block">\begin{bmatrix}
n & \sum_{i=1}^n x_i \\[1ex]
\sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2
\end{bmatrix}
\begin{bmatrix}
\widehat\alpha \\[1ex]
\widehat\beta
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^n y_i \\[1ex]
\sum_{i=1}^n y_i x_i
\end{bmatrix}
</math>
The above [[system of linear equations]] may be solved directly, or stand-alone equations for <math>\widehat\alpha\text{ and }\widehat\beta</math> may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.<ref>{{Cite web |title=Numeracy, Maths and Statistics - Academic Skills Kit, Newcastle University |url=https://www.ncl.ac.uk/webtemplate/ask-assets/external/maths-resources/statistics/regression-and-correlation/simple-linear-regression.html |access-date=30 Jan 2024 |website=Simple Linear Regression}}</ref><ref name=":1">{{Cite web |last=Muthukrishnan |first=Gowri |date=17 Jun 2018 |title=Maths behind Polynomial regression, Muthukrishnan |url=https://muthu.co/maths-behind-polynomial-regression/ |access-date=30 Jan 2024 |website=Maths behind Polynomial regression}}</ref>
<math display="block">\begin{align}
\widehat\alpha &= \frac{\sum\limits_{i=1}^n y_i \sum\limits_{i=1}^n x_i^2 - \sum\limits_{i=1}^n x_i \sum\limits_{i=1}^n x_i y_i }{n \sum\limits_{i=1}^n x_i^2 - \left(\sum\limits_{i=1}^n x_i\right)^2 }
\\[2ex]
\widehat\beta
&= \frac {n \sum\limits_{i=1}^n x_i y_i - \sum\limits_{i=1}^n x_i \sum\limits_{i=1}^n y_i }{ n \sum\limits_{i=1}^n x_i^2 - \left(\sum\limits_{i=1}^n x_i\right)^2 }
\end{align}</math>
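A sketch of the same computation from raw sums, equivalent to solving the 2×2 system of normal equations above (illustrative code, not from any library):
<syntaxhighlight lang="python">
# Slope and intercept from the expanded (raw-sum) formulas; no means are
# computed first, matching the normal-equations form above.
def fit_from_sums(x, y):
    n = len(x)
    s_x, s_y = sum(x), sum(y)
    s_xx = sum(xi * xi for xi in x)
    s_xy = sum(xi * yi for xi, yi in zip(x, y))
    d = n * s_xx - s_x ** 2  # common denominator of both estimates
    alpha = (s_y * s_xx - s_x * s_xy) / d
    beta = (n * s_xy - s_x * s_y) / d
    return alpha, beta
</syntaxhighlight>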
== Interpretation ==
===Relationship with the sample covariance matrix===
The solution can be reformulated using elements of the [[covariance matrix]]:
<math display="block">
\widehat\beta = \frac{ s_{x, y} }{ s^2_{x} } = r_{xy} \frac{s_y}{s_x}
</math>
where
{{unordered list
|{{math|''r''<sub>''xy''</sub>}} is the [[Correlation#Sample correlation coefficient|sample correlation coefficient]] between {{mvar|x}} and {{mvar|y}}
|{{math|''s''<sub>''x''</sub>}} and {{math|''s<sub>y</sub>''}} are the [[standard deviation#Uncorrected sample standard deviation|uncorrected sample standard deviations]] of {{mvar|x}} and {{mvar|y}}
|<math>s^2_x</math> and <math>s_{x, y}</math> are the [[Variance#Sample variance|sample variance]] and [[Sample mean and covariance#Sample covariance|sample covariance]], respectively
}}
Substituting the above expressions for <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math> into the original solution yields
<math display="block">\frac{ y - \bar{y}}{s_y} = r_{xy} \frac{ x - \bar{x}}{s_x} .</math>
This shows that {{math|''r''<sub>''xy''</sub>}} is the slope of the regression line of the [[Standard score|standardized]] data points (and that this line passes through the origin). Since <math>-1 \leq r_{xy} \leq 1</math>, if {{mvar|x}} is some measurement and {{mvar|y}} is a follow-up measurement from the same item, we expect {{mvar|y}} (on average) to be closer to the mean measurement than {{mvar|x}} was to its own mean. This phenomenon is known as [[Regression_toward_the_mean#Definition_for_simple_linear_regression_of_data_points|regression toward the mean]].
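The slope–correlation relationship can be checked numerically; the sketch below assumes Python 3.10+ for <code>statistics.correlation</code>.
<syntaxhighlight lang="python">
import statistics

# beta_hat = r_xy * s_y / s_x; the ratio of standard deviations is the same
# whether corrected or uncorrected, since the 1/(n-1) factors cancel.
def slope_from_correlation(x, y):
    r_xy = statistics.correlation(x, y)  # sample correlation coefficient
    return r_xy * statistics.stdev(y) / statistics.stdev(x)
</syntaxhighlight>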
Generalizing the <math>\bar x</math> notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:
<math display="block">\overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i.</math>
This notation gives a concise formula for {{math|''r''<sub>''xy''</sub>}}:
<math display="block">r_{xy} = \frac{\overline{xy} - \bar{x}\bar{y}}{\sqrt{\left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)}}.</math>
The [[coefficient of determination]] ("R squared") is equal to <math>r_{xy}^2</math> when the model is linear with a single independent variable. See [[Correlation#Pearson's product-moment coefficient|sample correlation coefficient]] for additional details.
===Interpretation of the slope===
By multiplying all members of the summation in the numerator by <math>\frac{x_i - \bar{x}}{x_i - \bar{x}} = 1</math> (thereby not changing it), we get
<math display="block">\begin{align}
\widehat\beta &= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right) \left(y_i - \bar{y}\right) }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 } \\[1ex]
&= \frac{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 \dfrac{y_i - \bar{y}}{x_i - \bar{x}} }{ \sum_{i=1}^n \left(x_i - \bar{x}\right)^2 } \\[1ex]
&= \sum_{i=1}^n \frac{ \left(x_i - \bar{x}\right)^2 }{ \sum_{j=1}^n \left(x_j - \bar{x}\right)^2 } \cdot \frac{y_i - \bar{y}}{x_i - \bar{x}}
\end{align}</math>
We can see that the slope (tangent of angle) of the regression line is a weighted average of the slopes <math>\frac{y_i - \bar{y}}{x_i - \bar{x}}</math> of the lines connecting each point to the center of mass <math>(\bar x,\, \bar y)</math>, with weights proportional to <math>\left(x_i - \bar{x}\right)^2</math>, so points whose {{mvar|x}} value lies further from <math>\bar x</math> carry more weight.
===Interpretation of the intercept===
<math display="block">\widehat\alpha = \bar{y} - \widehat\beta\,\bar{x}</math>
Given <math>\widehat\beta = \tan(\theta) = dy / dx \rightarrow dy = dx \times \widehat\beta</math>,
we have <math>y_{\rm intersection} = \bar{y} - dx \times \widehat\beta = \bar{y} - dy</math>, where <math>dx = \bar{x}</math> is the horizontal distance from the center of mass to the {{mvar|y}}-axis.
===Fixed versus random explanatory variable===
In the above formulation, notice that each <math>x_i</math> is a constant ("known upfront") value, while the <math>y_i</math> are random variables that depend on the linear function of <math>x_i</math> and the random term <math>\varepsilon_i</math>. This assumption is used when deriving the standard error of the slope and showing that it is [[Proofs_involving_ordinary_least_squares|unbiased]].
In this framing, when <math>x_i</math> is not actually a [[random variable]], what type of parameter does the empirical correlation <math>r_{xy}</math> estimate? The issue is that for each value {{mvar|i}} we have <math>E(x_i)=x_i</math> and <math>Var(x_i)=0</math>. A possible interpretation of <math>r_{xy}</math> is to imagine that <math>x_i</math> defines a random variable drawn from the [[Empirical distribution function|empirical distribution]] of the {{mvar|x}} values in our sample. For example, if {{mvar|x}} had 10 values from the [[natural numbers]] [1, 2, 3, ..., 10], then we can imagine {{mvar|x}} to follow a [[discrete uniform distribution]]. Under this interpretation all <math>x_i</math> have the same expectation and some positive variance. With this interpretation we can think of <math>r_{xy}</math> as the estimator of the [[Pearson_correlation_coefficient#Definition|Pearson correlation]] between the random variable {{mvar|y}} and the random variable {{mvar|x}} (as we just defined it).
==Numerical properties==
| The regression line goes through the ''center of mass'' point, <math>(\bar x,\, \bar y)</math>, if the model includes an intercept term (i.e., not forced through the origin).
| The sum of the residuals is zero if the model includes an intercept term:
<math display="block">\sum_{i=1}^n \widehat\varepsilon_i = 0.</math>
| The residuals and {{mvar|x}} values are uncorrelated (whether or not there is an intercept term in the model), meaning:
<math display="block">\sum_{i=1}^n x_i \widehat\varepsilon_i = 0.</math>
| The relationship between <math>\rho_{xy}</math> (the [[Pearson_correlation_coefficient#For_a_population|correlation coefficient for the population]]) and the population variances of <math>y</math> (<math>\sigma_y^2</math>) and the error term <math>\varepsilon</math> (<math>\sigma_\varepsilon^2</math>) is
<math display="block">\sigma_\varepsilon^2 = (1 - \rho_{xy}^2)\,\sigma_y^2.</math>
For extreme values of <math>\rho_{xy}</math> this is self-evident: when <math>\rho_{xy} = 0</math> then <math>\sigma_\varepsilon^2 = \sigma_y^2</math>, and when <math>\rho_{xy} = \pm 1</math> then <math>\sigma_\varepsilon^2 = 0</math>.
}}
==Model-based properties==
Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a [[statistical model]]. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as [[Homoscedasticity|inhomogeneity]], but this is discussed elsewhere.{{clarify|date=October 2015|reason=where?}}
To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals {{math|''ε''<sub>i</sub>}} as random variables drawn independently from some distribution with mean zero. In other words, for each value of {{mvar|x}}, the corresponding value of {{mvar|y}} is generated as a mean response {{math|''α'' + ''βx''}} plus an additional random variable {{mvar|ε}} called the ''error term'', equal to zero on average. Under such interpretation, the least-squares estimators <math>\widehat\alpha</math> and <math>\widehat\beta</math> will themselves be random variables whose means will equal the "true values" {{mvar|α}} and {{mvar|β}}. This is the definition of an unbiased estimator.
=== Variance of the mean response ===
Since the data in this context is defined to be (''x'', ''y'') pairs for every observation, the ''mean response'' at a given value of ''x'', say ''x<sub>d</sub>'', is an estimate of the mean of the ''y'' values in the population at the ''x'' value of ''x<sub>d</sub>'', that is <math>\hat{E}(y \mid x_d) \equiv\hat{y}_d\!</math>. The variance of the mean response is given by:<ref>{{cite book|title = Applied Regression Analysis|edition = 3rd|last1 = Draper |first1 = N. R. |last2 = Smith |first2 = H.|publisher = John Wiley|year = 1998|isbn = 0-471-17082-8}}</ref>
<math display="block">\operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) = \operatorname{Var}\left(\hat{\alpha}\right) + \left(\operatorname{Var} \hat{\beta}\right)x_d^2 + 2 x_d \operatorname{Cov} \left(\hat{\alpha}, \hat{\beta} \right) .</math>
This expression can be simplified to
<math display="block">\operatorname{Var}\left(\hat{\alpha} + \hat{\beta}x_d\right) =\sigma^2\left(\frac{1}{m} + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right),</math>
where ''m'' is the number of data points.
To demonstrate this simplification, one can make use of the identity
<math display="block">\sum_i (x_i - \bar{x})^2 = \sum_i x_i^2 - \frac 1 m \left(\sum_i x_i\right)^2 .</math>
=== Variance of the predicted response ===
{{Further|Prediction interval}}
The ''predicted response'' distribution is the predicted distribution of the residuals at the given point ''x<sub>d</sub>''. So the variance is given by
<math display="block">
\begin{align}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta} x_d \right] \right) &= \operatorname{Var} (y_d) + \operatorname{Var} \left(\hat{\alpha} + \hat{\beta}x_d\right) - 2\operatorname{Cov}\left(y_d,\left[\hat{\alpha} + \hat{\beta} x_d \right]\right)\\
&= \operatorname{Var} (y_d) + \operatorname{Var} \left(\hat{\alpha} + \hat{\beta}x_d\right).
\end{align}
</math>
The second line follows from the fact that <math>\operatorname{Cov}\left(y_d,\left[\hat{\alpha} + \hat{\beta} x_d \right]\right)</math> is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term <math>\operatorname{Var} \left(\hat{\alpha} + \hat{\beta}x_d\right)</math> was calculated earlier for the mean response.
Since <math>\operatorname{Var}(y_d)=\sigma^2</math> (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by
<math display="block">
\begin{align}
\operatorname{Var}\left(y_d - \left[\hat{\alpha} + \hat{\beta} x_d \right] \right) & = \sigma^2 + \sigma^2\left(\frac 1 m + \frac{\left(x_d - \bar{x}\right)^2}{\sum (x_i - \bar{x})^2}\right)\\[4pt]
& = \sigma^2\left(1 + \frac 1 m + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right).
\end{align}
</math>
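Both variance formulas are straightforward to evaluate; the sketch below uses <code>sigma2</code> to stand in for the usually unknown error variance ''σ''<sup>2</sup> (in practice replaced by an estimate such as the residual mean square).
<syntaxhighlight lang="python">
# Variance of the mean response and of a predicted (new) response at x_d.
def response_variances(x, sigma2, x_d):
    m = len(x)
    x_bar = sum(x) / m
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    var_mean = sigma2 * (1 / m + (x_d - x_bar) ** 2 / s_xx)
    var_pred = sigma2 + var_mean  # extra sigma^2 for the new observation's own error
    return var_mean, var_pred
</syntaxhighlight>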
===Confidence intervals===
The formulas given in the previous section allow one to calculate the ''point estimates'' of {{mvar|α}} and {{mvar|β}} — that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math> vary from sample to sample for a given sample size. [[Confidence interval]]s were devised to give a plausible set of values for the parameters, accounting for this sampling variability.
The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
====Normality assumption====
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean {{mvar|β}} and variance <math style="height:1.5em" display="inline">\sigma^2\left/\sum(x_i - \bar{x})^2\right.,</math> where ''σ''<sup>2</sup> is the variance of the error terms. At the same time the sum of squared residuals is distributed proportionally to [[chi-squared distribution|''χ''<sup>2</sup>]] with {{math|''n'' − 2}} degrees of freedom, and independently from <math>\widehat{\beta}</math>. This allows us to construct a {{mvar|t}}-value
<math display="block">t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}},</math>
where
<math display="block">s_{\widehat\beta} = \sqrt{ \frac{\frac{1}{n - 2}\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^n (x_i - \bar{x})^2} }</math>
is the unbiased ''standard error'' estimator of the estimator <math>\widehat{\beta}</math>.
This {{mvar|t}}-value has a [[Student's t-distribution|Student's {{mvar|t}}]]-distribution with {{math|''n'' − 2}} degrees of freedom. Using it we can construct a confidence interval for {{mvar|β}}:
<math display="block">\beta \in \left[\,\widehat\beta - s_{\widehat\beta} t^*_{n - 2},\ \widehat\beta + s_{\widehat\beta} t^*_{n - 2}\,\right]</math>
at confidence level {{math|(1 − ''γ'')}}, where <math>t^*_{n - 2}</math> is the <math>\scriptstyle \left(1 \;-\; \frac{\gamma}{2}\right)\text{-th}</math> quantile of the {{math|''t''<sub>''n''−2</sub>}} distribution. For example, if {{math|''γ'' {{=}} 0.05}} then the confidence level is 95%.
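A sketch of this interval computation, assuming SciPy is available for the {{mvar|t}} quantile (the function name is illustrative):
<syntaxhighlight lang="python">
import math
from scipy import stats

# (1 - gamma) confidence interval for the slope under normal errors.
def slope_confidence_interval(x, y, gamma=0.05):
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    beta = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
    alpha = y_bar - beta * x_bar
    residuals = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    s_beta = math.sqrt(sum(e ** 2 for e in residuals) / ((n - 2) * s_xx))
    t_star = stats.t.ppf(1 - gamma / 2, n - 2)  # (1 - gamma/2) quantile of t_{n-2}
    return beta - t_star * s_beta, beta + t_star * s_beta
</syntaxhighlight>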
Similarly, the confidence interval for the intercept coefficient {{mvar|α}} is given by
<math display="block">\alpha \in \left[\,\widehat\alpha - s_{\widehat\alpha} t^*_{n - 2},\ \widehat\alpha + s_{\widehat\alpha} t^*_{n - 2}\,\right]</math>
at confidence level (1 − ''γ''), where
<math display="block">s_{\widehat\alpha} = s_{\widehat\beta}\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2}</math>
is the standard error of the intercept estimator.
[[Image:Okuns law with confidence bands.svg|thumb|300px|The US "changes in unemployment – GDP growth" regression with the 95% confidence bands.]]
The confidence intervals for {{mvar|α}} and {{mvar|β}} give us the general idea where these regression coefficients are most likely to be. For example, in the [[Okun's law]] regression shown here the point estimates are
The 95% confidence intervals for these estimates are
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown<ref>Casella, G. and Berger, R. L. (2002), "Statistical Inference" (2nd Edition), Cengage, {{ISBN|978-0-534-24312-8}}, pp. 558–559.</ref> that at confidence level (1 − ''γ'') the confidence band has hyperbolic form given by the equation
<math display="block">(\alpha + \beta \xi) \in \left[\,\widehat{\alpha} + \widehat{\beta} \xi \pm t^*_{n - 2} \sqrt{ \left(\tfrac{1}{n - 2} \textstyle\sum\widehat{\varepsilon}_i^{\,2} \right) \cdot \left(\tfrac{1}{n} + \tfrac{(\xi - \bar{x})^2}{\sum(x_i - \bar{x})^2}\right)}\,\right].</math>
When the model assumes the intercept is fixed and equal to 0 ({{math|''α'' {{=}} 0}}), the standard error of the slope turns into
<math display="block">s_{\widehat\beta} = \sqrt{ \frac{\frac{1}{n - 1}\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^n x_i^2} },</math>
with <math> \hat{\varepsilon}_i = y_i - \hat y_i</math>.
This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the [[Ordinary least squares|OLS]] article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
There are ''n'' = 15 points in this data set. Hand calculations would be started by finding the following five sums:
<math display="block">\begin{align}
S_x &= \textstyle\sum_{i=1}^n x_i = 24.76, \\[2pt]
S_y &= \textstyle\sum_{i=1}^n y_i = 931.17, \\[2pt]
S_{xx} &= \textstyle\sum_{i=1}^n x_i^2 = 41.0532, \\[2pt]
S_{yy} &= \textstyle\sum_{i=1}^n y_i^2 = 58498.5439, \\[2pt]
S_{xy} &= \textstyle\sum_{i=1}^n x_i y_i = 1548.2453
\end{align}</math>
These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
<math display="block">\begin{align}
\widehat\beta &= \frac{nS_{xy} - S_xS_y}{nS_{xx} - S_x^2} = 61.272 \\[8pt]
\widehat\alpha &= \frac{1}{n}S_y - \widehat{\beta} \frac{1}{n}S_x = -39.062
\end{align}</math>
The 0.975 quantile of Student's ''t''-distribution with 13 degrees of freedom is {{math|''t''{{sup|*}}<sub style{{=}}"position:relative; left:-0.3em;">13</sub> {{=}} 2.1604}}, and thus the 95% confidence intervals for {{mvar|α}} and {{mvar|β}} are
<math display="block">\begin{align}
& \alpha \in [\,\widehat{\alpha} \mp t^*_{13} s_\widehat{\alpha} \,] = [\,{-45.4},\ {-32.7}\,] \\[5pt]
& \beta \in [\,\widehat{\beta} \mp t^*_{13} s_\widehat{\beta} \,] = [\, 57.4,\ 65.1 \,]
\end{align}</math>
The [[Pearson product-moment correlation coefficient|product-moment correlation coefficient]] might also be calculated:
<math display="block">\widehat{r} = \frac{nS_{xy} - S_x S_y}{\sqrt{(nS_{xx} - S_x^2)(nS_{yy} - S_y^2)}} = 0.9946</math>
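These hand calculations can be reproduced directly from the five sums; a short Python sketch using the values listed above:
<syntaxhighlight lang="python">
import math

# Reproduce the point estimates and the correlation from the five sums.
n = 15
S_x, S_y = 24.76, 931.17
S_xx, S_yy, S_xy = 41.0532, 58498.5439, 1548.2453

beta_hat = (n * S_xy - S_x * S_y) / (n * S_xx - S_x ** 2)  # approximately 61.27
alpha_hat = S_y / n - beta_hat * S_x / n                    # approximately -39.06
r_hat = (n * S_xy - S_x * S_y) / math.sqrt(
    (n * S_xx - S_x ** 2) * (n * S_yy - S_y ** 2))          # approximately 0.9946
</syntaxhighlight>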
==Alternatives==
[[File:Fitting a straight line to a data with outliers.png|thumb|Calculating the parameters of a linear model by minimizing the squared error.]]
In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to [[regression dilution]].
Other estimation methods that can be used in place of ordinary least squares include [[least absolute deviations]] (minimizing the sum of absolute values of residuals) and the [[Theil–Sen estimator]] (which chooses a line whose [[slope]] is the [[median]] of the slopes determined by pairs of sample points).
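The Theil–Sen estimator just described is easy to sketch in Python (illustrative code, not a library implementation):
<syntaxhighlight lang="python">
from itertools import combinations
from statistics import median

# Theil-Sen slope: the median of the slopes over all pairs of sample points
# with distinct x values.
def theil_sen_slope(x, y):
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(x, y), 2)
              if x1 != x2]
    return median(slopes)
</syntaxhighlight>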
[[Deming regression]] (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Minimizing the squared error, as ordinary least squares does, can also lead to a model that attempts to fit the outliers more than the data, which motivates the more robust alternatives above.
=== Line fitting ===
{{excerpt|Line fitting}}
===Simple linear regression without the intercept term (single regressor) ===
Sometimes it is appropriate to force the regression line to pass through the origin, because {{mvar|x}} and {{mvar|y}} are assumed to be proportional. For the model without the intercept term, {{math|''y'' {{=}} ''βx''}}, the OLS estimator for {{mvar|β}} simplifies to
<math display="block">\widehat{\beta} = \frac{ \sum_{i=1}^n x_i y_i }{ \sum_{i=1}^n x_i^2 } = \frac{\overline{x y}}{\overline{x^2}} </math>
Substituting {{math|(''x'' − ''h'', ''y'' − ''k'')}} in place of {{math|(''x'', ''y'')}} gives the regression through {{math|(''h'', ''k'')}}:
<math display="block">\begin{align}
\widehat\beta &= \frac{ \sum_{i=1}^n (x_i - h) (y_i - k) }{ \sum_{i=1}^n (x_i - h)^2 } = \frac{\overline{(x - h) (y - k)}}{\overline{(x - h)^2}} \\[6pt]
&= \frac{\overline{x y} - k \bar{x} - h \bar{y} + h k }{\overline{x^2} - 2 h \bar{x} + h^2} \\[6pt]
&= \frac{\overline{x y} - \bar{x} \bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} \\[6pt]
&= \frac{\operatorname{Cov}(x,y) + (\bar{x} - h)(\bar{y}-k)}{\operatorname{Var}(x) + (\bar{x} - h)^2},
\end{align}</math>
where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias).
The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
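A sketch of the shifted estimator (with {{math|''h'' {{=}} ''k'' {{=}} 0}} recovering the no-intercept estimator above); the function name is illustrative:
<syntaxhighlight lang="python">
# Least-squares slope when the line is forced through the point (h, k);
# h = k = 0 gives the regression through the origin.
def slope_through_point(x, y, h=0.0, k=0.0):
    num = sum((xi - h) * (yi - k) for xi, yi in zip(x, y))
    den = sum((xi - h) ** 2 for xi in x)
    return num / den
</syntaxhighlight>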
==See also==
* [[Linear trend estimation]]
* [[Segmented regression|Linear segmented regression]]
* [[Proofs involving ordinary least squares]]—derivation of all formulas used in this article in general multidimensional case
* [[Newey–West estimator]]
==References==
{{reflist}}
==External links==