Simple linear regression: Difference between revisions

|<math>\Delta x_i</math> and <math>\Delta y_i</math> as the [[deviation (statistics)|deviations]] in {{math|''x''<sub>''i''</sub>}} and {{math|''y''<sub>''i''</sub>}} with respect to their respective means.
}}
 
=== Expanded Formulas ===
The above equations are efficient to use if the means of the x and y variables (<math>\bar{x} \text{ and } \bar{y}</math>) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded versions of the <math>\widehat\alpha\text{ and }\widehat\beta</math> equations. These equations are algebraically equivalent to those above and are shown below without derivation.<ref>{{Cite web |title=Numeracy, Maths and Statistics - Academic Skills Kit, Newcastle University |url=https://www.ncl.ac.uk/webtemplate/ask-assets/external/maths-resources/statistics/regression-and-correlation/simple-linear-regression.html |url-status=live |access-date=30 Jan 2024 |website=Simple Linear Regression}}</ref><ref>{{Cite web |last=Muthukrishnan |first=Gowri |date=17 Jun 2018 |title=Maths behind Polynomial regression, Muthukrishnan |url=https://muthu.co/maths-behind-polynomial-regression/ |url-status=live |access-date=30 Jan 2024 |website=Maths behind Polynomial regression}}</ref>
 
 
<math>\begin{align}
\widehat\alpha &= \frac{\sum_{i=1}^n y_i \sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_i \sum_{i=1}^n x_i y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2} \\[5pt]
\widehat\beta &= \frac{n \sum_{i=1}^n x_i y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}
\end{align}</math>
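The expanded formulas can be checked numerically. The sketch below (in Python; the function name and data are illustrative, not from the article) accumulates the sums <math>\textstyle\sum x_i</math>, <math>\textstyle\sum y_i</math>, <math>\textstyle\sum x_i^2</math>, and <math>\textstyle\sum x_i y_i</math> in a single pass, without first computing the means:

```python
def fit_expanded(x, y):
    """Least-squares intercept and slope via the expanded (sum-based) formulas."""
    n = len(x)
    sx = sum(x)                              # sum of x_i
    sy = sum(y)                              # sum of y_i
    sxx = sum(v * v for v in x)              # sum of x_i^2
    sxy = sum(a * b for a, b in zip(x, y))   # sum of x_i * y_i
    denom = n * sxx - sx * sx                # shared denominator
    alpha = (sy * sxx - sx * sxy) / denom    # intercept
    beta = (n * sxy - sx * sy) / denom       # slope
    return alpha, beta

# Points lying exactly on y = 2x + 1 recover intercept 1 and slope 2.
print(fit_expanded([0, 1, 2, 3], [1, 3, 5, 7]))  # -> (1.0, 2.0)
```

Because both estimates share the denominator <math>n\sum x_i^2 - (\sum x_i)^2</math>, it is computed once and reused.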
 
== Interpretation ==