Simple linear regression: Difference between revisions

:<math>\widehat\varepsilon_i =y_i-\alpha -\beta x_i.</math>
 
In other words, <math>\widehat\alpha</math> and <math>\widehat\beta</math> solve the following [[minimization problem]]:
 
: <math>(\widehat\alpha,\, \widehat\beta) = \operatorname{argmin}\left(Q(\alpha, \beta)\right),</math>
where the [[objective function]] {{mvar|Q}} is:
: <math>Q(\alpha, \beta) = \sum_{i=1}^n\widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i -\alpha - \beta x_i)^2\ .</math>
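A minimal numerical sketch (not part of the article) of this minimization: the code below computes the well-known closed-form minimizers <math>\widehat\beta = s_{xy}/s_{xx}</math> and <math>\widehat\alpha = \bar y - \widehat\beta\,\bar x</math> and checks that perturbing them cannot decrease {{mvar|Q}}. The function names and the small data set are illustrative only.

```python
def fit_simple_linear_regression(x, y):
    """Closed-form least-squares estimates for y = alpha + beta * x."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Centered cross-products; the 1/n normalizations cancel in the ratio.
    s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    beta_hat = s_xy / s_xx
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat

def Q(alpha, beta, x, y):
    """Objective function: sum of squared residuals."""
    return sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 4.1, 5.9, 8.1]
a, b = fit_simple_linear_regression(x, y)
# Q at the closed-form estimates is no larger than at nearby (alpha, beta).
assert Q(a, b, x, y) <= Q(a + 0.1, b, x, y)
assert Q(a, b, x, y) <= Q(a, b - 0.1, x, y)
```

Any nonzero perturbation of either estimate strictly increases {{mvar|Q}}, since {{mvar|Q}} is a strictly convex quadratic in <math>(\alpha, \beta)</math> whenever the <math>x_i</math> are not all equal.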
 
By expanding to get a quadratic expression in <math>\alpha</math> and <math>\beta,</math> we can derive the values of <math>\alpha</math> and <math>\beta</math> that minimize the objective function {{mvar|Q}} (these minimizing values are denoted <math>\widehat{\alpha}</math> and <math>\widehat{\beta}</math>):<ref>Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285</ref>
 
: <math display="inline">\begin{align}