[[Image:Total least squares.svg|thumb|Deming regression. The red lines show the error in both ''x'' and ''y''. This is different from the traditional least squares method, which measures error parallel to the ''y'' axis. The case shown, with deviations measured perpendicularly, arises when errors in ''x'' and ''y'' have equal variances.]]
In [[statistics]], '''Deming regression''', named after [[W. Edwards Deming]], is an [[errors-in-variables model]] that tries to find the [[line of best fit]] for a two-dimensional [[data set]]. It differs from [[simple linear regression]] in that it accounts for [[errors and residuals in statistics|errors]] in observations along both the ''x''- and the ''y''-axes. It is a special case of [[total least squares]], which allows for any number of predictors and a more complicated error structure.
Deming regression is equivalent to the [[maximum likelihood]] estimation of an [[errors-in-variables model]] in which the errors for the two variables are assumed to be independent and [[normal distribution|normally distributed]], and the ratio of their variances, denoted ''δ'', is known.{{sfn|Linnet|1993}} In practice, this ratio might be estimated from related data sources; however, the regression procedure takes no account of possible errors in estimating it.
==Specification==
Assume that the available data <math>(y_i, x_i)</math> are measured observations of "true" values <math>(y_i^*, x_i^*)</math> which lie on the regression line, with errors <math>\varepsilon_i = y_i - y_i^*</math> and <math>\eta_i = x_i - x_i^*</math> of variances <math>\sigma_\varepsilon^2</math> and <math>\sigma_\eta^2</math>, and with known ratio <math>\delta = \sigma_\varepsilon^2 / \sigma_\eta^2</math>. We seek to find the line of "best fit"
: <math>y^* = \beta_0 + \beta_1 x^*,</math>
such that the weighted sum of squared residuals of the model is minimized:{{sfn|Fuller|1987|loc=Ch. 1.3.3}}
: <math>SSR = \sum_{i=1}^n\bigg(\frac{\varepsilon_i^2}{\sigma_\varepsilon^2} + \frac{\eta_i^2}{\sigma_\eta^2}\bigg) = \frac{1}{\sigma_\varepsilon^2}\sum_{i=1}^n\Big((y_i - \beta_0 - \beta_1 x_i^*)^2 + \delta(x_i - x_i^*)^2\Big)\ \rightarrow\ \min_{\beta_0,\,\beta_1,\,x_1^*,\ldots,x_n^*} SSR.</math>
See {{harvtxt|Jensen|2007}} for a full derivation.
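As a sketch of the key step (a standard profiling argument): for fixed <math>\beta_0, \beta_1</math> the objective is quadratic in each <math>x_i^*</math>, and minimizing over them yields
: <math>\hat{x}_i^* = x_i + \frac{\beta_1}{\beta_1^2+\delta}\left(y_i - \beta_0 - \beta_1 x_i\right),</math>
so that after substituting back the problem reduces to
: <math>SSR = \frac{1}{\sigma_\varepsilon^2}\,\frac{\delta}{\beta_1^2+\delta}\sum_{i=1}^n \left(y_i - \beta_0 - \beta_1 x_i\right)^2,</math>
a weighted least-squares criterion that is then minimized over <math>\beta_0</math> and <math>\beta_1</math> alone.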
Finally, writing <math>\overline{x}</math> and <math>\overline{y}</math> for the sample means, and <math>s_{xx}</math>, <math>s_{xy}</math>, <math>s_{yy}</math> for the sample variances and covariance of ''x'' and ''y'', the least-squares estimates of the model's parameters are{{sfn|Glaister|2001}}
: <math>\begin{align}
& \hat\beta_1 = \frac{s_{yy}-\delta s_{xx} + \sqrt{(s_{yy}-\delta s_{xx})^2 + 4\delta s_{xy}^2}}{2s_{xy}}, \\
& \hat\beta_0 = \overline{y} - \hat\beta_1\overline{x}, \\
& \hat{x}_i^* = x_i + \frac{\hat\beta_1}{\hat\beta_1^2+\delta}\left(y_i - \hat\beta_0 - \hat\beta_1 x_i\right).
\end{align}</math>
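These closed-form estimates translate directly into code. The following is a minimal sketch in Python using NumPy; the function name <code>deming</code> and its signature are illustrative, not any library's API, and it assumes <math>s_{xy} \neq 0</math>:
<syntaxhighlight lang="python">
import numpy as np

def deming(x, y, delta=1.0):
    """Closed-form Deming regression estimates (illustrative sketch).

    delta is the known error-variance ratio sigma_eps^2 / sigma_eta^2;
    delta = 1 gives orthogonal regression.  Assumes s_xy != 0.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    # Centered sums of squares and cross-products; the slope formula is
    # unchanged whether these are divided by n, n - 1, or not at all.
    sxx = np.sum((x - xbar) ** 2)
    syy = np.sum((y - ybar) ** 2)
    sxy = np.sum((x - xbar) * (y - ybar))
    # Slope: the positive root of the quadratic in beta_1 given above.
    b1 = (syy - delta * sxx
          + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b0 = ybar - b1 * xbar
    # Fitted "true" x-values projected onto the estimated line.
    x_star = x + b1 / (b1 ** 2 + delta) * (y - b0 - b1 * x)
    return b0, b1, x_star
</syntaxhighlight>
For instance, calling <code>deming(x, y, delta=1.0)</code> reproduces the orthogonal regression of the next section.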
==Orthogonal regression==
For the case of equal error variances, i.e., when <math>\delta=1</math>, Deming regression becomes '''orthogonal regression''': it minimizes the sum of squared [[distance from a point to a line|perpendicular distances from the data points to the regression line]]. In this case, denote each observation as a point <math>z_j = x_j +i y_j</math> in the [[complex plane]] (i.e., the point <math>(x_j, y_j)</math> where <math>i</math> is the [[imaginary unit]]). Denote as <math>S=\sum{(z_j - \overline z)^2}</math> the sum of the squared differences of the data points from the [[centroid]] <math>\overline z = \tfrac{1}{n} \sum z_j</math> (also denoted in complex coordinates), which is the point whose horizontal and vertical locations are the averages of those of the data points. Then:{{sfn|Minda|Phelps|2008|loc=Theorem 2.3}}
*If <math>S=0</math>, then every line through the centroid is a line of best orthogonal fit.
*If <math>S \neq 0</math>, the orthogonal regression line goes through the centroid and is parallel to the vector from the origin to <math>\sqrt{S}</math>.
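This characterisation translates directly into complex arithmetic. The following is an illustrative sketch (the function name is hypothetical, not a library routine), under the same notation as above:
<syntaxhighlight lang="python">
import numpy as np

def orthogonal_fit(x, y):
    """Orthogonal regression line via the complex-plane characterisation:
    the line passes through the centroid zbar and, when S != 0, is
    parallel to sqrt(S), where S = sum((z_j - zbar)^2)."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    zbar = z.mean()                # centroid as a complex number
    S = np.sum((z - zbar) ** 2)
    if S == 0:
        return zbar, None          # every line through the centroid fits
    return zbar, np.sqrt(S)        # centroid and a complex direction vector
</syntaxhighlight>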
A [[trigonometry|trigonometric]] representation of the orthogonal regression line was given by Coolidge in 1913.{{sfn|Coolidge|1913}} The [[Distance_from_a_point_to_a_line#Another_formula|perpendicular distance]] can also be calculated from the more familiar slope-intercept form of the line, <math>y=mx+k</math>.
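Explicitly, the perpendicular distance from a point <math>(x_0, y_0)</math> to the line <math>y = mx + k</math> is
: <math>d = \frac{|m x_0 - y_0 + k|}{\sqrt{m^2 + 1}}.</math>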
===Application===
In the case of three [[Line (geometry)|non-collinear]] points in the plane, the [[triangle]] with these points as its [[vertex (geometry)|vertices]] has a unique [[Steiner inellipse]] that is tangent to the triangle's sides at their midpoints. The [[Ellipse#Elements of an ellipse|major axis of this ellipse]] falls on the orthogonal regression line for the three vertices.{{sfn|Minda|Phelps|2008|loc=Corollary 2.4}} A biological cell's intrinsic [[cellular noise]] can be quantified by applying Deming regression to the observed behavior of a two-reporter [[synthetic biological circuit]].{{sfn|Quarton|2020}}
When humans are asked to draw a linear regression on a scatterplot by guessing, their answers are closer to orthogonal regression than to [[ordinary least squares]] regression.<ref>{{cite journal |last1=Ciccione |first1=Lorenzo |last2=Dehaene |first2=Stanislas |title=Can humans perform mental regression on a graph? Accuracy and bias in the perception of scatterplots |journal=Cognitive Psychology |date=August 2021 |volume=128 |pages=101406 |doi=10.1016/j.cogpsych.2021.101406|doi-access=free }}</ref>
== York regression ==
The York regression extends Deming regression by allowing correlated errors in ''x'' and ''y''.
* {{cite book|last=Fuller|first=Wayne A.|year=1987|title=Measurement error models|publisher=John Wiley & Sons, Inc|isbn=0-471-86187-1}}
* {{cite journal |last1 = Glaister | first1 = P. | year = 2001 | title = Least squares revisited | journal = [[The Mathematical Gazette]] | volume = 85 | pages = 104–107 | doi=10.2307/3620485| jstor = 3620485 | s2cid = 125949467 }}
* {{cite web |last=Jensen |first=Anders Christian |year=2007 |title=Deming regression, MethComp package |url= }}
* {{cite book |last=Koopmans |first=T. C. |year=1936 |title=Linear regression analysis of economic time series |publisher=De Erven F. Bohn |location=Haarlem, Netherlands}}
* {{cite journal |last=Linnet |first=K. |year=1993 |title=Evaluation of regression procedures for method comparison studies |journal=Clinical Chemistry |volume=39 |issue=3 |pages=424–432 }}