{{short description|Moving average and polynomial regression method for smoothing data}}
[[Image:Loess curve.svg|thumb|LOESS curve fitted to a population sampled from a [[sine wave]] with uniform noise added. The LOESS curve approximates the original sine wave.]]
{{Regression bar}}
 
'''Local regression''' or '''local polynomial regression''',{{sfn|Fox|Weisberg|2018|loc=Appendix}} also known as '''moving regression''',{{sfn|Harrell|2015|p=29}} is a generalization of the [[moving average]] and [[polynomial regression]].{{sfn|Garimella|2017|p=}}
Its most common methods, initially developed for [[scatterplot smoothing]], are '''LOESS''' ('''locally estimated scatterplot smoothing''') and '''LOWESS''' ('''locally weighted scatterplot smoothing'''), both pronounced {{IPAc-en|ˈ|l|oʊ|ɛ|s}} {{respell|LOH|ess}}. They are two strongly related [[non-parametric regression]] methods that combine multiple regression models in a [[k-nearest neighbor algorithm|''k''-nearest-neighbor]]-based meta-model.
In some fields, LOESS is known and commonly referred to as the [[Savitzky–Golay filter]]<ref>{{Cite web|url=https://www.mathworks.com/help/signal/ref/sgolayfilt.html|title=Savitzky–Golay filtering – MATLAB sgolayfilt|website=Mathworks.com}}</ref><ref>{{Cite web|url=https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.signal.savgol_filter.html|title=scipy.signal.savgol_filter — SciPy v0.16.1 Reference Guide|website=Docs.scipy.org}}</ref> (proposed 15 years before LOESS).
 
LOESS and LOWESS thus build on [[classical statistics|"classical" methods]], such as linear and nonlinear [[least squares regression]]. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of [[Non-linear regression|nonlinear regression]]. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.
 
The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modelling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches.
 
A smooth curve through a set of data points obtained with this statistical technique is called a ''loess curve'', particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of the ''y''-axis [[scattergram]] criterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as a ''lowess curve.'' However, some authorities treat ''lowess'' and loess as synonyms.<ref>Kristen Pavlik, US Environmental Protection Agency, ''[https://19january2021snapshot.epa.gov/sites/static/files/2016-07/documents/loess-lowess.pdf Loess (or Lowess)]'', ''Nutrient Steps'', July 2016.</ref><ref name="NIST"/>
 
==History==
 
Local regression and closely related procedures have a long and rich history, having been discovered and rediscovered in different fields on multiple occasions. An early work by [[Robert Henderson (mathematician)|Robert Henderson]]<ref>Henderson, R. Note on Graduation by Adjusted Average. Actuarial Society of America Transactions 17, 43--48. [https://archive.org/details/transactions17actuuoft archive.org]</ref> studying the problem of graduation (a term for smoothing used in Actuarial literature) introduced local regression using cubic polynomials.
 
Specifically, let <math>Y_j</math> denote an ungraduated sequence of observations. Following Henderson, suppose that only the terms from <math>Y_{-h}</math> to <math>Y_h</math> are to be taken into account when computing the graduated value of <math>Y_0</math>, and <math>W_j</math> is the weight to be assigned to <math>Y_j</math>. Henderson then uses a local polynomial approximation <math>a + b j + c j^2 + d j^3</math>, and sets up the following four equations for the coefficients:
:<math>
\begin{align}
\sum_{j=-h}^h ( a + b j + c j^2 + d j^3) W_j &= \sum_{j=-h}^h W_j Y_j \\
\sum_{j=-h}^h ( aj + b j^2 + c j^3 + d j^4) W_j &= \sum_{j=-h}^h j W_j Y_j \\
\sum_{j=-h}^h ( aj^2 + b j^3 + c j^4 + d j^5) W_j &= \sum_{j=-h}^h j^2 W_j Y_j \\
\sum_{j=-h}^h ( aj^3 + b j^4 + c j^5 + d j^6) W_j &= \sum_{j=-h}^h j^3 W_j Y_j
\end{align}
</math>
Solving these equations for the polynomial coefficients yields the graduated value, <math>\hat Y_0 = a</math>.
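These four equations are simply the [[weighted least squares]] normal equations for a local cubic fit, and can be solved numerically. The following sketch (in Python with NumPy; the window of observations and the triangular weights are hypothetical examples, not values from Henderson's paper) builds and solves the system and returns the graduated value <math>a</math>:

<syntaxhighlight lang="python">
import numpy as np

def henderson_graduate(y_window, w_window):
    """Graduate the centre value of a window of 2h+1 observations by a local
    cubic fit, solving the four equations above; returns the coefficient a."""
    y = np.asarray(y_window, dtype=float)
    w = np.asarray(w_window, dtype=float)
    h = (len(y) - 1) // 2
    j = np.arange(-h, h + 1)

    # Design matrix of the local cubic a + b*j + c*j**2 + d*j**3.
    X = np.vander(j, 4, increasing=True)      # columns: 1, j, j^2, j^3
    W = np.diag(w)

    # The four equations are exactly (X^T W X) (a, b, c, d)^T = X^T W y.
    a, b, c, d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a                                  # graduated value: Y-hat_0 = a

# Hypothetical 7-point window with triangular weights.
print(henderson_graduate([3.0, 4.1, 5.2, 5.9, 6.8, 8.2, 9.1],
                         [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]))
</syntaxhighlight>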
 
Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of observations with a chosen set of weights). Two such rules are the 15-point and 21-point rules of [[John Spencer (Actuary)|Spencer]] (1904).<ref>{{citeQ|Q127775139}}</ref> These graduation rules were carefully designed to have a quadratic-reproducing property: if the ungraduated values exactly follow a quadratic formula, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show that ''any'' such graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights.
 
Further discussions of the historical work on graduation and local polynomial fitting can be found in [[Frederick Macaulay|Macaulay]] (1931),<ref name="mac1931">{{citeQ|Q134465853}}</ref> [[William S. Cleveland|Cleveland]] and [[Catherine Loader|Loader]] (1995);<ref name="slrpm">{{citeQ|Q132138257}}</ref> and [[Lori Murray|Murray]] and [[David Bellhouse (statistician)|Bellhouse]] (2019).<ref>{{cite Q|Q127772934}}</ref>
 
The [[Savitzky–Golay filter]], introduced by [[Abraham Savitzky]] and [[Marcel J. E. Golay]] (1964),<ref>{{cite Q|Q56769732}}</ref> significantly expanded the method. Like the earlier graduation work, their focus was data with an equally spaced predictor variable, where (excluding boundary effects) local regression can be represented as a [[convolution]]. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.
 
Local regression methods started to appear extensively in statistics literature in the 1970s; for example, [[Charles Joel Stone|Charles J. Stone]] (1977),<ref>{{cite Q|Q56533608}}</ref> [[Vladimir Katkovnik]] (1979)<ref>{{citation |first=Vladimir|last=Katkovnik|title=Linear and nonlinear methods of nonparametric regression analysis|journal=Soviet Automatic Control|date=1979|volume=12|issue=5|pages=25–34}}</ref> and [[William S. Cleveland]] (1979).<ref name="cleve79">{{cite Q|Q30052922}}</ref> Katkovnik (1985)<ref name="katbook">{{citeQ|Q132129931}}</ref> is the earliest book devoted primarily to local regression methods.
 
Theoretical work continued to appear throughout the 1990s. Important contributions include [[Jianqing Fan]] and [[Irène Gijbels]] (1992)<ref>{{cite Q|Q132202273}}</ref> studying efficiency properties, and [[David Ruppert]] and [[Matthew P. Wand]] (1994)<ref>{{cite Q|Q132202598}}</ref> developing an asymptotic distribution theory for multivariate local regression.
 
An important extension of local regression is Local Likelihood Estimation, formulated by [[Robert Tibshirani]] and [[Trevor Hastie]] (1987).<ref name="tib-hast-lle">{{cite Q|Q132187702}}</ref> This replaces the local least-squares criterion with a likelihood-based criterion, thereby extending the local regression method to the [[Generalized linear model]] setting; for example binary data, count data or censored data.
 
Practical implementations of local regression began appearing in statistical software in the 1980s. Cleveland (1981)<ref>{{cite Q|Q29541549}}</ref> introduces the LOWESS routines, intended for smoothing scatterplots. This implements local linear fitting with a single predictor variable, and also introduces robustness downweighting to make the procedure resistant to outliers. An entirely new implementation, LOESS, is described in Cleveland and [[Susan J. Devlin]] (1988).<ref name="clevedev">{{cite Q|Q29393395}}</ref> LOESS is a multivariate smoother, able to handle spatial data with two (or more) predictor variables, and uses (by default) local quadratic fitting. Both LOWESS and LOESS are implemented in the [[S (programming language)|S]] and [[R (programming language)|R]] programming languages. See also Cleveland's Local Fitting Software.<ref>{{cite web |last=Cleveland|first=William|title=Local Fitting Software|url=http://www.stat.purdue.edu/~wsc/localfitsoft.html|archive-url=https://web.archive.org/web/20050912090738/http://www.stat.purdue.edu/~wsc/localfitsoft.html |archive-date=12 September 2005 }}</ref>
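For readers who wish to experiment, the following is a minimal usage sketch in Python, using the LOWESS implementation in the statsmodels package (the noisy sine-wave data are hypothetical, echoing the example in the figure above; any comparable implementation could be substituted):

<syntaxhighlight lang="python">
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical data: a noisy sample from a sine wave.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.uniform(-0.5, 0.5, 200)

# frac is the smoothing parameter (fraction of points used in each local fit);
# it is the number of robustness iterations.
smoothed = lowess(y, x, frac=0.3, it=3)   # array of (x, fitted value) pairs
print(smoothed[:5])
</syntaxhighlight>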
 
Although the terms local regression, LOWESS, and LOESS are sometimes used interchangeably, this usage is imprecise: local regression is a general term for the fitting procedure, while LOWESS and LOESS are two distinct implementations of it.
 
==Model definition==
 
Local regression uses a [[data set]] consisting of observations of one or more ‘independent’ or ‘predictor’ variables, and a ‘dependent’ or ‘response’ variable. The data set consists of <math>n</math> observations. The observations of the predictor variable can be denoted <math>x_1,\ldots,x_n</math>, and the corresponding observations of the response variable by <math>Y_1,\ldots,Y_n</math>.
 
For ease of presentation, the development below assumes a single predictor variable; the extension to multiple predictors (when the <math>x_i</math> are vectors) is conceptually straightforward. A functional relationship between the predictor and response variables is assumed:
<math display="block">Y_i = \mu(x_i) + \epsilon_i</math>
where <math>\mu(x)</math> is the unknown ‘smooth’ regression function to be estimated, and represents the conditional expectation of the response, given a value of the predictor variables. In theoretical work, the ‘smoothness’ of this function can be formally characterized by placing bounds on higher order derivatives. The <math>\epsilon_i</math> are random errors; for estimation purposes they are assumed to have [[mean]] zero. Stronger assumptions (e.g., [[independence (probability theory)|independence]] and equal [[variance]]) may be made when assessing properties of the estimates.
 
Local regression then estimates the function <math>\mu(x)</math>, for one value of <math>x</math> at a time. Since the function is assumed to be smooth, the most informative data points are those whose <math>x_i</math> values are close to <math>x</math>. This is formalized with a bandwidth <math>h</math> and a [[kernel (statistics)|kernel]] or weight function <math>W(\cdot)</math>, with observations assigned weights
<math display="block">w_i(x) = W\left ( \frac{x_i-x}{h} \right ).</math>
A typical choice of <math>W</math>, used by Cleveland in LOWESS, is <math>W(u) = (1-|u|^3)^3</math> for <math>|u|<1</math>, although any similar function (peaked at <math>u=0</math> and small or 0 for large values of <math>u</math>) can be used. Questions of bandwidth selection and specification (how large should <math>h</math> be, and should it vary depending upon the fitting point <math>x</math>?) are deferred for now.
 
A local model (usually a low-order polynomial with degree <math>p \le 3</math>), expressed as
<math display="block">\mu(x_i) \approx \beta_0 + \beta_1(x_i-x) + \ldots + \beta_p(x_i-x)^p</math>
is then fitted by [[weighted least squares]]: choose regression coefficients
<math>(\hat \beta_0,\ldots,\hat\beta_p)</math> to minimize
<math display="block">
\sum_{i=1}^n w_i(x) \left ( Y_i - \beta_0 - \beta_1(x_i-x) - \ldots - \beta_p(x_i-x)^p \right )^2.
</math>
The local regression estimate of <math>\mu(x)</math> is then simply the intercept estimate:
<math display="block">\hat\mu(x) = \hat\beta_0</math>
while the remaining coefficients can be interpreted as derivative estimates: comparing the local model with a Taylor expansion of <math>\mu</math> about <math>x</math> shows that <math>\hat\beta_j</math> estimates <math>\mu^{(j)}(x)/j!</math>.
 
Note that the above procedure produces the estimate <math>\hat\mu(x)</math> for a single value of <math>x</math>. To evaluate the estimate at another value of <math>x</math>, a new set of weights <math>w_i(x)</math> must be computed, and the regression coefficients estimated afresh.
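The following sketch (Python with NumPy) illustrates the procedure just described; the tricube weight function, the fixed bandwidth and the simulated data are assumptions made for the example, not requirements of the method:

<syntaxhighlight lang="python">
import numpy as np

def tricube(u):
    """Weight function W(u) = (1 - |u|^3)^3 for |u| < 1, and 0 otherwise."""
    u = np.abs(u)
    return np.where(u < 1, (1 - u**3) ** 3, 0.0)

def local_fit(x0, x, y, h, degree=2):
    """Local polynomial estimate of mu(x0), with bandwidth h."""
    w = tricube((x - x0) / h)                            # weights w_i(x0)
    X = np.vander(x - x0, degree + 1, increasing=True)   # columns (x_i - x0)^j
    # Weighted least squares: scale rows by sqrt(w_i), then solve ordinary least squares.
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                       # intercept = mu-hat(x0)

# Each evaluation point gets its own weights and its own local fit.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)
grid = np.linspace(0, 10, 50)
mu_hat = np.array([local_fit(x0, x, y, h=1.5) for x0 in grid])
</syntaxhighlight>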
 
===Matrix representation of the local regression estimate===
 
As with all least squares estimates, the estimated regression coefficients can be expressed in closed form (see [[Weighted least squares]] for details):
<math display="block">\hat{\boldsymbol{\beta}} = (\mathbf{X^\textsf{T} W X})^{-1} \mathbf{X^\textsf{T} W} \mathbf{y} </math>
where <math>\hat{\boldsymbol{\beta}}</math> is a vector of the local regression coefficients;
<math>\mathbf{X}</math> is the <math>n \times (p+1)</math> [[design matrix]] with entries <math>(x_i-x)^j</math>; <math>\mathbf{W}</math> is a diagonal matrix of the smoothing weights <math>w_i(x)</math>; and <math>\mathbf{y}</math> is a vector of the responses <math>Y_i</math>.
 
This matrix representation is crucial for studying the theoretical properties of local regression estimates. With appropriate definitions of the design and weight matrices, it immediately generalizes to the multiple-predictor setting.
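In code, the closed form can be transcribed directly (a Python/NumPy sketch under the same assumptions as the previous example); in practice, solving the square-root-weighted least squares problem as above is usually preferred numerically to forming <math>\mathbf{X^\textsf{T} W X}</math> explicitly:

<syntaxhighlight lang="python">
import numpy as np

def local_fit_closed_form(x0, x, y, h, degree=2):
    """beta-hat = (X^T W X)^{-1} X^T W y for the local fit at x0."""
    w = np.clip(1 - np.abs((x - x0) / h) ** 3, 0, None) ** 3   # tricube weights
    X = np.vander(x - x0, degree + 1, increasing=True)         # design matrix
    W = np.diag(w)                                             # diagonal weight matrix
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                             # mu-hat(x0)
</syntaxhighlight>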
 
==Selection issues: bandwidth, local model, fitting criteria==
 
Implementation of local regression requires specification and selection of several components:
# The bandwidth, and more generally the localized subsets of the data.
# The degree of local polynomial, or more generally, the form of the local model.
# The choice of weight function <math>W(\cdot)</math>.
# The choice of fitting criterion (least squares or something else).
 
Each of these components has been the subject of extensive study; a summary is provided below.
 
===Localized subsets of data; Bandwidth===
 
The bandwidth <math>h</math> controls the resolution of the local regression estimate. If ''h'' is too small, the estimate may show high-resolution features that represent noise in the data, rather than any real structure in the mean function. Conversely, if ''h'' is too large, the estimate will only show low-resolution features, and important structure may be lost. This is the ''bias-variance tradeoff''; if ''h'' is too small, the estimate exhibits large variation; while at large ''h'', the estimate exhibits large bias.
 
Careful choice of bandwidth is therefore crucial when applying local regression. Mathematical methods for bandwidth selection require, firstly, formal criteria to assess the performance of an estimate. One such criterion is prediction error: if a new observation is made at <math>\tilde x</math>, how well does the estimate <math>\hat\mu(\tilde x)</math> predict the new response <math>\tilde Y</math>?
 
Performance is often assessed using a squared-error loss function. The mean squared prediction error is
<math display="block">
\begin{align}
E \left( \tilde Y - \hat\mu(\tilde x) \right )^2
&= E \left ( \tilde Y - \mu(\tilde x) + \mu(\tilde x) - \hat\mu(\tilde x) \right )^2 \\
&= E \left (\tilde Y - \mu(\tilde x) \right )^2
+ E \left ( \mu(\tilde x)-\hat\mu(\tilde x) \right )^2.
\end{align}
</math>
The first term <math>E \left (\tilde Y - \mu(\tilde x) \right )^2</math> is the random variation of the new observation; it does not depend on the local regression estimate. The second term, <math> E \left ( \mu(\tilde x)-\hat\mu(\tilde x) \right )^2</math>, is the mean squared estimation error. (The cross term vanishes because the new error <math>\tilde Y - \mu(\tilde x)</math> has mean zero and is independent of the estimate.) This relation shows that, for squared error loss, minimizing prediction error and estimation error are equivalent problems.
 
In global bandwidth selection, these measures can be integrated over the <math>x</math> space ("mean integrated squared error", often used in theoretical work), or averaged over the actual <math>x_i</math> (more useful for practical implementations). Some standard techniques from model selection can be readily adapted to local regression:
# [[cross-validation (statistics)|Cross Validation]], which estimates the mean-squared prediction error.
# [[Mallows's Cp|Mallows's ''C<sub>p</sub>'']] and the [[Akaike information criterion]], which estimate mean squared estimation error.
# Other methods which attempt to estimate the bias and variance components of the estimation error directly.
Any of these criteria can be minimized to produce an automatic bandwidth selector. Cleveland and Devlin<ref name="clevedev" /> prefer a graphical method (the ''M''-plot) to visually display the bias-variance trade-off and guide bandwidth choice.
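As an illustration of the first criterion, the following sketch (Python with NumPy) selects a constant bandwidth by leave-one-out cross-validation; the tricube weights, local linear fits, candidate bandwidths and simulated data are assumptions made for the example:

<syntaxhighlight lang="python">
import numpy as np

def loo_cv_score(h, x, y, degree=1):
    """Leave-one-out estimate of the mean squared prediction error for bandwidth h."""
    n = len(x)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i                                   # drop observation i
        w = np.clip(1 - np.abs((x[mask] - x[i]) / h) ** 3, 0, None) ** 3
        if w.sum() == 0:                                           # no neighbours within h
            continue
        X = np.vander(x[mask] - x[i], degree + 1, increasing=True)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y[mask] * sw, rcond=None)
        errors.append((y[i] - beta[0]) ** 2)                       # prediction is the intercept
    return np.mean(errors)

# Choose the bandwidth minimizing the cross-validation score (hypothetical data).
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 150))
y = np.sin(x) + rng.normal(0, 0.3, 150)
candidates = [0.5, 1.0, 1.5, 2.0, 3.0]
best_h = min(candidates, key=lambda h: loo_cv_score(h, x, y))
</syntaxhighlight>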
 
One question not addressed above is, how should the bandwidth depend upon the fitting point <math>x</math>? Often a constant bandwidth is used, while LOWESS and LOESS prefer a nearest-neighbor bandwidth, meaning ''h'' is smaller in regions with many data points. Formally, the smoothing parameter, <math>\alpha</math>, is the fraction of the total number ''n'' of data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises the <math>n\alpha</math> points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.<ref name="NIST">NIST, [http://www.itl.nist.gov/div898/handbook/pmd/section1/pmd144.htm "LOESS (aka LOWESS)"], section 4.1.4.4, ''NIST/SEMATECH e-Handbook of Statistical Methods,'' (accessed 14 April 2017)</ref> Useful values of the smoothing parameter typically lie in the range 0.25 to 0.5 for most LOESS applications.
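In code, a nearest-neighbor bandwidth is simply the distance from the fitting point to the <math>\lceil n\alpha \rceil</math>-th closest observation; a minimal sketch (Python with NumPy, with hypothetical names) is:

<syntaxhighlight lang="python">
import numpy as np

def nn_bandwidth(x0, x, alpha):
    """Distance from x0 to the ceil(n * alpha)-th closest observation."""
    k = int(np.ceil(alpha * len(x)))
    return np.sort(np.abs(x - x0))[k - 1]
</syntaxhighlight>

This bandwidth can then be used in place of a fixed <math>h</math> in the sketches above.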
 
More sophisticated methods attempt to choose the bandwidth ''adaptively''; that is, choose a bandwidth at each fitting point <math>x</math> by applying criteria such as cross-validation locally within the smoothing window. An early example of this is [[Jerome H. Friedman]]'s<ref>{{citation|first=Jerome H.|last=Friedman|title=A Variable Span Smoother|date=October 1984|publisher=Technical report, Laboratory for Computational Statistics LCS 5; SLAC PUB-3466|doi=10.2171/1447470|doi-broken-date=1 July 2025 |url=http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3477.pdf}}</ref> "supersmoother", which uses cross-validation to choose among local linear fits at different bandwidths.
 
===Degree of local polynomials===
Most sources, in both theoretical and computational work, use low-order polynomials as the local model, with polynomial degree ranging from 0 to 3.
 
The degree 0 (local constant) model is equivalent to a [[kernel smoother]]; usually credited to [[Èlizbar Nadaraya]] (1964)<ref>{{cite Q|Q29303512}}</ref> and [[G. S. Watson]] (1964).<ref>{{citation|last=Watson|first=G. S.|title=Smooth regression analysis|journal=Sankhya Series A|date=1964|volume=26|pages=359–372}}</ref> This is the simplest model to use, but can suffer from bias when fitting near boundaries of the dataset.
 
Local linear (degree 1) fitting can substantially reduce the boundary bias.
 
Local quadratic (degree 2) and local cubic (degree 3) fits can result in improved estimates, particularly when the underlying mean function <math>\mu(x)</math> has substantial curvature or, equivalently, a large second derivative.
 
In theory, higher orders of polynomial can lead to faster convergence of the estimate <math>\hat\mu(x)</math> to the true mean <math>\mu(x)</math>, ''provided that <math>\mu(x)</math> has a sufficient number of derivatives''. See C. J. Stone (1980).<ref>{{cite Q|Q132272803}}</ref> Generally, it takes a large sample size for this faster convergence to be realized. There are also computational and stability issues that arise, particularly for multivariate smoothing. It is generally not recommended to use local polynomials with degree greater than 3.
 
As with bandwidth selection, methods such as cross-validation can be used to compare the fits obtained with different degrees of polynomial.
 
===Weight function===
As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of the weights is based on the idea that points near each other in the explanatory variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points that are likely to follow the local model best influence the local model parameter estimates the most. Points that are less likely to actually conform to the local model have less influence on the local model [[Parameter#Statistics|parameter]] [[Statistical estimation|estimates]].
 
Cleveland (1979)<ref name="cleve79" /> sets out four requirements for the weight function:
# Non-negative: <math>W(x) > 0</math> for <math>|x| < 1</math>.
# Symmetry: <math>W(-x) = W(x)</math>.
# Monotone: <math>W(x)</math> is a nonincreasing function for <math>x \ge 0</math>.
# Bounded support: <math>W(x)=0</math> for <math>|x| \ge 1</math>.
 
Asymptotic efficiency of weight functions has been considered by [[V. A. Epanechnikov]] (1969)<ref>{{citeQ|Q57308723}}</ref> in the context of kernel density estimation; J. Fan (1993)<ref>{{citeQ|Q132691957}}</ref> has derived similar results for local regression. They conclude that the quadratic kernel, <math>W(x) = 1-x^2</math> for <math>|x|\le1</math> has greatest efficiency under a mean-squared-error loss function. See [[Kernel (statistics)#Kernel functions in common use|"kernel functions in common use"]] for more discussion of different kernels and their efficiencies.
 
Considerations other than MSE are also relevant to the choice of weight function. Smoothness properties of <math>W(x)</math> directly affect smoothness of the estimate <math>\hat\mu(x)</math>. In particular, the quadratic kernel is not differentiable at <math>x=\pm 1</math>, and <math>\hat\mu(x)</math> is not differentiable as a result.
The [[Kernel (statistics)#Kernel functions in common use|tri-cube weight function]],
<math display="block">W(x) = (1 - |x|^3)^3; |x|<1</math>
has been used in LOWESS and other local regression software; this combines higher-order differentiability with a high MSE efficiency.
 
One criticism of weight functions with bounded support is that they can lead to numerical problems (i.e. an unstable or singular design matrix) when fitting in regions with sparse data. For this reason, some authors{{who|date=April 2025}} choose to use the Gaussian kernel, or others with unbounded support.
 
===Choice of fitting criterion===
 
As described above, local regression uses a locally weighted least squares criterion to estimate the regression parameters. This inherits many of the advantages (ease of implementation and interpretation; good properties when errors are normally distributed) and disadvantages (sensitivity to extreme values and outliers; inefficiency when errors have unequal variance or are not normally distributed) usually associated with least squares regression.
 
These disadvantages can be addressed by replacing the local least-squares estimation by something else. Two such ideas are presented here: local likelihood estimation, which applies local estimation to the [[generalized linear model]], and robust local regression, which localizes methods from [[robust regression]].
 
====Local likelihood estimation====
 
In local likelihood estimation, developed in Tibshirani and Hastie (1987),<ref name="tib-hast-lle" /> the observations <math>Y_i</math> are assumed to come from a parametric family of distributions, with a known probability density function (or mass function, for discrete data),
<math display="block">
Y_i \sim f(y,\theta(x_i)),
</math>
where the parameter function <math>\theta(x)</math> is the unknown quantity to be estimated. To estimate <math>\theta(x)</math> at a particular point <math>x</math>, the local likelihood criterion is
<math display="block">
\sum_{i=1}^n w_i(x) \log f \left ( Y_i,
\beta_0 + \beta_1(x_i-x) + \ldots + \beta_p (x_i-x)^p \right ).
</math>
Estimates of the regression coefficients (in particular, <math>\hat\beta_0</math>) are obtained by maximizing the local likelihood criterion, and
the local likelihood estimate is
<math display="block">
\hat\theta(x) = \hat\beta_0.
</math>
 
When <math>f(y,\theta(x))</math> is the normal distribution and <math>\theta(x)</math> is the mean function, the local likelihood method reduces to the standard local least-squares regression. For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such as [[iteratively reweighted least squares]] must be used to compute the estimate.
 
''Example'' (local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability, <math>\mu(x_i) = \Pr (Y_i=1 | x_i)</math>. Since <math>\mu(x_i)</math> must be between 0 and 1, a local polynomial model should not be used for <math>\mu(x)</math> directly. Instead, the logistic transformation
<math display="block">
\theta(x) = \log \left ( \frac{\mu(x)}{1-\mu(x)} \right )
</math>
can be used; equivalently,
<math display="block">
\begin{align}
1-\mu(x) &= \frac{1}{1+e^{\theta(x)}} ;\\
\mu(x) &= \frac{e^{\theta(x)}}{1+e^{\theta(x)}}
\end{align}
</math>
and the mass function is
<math display="block">
f(Y_i,\theta(x_i)) = \frac{ e^{Y_i \theta(x_i)}}{1+e^{\theta(x_i)}}.
</math>
 
An asymptotic theory for local likelihood estimation is developed in J. Fan, [[Nancy E. Heckman]] and M.P.Wand (1995);<ref>{{cite Q|Q132508409}}</ref> the book Loader (1999)<ref name="loabook">{{citeQ|Q59410587}}</ref> discusses many more applications of local likelihood.
 
====Robust local regression====
 
To address the sensitivity to outliers, techniques from [[robust regression]] can be employed. In local [[M-estimator|M-estimation]], the local least-squares criterion is replaced by a criterion of the form
<math display="block">
\sum_{i=1}^n w_i(x) \rho \left (
\frac{Y_i-\beta_0 - \ldots - \beta_p(x_i-x)^p}{s}
\right )
</math>
where <math>\rho(\cdot)</math> is a robustness function and <math>s</math> is a scale parameter. Discussion of the merits of different choices of robustness function is best left to the [[robust regression]] literature. The scale parameter <math>s</math> must also be estimated. References for local M-estimation include Katkovnik (1985)<ref name="katbook">{{citeQ|Q132129931}}</ref> and [[Alexandre Tsybakov]] (1986).<ref>{{citation |first=Alexandre B.|last=Tsybakov|title=Robust reconstruction of functions by the local-approximation method.|journal=Problems of Information Transmission|date=1986|volume=22|pages=133–146}}</ref>
 
The robustness iterations in LOWESS and LOESS correspond to the robustness function defined by
<math display="block">
\rho'(u) = u (1-u^2/6)^2; |u|<1
</math>
and a robust global estimate of the scale parameter.
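For illustration, the following sketch (Python with NumPy) shows robustness iterations of this general type: residuals from the current fit are converted into bisquare robustness weights, and the local fits are repeated with the smoothing weights multiplied by those robustness weights. It is a simplified illustration in the spirit of LOWESS, not a reproduction of Cleveland's published algorithm; the bandwidth, polynomial degree and scaling constant are assumptions of the example.

<syntaxhighlight lang="python">
import numpy as np

def robust_local_fit(x, y, h, degree=1, iterations=3):
    """Local polynomial fit at every x_i, with bisquare robustness reweighting."""
    n = len(x)
    delta = np.ones(n)                           # robustness weights, initially 1
    fitted = np.zeros(n)
    for _ in range(iterations + 1):
        for i in range(n):
            # Smoothing weights (tricube) times current robustness weights.
            w = delta * np.clip(1 - np.abs((x - x[i]) / h) ** 3, 0, None) ** 3
            X = np.vander(x - x[i], degree + 1, increasing=True)
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            fitted[i] = beta[0]
        # Bisquare robustness weights from residuals, scaled by 6 * median absolute residual.
        residuals = y - fitted
        s = max(6.0 * np.median(np.abs(residuals)), 1e-12)
        delta = np.clip(1 - (residuals / s) ** 2, 0, None) ** 2
    return fitted
</syntaxhighlight>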
 
If <math>\rho(u)=|u|</math>, the local <math>L_1</math> criterion
<math display="block">
\sum_{i=1}^n w_i(x) \left | Y_i - \beta_0 - \ldots - \beta_p(x_i-x)^p \right |
</math>
results; this does not require a scale parameter. When <math>p=0</math>, this criterion is minimized by a locally weighted median; local <math>L_1</math> regression can be interpreted as estimating the ''median'', rather than ''mean'', response. Replacing the symmetric absolute-value loss with an asymmetrically weighted version yields local [[quantile regression]]. See [[Keming Yu]] and [[M. C. Jones (mathematician)|M.C. Jones]] (1998).<ref>{{citation |first1=Keming|last1=Yu|first2=M.C.|last2=Jones|title=Local Linear Quantile Regression|journal=Journal of the American Statistical Association|date=1998 |volume=93|issue=441 |pages=228–237|doi=10.1080/01621459.1998.10474104 }}</ref>
 
==Advantages==
As discussed above, the biggest advantage LOESS has over many other methods is that it does not require the specification of a function to fit a model to all of the data in the sample. Instead the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure.
 
Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of those is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models {{Citation needed|date=July 2011}}.
 
==Disadvantages==
LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting. Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.<ref name="NIST" />
 
Another disadvantage of LOESS is the fact that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people. In order to transfer the regression function to another person, they would need the data set and software for LOESS calculations. In [[nonlinear regression]], on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS can not be used for mechanistic modelling where fitted parameters specify particular physical properties of a system.
 
Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causal [[finite impulse response]] filter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative, [[robust statistics|robust]] version of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity to [[outliers]], but too many extreme outliers can still overcome even the robust method.
 
==Further reading==
 
Books substantially covering local regression and extensions:
* Macaulay (1931) "The Smoothing of Time Series",<ref name="mac1931">{{citeQ|Q134465853}}</ref> discusses graduation methods with several chapters related to local polynomial fitting.
* Katkovnik (1985) "Nonparametric Identification and Smoothing of Data"<ref name="katbook">{{citeQ|Q132129931}}</ref> in Russian.
* Fan and Gijbels (1996) "Local Polynomial Modelling and Its Applications".<ref>{{citeQ|Q134377589}}</ref>
* Loader (1999) "Local Regression and Likelihood".<ref name="loabook">{{citeQ|Q59410587}}</ref>
* Fotheringham, Brunsdon and Charlton (2002), "Geographically Weighted Regression"<ref name="gwrbook">{{citeQ|Q133002722}}</ref> (a development of local regression for spatial data).
 
Book chapters, Reviews:
* "Smoothing by Local Regression: Principles and Methods"<ref name="slrpm">{{citeQ|Q132138257}}</ref>
* "Local Regression and Likelihood", Chapter 13 of ''Observed Brain Dynamics'', Mitra and Bokil (2007)<ref>{{citeQ|Q57575432}}</ref>
* [[Rafael Irizarry (scientist)|Rafael Irizarry]], "Local Regression". Chapter 3 of "Applied Nonparametric and Modern Statistics".<ref>{{cite web|last=Irizarry|first=Rafael|title=Applied Nonparametric and Modern Statistics|url=https://rafalab.dfci.harvard.edu/pages/754/|access-date=2025-05-16}}</ref>
 
==See also==
*[[Degrees of freedom (statistics)#In non-standard regression]]
*[[Kernel regression]]
*[[Moving least squares]]
*[[Moving average]]
*[[Multivariate adaptive regression splines]]
*[[Non-parametric statistics]]
*[[Savitzky–Golay filter]]
*[http://research.stowers-institute.org/efg/R/Statistics/loess.htm The Loess function] in [[R_(programming_language)|R]]
*[[Segmented regression]]
 
==References==
===Citations===
{{reflist}}

===Sources===
{{refbegin|30em|indent=yes}}
*{{cite book|last1=Fox|first1=John |last2=Weisberg|first2=Sanford |title=An R Companion to Applied Regression|url=https://books.google.com/books?id=SfNrDwAAQBAJ|edition=3rd|date=2018|publisher=SAGE |isbn=978-1-5443-3645-9|chapter=Appendix: Nonparametric Regression in R|chapter-url=https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Nonparametric-Regression.pdf}}
*{{Cite report|title=A Simple Introduction to Moving Least Squares and Local Regression Estimation|first=Rao Veerabhadra|last=Garimella|date=22 June 2017|publisher=Los Alamos National Laboratory|osti=1367799|doi=10.2172/1367799}}
*{{cite book|last=Harrell |first=Frank E. Jr. |title=Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis|url=https://books.google.com/books?id=94RgCgAAQBAJ&pg=PA29|year=2015|publisher=Springer|isbn=978-3-319-19425-7}}
{{refend}}
 
==External links==
{{external links|date=November 2021}}
*[http://voteforamerica.net/editorials/Comments.aspx?ArticleId=28&ArticleName=Electoral+Projections+Using+LOESS Local Regression and Election Modeling]
*[http://www.stat.purdue.edu/~wsc/papers/localregression.principles.ps Smoothing by Local Regression: Principles and Methods (PostScript Document)]
*[http://www.itl.nist.gov/div898/handbook/pmd/section1/pmd144.htm NIST Engineering Statistics Handbook Section on LOESS]
*[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/loess.html R: Local Polynomial Regression Fitting] The Loess function in [[R (programming language)|R]]
*[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/lowess.html R: Scatter Plot Smoothing] The Lowess function in [[R (programming language)|R]]
*[http://peltiertech.com/WordPress/loess-smoothing-in-excel/ LOESS Smoothing in Excel]
*[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/supsmu.html The supsmu function] (Friedman's SuperSmoother) in R
*[http://www.r-statistics.com/2010/04/quantile-loess-combining-a-moving-quantile-window-with-loess-r-function/ Quantile LOESS] – A method to perform local regression on a ''quantile'' moving window (with R code)
*[http://fivethirtyeight.blogs.nytimes.com/2013/03/26/how-opinion-on-same-sex-marriage-is-changing-and-what-it-means/?hp Nate Silver, How Opinion on Same-Sex Marriage Is Changing, and What It Means] – sample of LOESS versus linear regression
 
{{NIST-PD}}
 
{{DEFAULTSORT:Local Regression}}
[[Category:Regression analysis]]
[[Category:Nonparametric regression]]