Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of observations with a chosen set of weights). Two such rules are the 15-point and 21-point rules of [[John Spencer (Actuary)|Spencer]] (1904)<ref>{{citeQ|Q127775139}}</ref>. These graduation rules were carefully designed to have a quadratic-reproducing property: if the ungraduated values happen to follow a quadratic formula exactly, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show that ''any'' such graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights.
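The quadratic-reproducing property can be checked numerically. The following sketch (an illustration using NumPy, not taken from Spencer's paper) applies Spencer's published 15-point weights to data lying exactly on a quadratic, and contrasts the result with a simple 15-point moving average:

```python
import numpy as np

# Spencer's 15-point graduation rule: symmetric convolution weights
# (the published integer coefficients, divided by their sum of 320).
spencer = np.array([-3, -6, -5, 3, 21, 46, 67, 74,
                    67, 46, 21, 3, -5, -6, -3]) / 320.0

t = np.arange(50, dtype=float)
y = 2.0 + 3.0 * t + 0.5 * t**2        # data exactly following a quadratic

# Graduate the interior points (boundary effects excluded).
graduated = np.convolve(y, spencer, mode="valid")
interior = y[7:-7]

# Quadratic-reproducing: graduated values equal the ungraduated values.
print(np.allclose(graduated, interior))    # True

# A simple 15-point moving average, by contrast, biases the quadratic
# upward by a constant (here 0.5 * mean(j^2) for j = -7..7).
moving_avg = np.convolve(y, np.ones(15) / 15.0, mode="valid")
print(np.allclose(moving_avg, interior))   # False
```

The same check fails for any rule whose weighted second moment is nonzero, which is why the summation-formula rules were designed with vanishing low-order moments.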
Further discussion of the historical work on graduation and local polynomial fitting can be found in [[Frederick Macaulay|Macaulay]] (1931),<ref name="mac1931">{{citeQ|Q134465853}}</ref> [[William S. Cleveland|Cleveland]] and [[Catherine Loader|Loader]] (1995),<ref>{{cite Q|Q132138257}}</ref> and [[Lori Murray|Murray]] and [[David Bellhouse (statistician)|Bellhouse]] (2019).<ref>{{cite Q|Q127772934}}</ref>
The [[Savitzky-Golay filter]], introduced by [[Abraham Savitzky]] and [[Marcel J. E. Golay]] (1964),<ref>{{cite Q|Q56769732}}</ref> significantly expanded the method. Like the earlier graduation work, their focus was data with an equally spaced predictor variable, where (excluding boundary effects) local regression can be represented as a [[convolution]]. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.
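For equally spaced data, the correspondence between a local polynomial fit and a fixed convolution can be seen with SciPy's implementation of the Savitzky–Golay filter (a modern implementation, shown here for illustration rather than Savitzky and Golay's original tables):

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

# The local quadratic fit over a 15-point window reduces to one fixed
# set of convolution weights, which sum to one.
coeffs = savgol_coeffs(15, 2)
print(round(coeffs.sum(), 10))   # 1.0

# Like the graduation rules, the filter reproduces polynomials up to
# the chosen order: a quadratic passes through unchanged.
t = np.arange(40, dtype=float)
y = 1.0 - 2.0 * t + 0.25 * t**2
smoothed = savgol_filter(y, window_length=15, polyorder=2)
print(np.allclose(smoothed, y))  # True
```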
</math>
An asymptotic theory for local likelihood estimation is developed in J. Fan, [[Nancy E. Heckman]] and M. P. Wand (1995);<ref>{{cite Q|Q132508409}}</ref> the book by Loader (1999)<ref name="loabook">{{cite Q|Q59410587}}</ref> discusses many more applications of local likelihood.
====Robust local regression====
Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can be phrased as a non-causal [[finite impulse response]] filter). Like other least-squares methods, LOESS is also prone to the effects of outliers in the data set. There is an iterative, [[robust statistics|robust]] version of LOESS [Cleveland (1979)] that reduces the method's sensitivity to [[outliers]], but too many extreme outliers can still overcome even the robust method.
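The effect of the robustness iterations can be sketched with the LOWESS implementation in statsmodels (an independent implementation of Cleveland's 1979 procedure; the fraction 0.3 and iteration counts below are illustrative choices, not prescribed values):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 101)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
y[50] += 10.0                       # a single extreme outlier

# it=0: plain local regression; it=3: three robustness iterations,
# which downweight points with large residuals.
plain  = lowess(y, x, frac=0.3, it=0, return_sorted=False)
robust = lowess(y, x, frac=0.3, it=3, return_sorted=False)

truth = np.sin(x)
print("plain max error: ", np.abs(plain - truth).max())
print("robust max error:", np.abs(robust - truth).max())
```

The plain fit is pulled visibly toward the outlier near its location, while the robust iterations assign it near-zero weight; with many extreme outliers in one neighbourhood, however, even the robust fit breaks down.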
==Further reading==
Books substantially covering local regression and extensions:
* Macaulay (1931), "The Smoothing of Time Series",<ref name="mac1931">{{citeQ|Q134465853}}</ref> discusses graduation methods, with several chapters related to local polynomial fitting.
* Katkovnik (1985), "Nonparametric Identification and Smoothing of Data"<ref name="katbook">{{citeQ|Q132129931}}</ref> (in Russian).
* Fan and Gijbels (1996) "Local Polynomial Modelling and Its Applications".<ref>{{citeQ|Q134377589}}</ref>
* Loader (1999) "Local Regression and Likelihood".<ref name="loabook">{{citeQ|Q59410587}}</ref>
* Fotheringham, Brunsdon and Charlton (2002), "Geographically Weighted Regression"<ref name="gwrbook">{{citeQ|Q133002722}}</ref> (a development of local regression for spatial data).
Book chapters and reviews:
* Cleveland and Loader (1995), "Smoothing by Local Regression: Principles and Methods".<ref name="slrpm">{{citeQ|Q132138257}}</ref>
* "Local Regression and Likelihood", Chapter 13 of ''Observed Brain Dynamics'', Mitra and Bokil (2007)<ref>{{citeQ|Q57575432}}</ref>
==See also==