Local regression: Difference between revisions

Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of observations with a chosen set of weights). Two such rules are the 15-point and 21-point rules of [[John Spencer (Actuary)|Spencer]] (1904)<ref>{{citeQ|Q127775139}}</ref>. These graduation rules were carefully designed to have a quadratic-reproducing property: if the ungraduated values happen to follow a quadratic formula exactly, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show that ''any'' such graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights.
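The quadratic-reproducing property can be checked numerically. The sketch below uses the commonly tabulated weights of Spencer's 15-point rule (integer weights summing to 320) and applies them to values that follow a quadratic exactly; the graduated value at the window centre then equals the ungraduated value there. The particular quadratic is an arbitrary illustrative choice.

```python
import numpy as np

# Spencer's 15-point graduation rule, as commonly tabulated:
# a weighted moving average whose integer weights sum to 320.
w = np.array([-3, -6, -5, 3, 21, 46, 67, 74,
              67, 46, 21, 3, -5, -6, -3]) / 320

t = np.arange(-7, 8)                  # positions within the 15-point window
y = 2.0 - 3.0 * t + 0.5 * t**2        # ungraduated values lying exactly on a quadratic

graduated_centre = w @ y              # graduated value at the window centre (t = 0)
# Quadratic reproduction: graduated_centre equals y at t = 0, i.e. 2.0,
# because the weights sum to 1 and annihilate the t and t^2 terms.
```

A simple (unweighted) moving average over the same window would instead flatten the parabola, which is why these carefully designed weights were preferred for data with peaks and troughs.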
 
Further discussion of the historical work on graduation and local polynomial fitting can be found in [[Frederick Macaulay|Macaulay]] (1931),<ref name="mac1931">{{citeQ|Q134465853}}</ref> [[William S. Cleveland|Cleveland]] and [[Catherine Loader|Loader]] (1995),<ref name="slrpm">{{citeQ|Q132138257}}</ref> and [[Lori Murray|Murray]] and [[David Bellhouse (statistician)|Bellhouse]] (2019).<ref>{{cite Q|Q127772934}}</ref>
 
The [[Savitzky–Golay filter]], introduced by [[Abraham Savitzky]] and [[Marcel J. E. Golay]] (1964),<ref>{{cite Q|Q56769732}}</ref> significantly expanded the method. Like the earlier graduation work, their focus was data with an equally spaced predictor variable, where (excluding boundary effects) local regression can be represented as a [[convolution]]. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.
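The equivalence between a local polynomial fit and a convolution can be sketched directly: for an equally spaced predictor, the least-squares fit over each window assigns every observation a fixed weight in the fitted value at the window centre, and those weights form the convolution kernel. The window half-width and polynomial degree below are illustrative choices, not values taken from the 1964 paper.

```python
import numpy as np

# Derive convolution coefficients for a local polynomial fit on an
# equally spaced grid: fit a degree-d polynomial to each window of
# 2m + 1 points by least squares, and read off the weight each
# observation receives in the fitted value at the window centre.
m, degree = 2, 2                                 # 5-point window, quadratic fit
t = np.arange(-m, m + 1)                         # local predictor values
A = np.vander(t, degree + 1, increasing=True)    # design matrix: columns 1, t, t^2
# Fitted value at the centre (t = 0) is the intercept of the local fit,
# i.e. the first row of the pseudoinverse applied to the window's y-values.
weights = np.linalg.pinv(A)[0]

# Smoothing the interior of a series is then a single convolution
# (the weights are symmetric here, so no flipping is needed).
y = np.sin(np.linspace(0.0, 3.0, 50))
smoothed = np.convolve(y, weights, mode="valid")
```

For this window and degree the derived weights are (-3, 12, 17, 12, -3)/35, one of the coefficient sets tabulated by Savitzky and Golay; boundary points need separate handling, which is why the convolution above covers only the interior.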