==History and usage==
The MSE method was derived independently by Russel Cheng and Nik Amin at the [[Cardiff University|University of Wales Institute of Science and Technology]], and Bo Ranneby at the [[Swedish University of Agricultural Sciences]].<ref name = "R84" /> The authors explained that due to the [[probability integral transform]] at the true parameter, the “spacings” between consecutive observations should be uniformly distributed. This would imply that the differences between the values of the [[cumulative distribution function]] at consecutive observations should be equal. This is the case that maximizes the [[geometric mean]] of such spacings, so solving for the parameters that maximize the geometric mean would achieve the “best” fit as defined this way. {{harvtxt|Ranneby|1984}} justified the method by demonstrating that it is an estimator of the [[Kullback–Leibler divergence]], similar to [[maximum likelihood estimation]], but with more robust properties for some classes of problems.
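The following sketch, not drawn from the cited sources, illustrates the idea for a one-parameter exponential model: the spacings are the differences of the model CDF at consecutive order statistics, padded with 0 and 1 at the ends, and the estimate maximizes their mean logarithm (equivalently, their geometric mean). The choice of model, optimizer, and function names is an illustrative assumption.

<syntaxhighlight lang="python">
# A minimal sketch of maximum spacing estimation, assuming an exponential
# model with unknown rate; names and parameter values are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

def neg_mean_log_spacing(rate, x):
    """Negative mean log of the spacings D_i = F(x_(i)) - F(x_(i-1)),
    where F is the model CDF and the CDF values are padded with 0 and 1."""
    x = np.sort(x)
    cdf = expon.cdf(x, scale=1.0 / rate)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    # Guard against zero spacings (tied observations), which make log diverge.
    spacings = np.clip(spacings, 1e-300, None)
    return -np.mean(np.log(spacings))

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=200)   # true rate = 0.5

result = minimize_scalar(neg_mean_log_spacing, bounds=(1e-6, 10.0),
                         args=(sample,), method="bounded")
print("Maximum spacing estimate of the rate:", result.x)
</syntaxhighlight>

Maximizing the geometric mean of the spacings is equivalent to maximizing their mean logarithm, which is numerically more convenient, so the sketch minimizes the negative mean log spacing.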
There are certain distributions, especially those with three or more parameters, whose [[Likelihood#Likelihoods for continuous distributions|likelihoods]] may become infinite along certain paths in the [[parameter space]]. Using maximum likelihood to estimate these parameters often breaks down, with one parameter tending to the specific value that causes the likelihood to be infinite, rendering the other parameters inconsistent. The method of maximum spacings, however, being dependent on the difference between points on the cumulative distribution function and not individual likelihood points, does not have this issue, and will return valid results over a much wider array of distributions.<ref name = "CA83" />
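The sketch below, again purely illustrative and not taken from the cited sources, demonstrates this failure mode with a hypothetical three-parameter [[Weibull distribution|Weibull]] sample whose shape parameter is below one: as the location parameter approaches the smallest observation, the log-likelihood grows without bound, while the mean log spacing decreases, so the spacing criterion does not reward the degenerate solution.

<syntaxhighlight lang="python">
# Illustrative sketch (assumed setup): a three-parameter Weibull model with
# shape k < 1. Letting the location parameter approach the smallest
# observation drives the log-likelihood to infinity, but drives the first
# spacing to zero, so the mean log spacing decreases instead.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
sample = weibull_min.rvs(c=0.7, loc=5.0, scale=2.0, size=100, random_state=rng)
x = np.sort(sample)
k, scale = 0.7, 2.0           # shape and scale held fixed for illustration

for eps in (1e-1, 1e-3, 1e-6):
    loc = x[0] - eps          # location parameter approaching min(sample)
    loglik = np.sum(weibull_min.logpdf(x, c=k, loc=loc, scale=scale))
    cdf = weibull_min.cdf(x, c=k, loc=loc, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    mls = np.mean(np.log(np.clip(spacings, 1e-300, None)))
    print(f"loc = x_(1) - {eps:g}: log-likelihood = {loglik:.1f}, "
          f"mean log spacing = {mls:.2f}")
# The log-likelihood keeps increasing as eps shrinks, whereas the mean log
# spacing keeps decreasing, so the spacing criterion rejects this path.
</syntaxhighlight>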