{{machine learning bar}}
In [[machine learning]], a '''probabilistic classifier''' is a [[statistical classification|classifier]] that is able to predict, given an observation of an input, a [[probability distribution]] over a [[Set (mathematics)|set]] of classes, rather than only outputting the most likely class that the observation should belong to. Probabilistic classifiers provide classification that can be useful in its own right.<ref>{{cite book |first1=Trevor |last1=Hastie |first2=Robert |last2=Tibshirani |first3=Jerome |last3=Friedman |year=2009 |title=The Elements of Statistical Learning |url=http://statweb.stanford.edu/~tibs/ElemStatLearn/ |page=348 |quote=[I]n [[data mining]] applications the interest is often more in the class probabilities <math>p_\ell(x), \ell = 1, \dots, K</math> themselves, rather than in performing a class assignment. |url-status=dead}}</ref>
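A probabilistic classifier thus exposes the full class distribution in addition to a hard label. As a minimal sketch (the choice of [[scikit-learn]] and of logistic regression here is purely illustrative, not part of the definition):

<syntaxhighlight lang="python">
# A probabilistic classifier returns a distribution over classes,
# not just the single most likely label.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict(X[:1]))        # hard assignment: one class label
print(clf.predict_proba(X[:1]))  # probability for each of the 3 classes
</syntaxhighlight>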
==Types of classification==
==Probability calibration==
Not all classification models are naturally probabilistic, and some that are, notably naive Bayes classifiers, [[decision tree learning|decision trees]] and [[Boosting (machine learning)|boosting]] methods, produce distorted class probability distributions.<ref name="Niculescu">{{cite conference |last1=Niculescu-Mizil |first1=Alexandru |first2=Rich |last2=Caruana |title=Predicting good probabilities with supervised learning |conference=ICML |year=2005 |url=http://machinelearning.wustl.edu/mlpapers/paper_files/icml2005_Niculescu-MizilC05.pdf |doi=10.1145/1102351.1102430 |url-status=dead}}</ref>
[[File:Calibration plot.png|thumb|An example calibration plot]] Calibration can be assessed using a '''calibration plot''' (also called a '''reliability diagram''').<ref name="Niculescu" /><ref>{{Cite web|url=https://jmetzen.github.io/2015-04-14/calibration.html|title=Probability calibration|website=jmetzen.github.io|access-date=2019-06-18}}</ref> A calibration plot shows the proportion of items in each class for bands of predicted probability or score (such as a distorted probability distribution or the "signed distance to the hyperplane" in a support vector machine). Deviations from the identity function indicate a poorly calibrated classifier whose predicted probabilities or scores cannot be used as probabilities. In this case one can use a method to turn these scores into properly [[Calibration (statistics)|calibrated]] class membership probabilities.
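Constructing a reliability diagram amounts to binning the predicted probabilities and comparing each bin's mean prediction with the observed frequency of the positive class. A sketch for the binary case (the bin count of 10 is an arbitrary choice; scikit-learn provides an equivalent <code>sklearn.calibration.calibration_curve</code>):

<syntaxhighlight lang="python">
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Points of a binary calibration plot: (observed frequency,
    mean predicted probability) for each non-empty probability bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.digitize(y_prob, edges[1:-1])     # bin index per prediction
    prob_true, prob_pred = [], []
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            prob_true.append(y_true[mask].mean())  # observed frequency
            prob_pred.append(y_prob[mask].mean())  # mean predicted prob.
    return np.array(prob_true), np.array(prob_pred)

# A well-calibrated classifier yields points near the diagonal
# prob_true == prob_pred.
</syntaxhighlight>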
For the [[binary classification|binary]] case, a common approach is to apply [[Platt scaling]], which learns a [[logistic regression]] model on the scores.<ref name="platt99">{{cite journal |last=Platt |first=John |year=1999 |title=Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods |journal=Advances in Large Margin Classifiers |volume=10 |issue=3 |pages=61–74}}</ref>
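In essence, Platt scaling fits a sigmoid to the raw scores of an uncalibrated classifier on held-out data. A sketch under assumed synthetic data, with an SVM as the base scorer (Platt's original procedure additionally smooths the 0/1 targets, which is omitted here; scikit-learn's <code>CalibratedClassifierCV</code> with <code>method="sigmoid"</code> offers a ready-made version):

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, random_state=0)

svm = LinearSVC().fit(X_fit, y_fit)              # uncalibrated scorer
scores = svm.decision_function(X_cal).reshape(-1, 1)

platt = LogisticRegression().fit(scores, y_cal)  # sigmoid on the scores
probs = platt.predict_proba(scores)[:, 1]        # calibrated P(y=1 | x)
</syntaxhighlight>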
An alternative method using [[isotonic regression]]<ref>{{Cite book | last1 = Zadrozny | first1 = Bianca| last2 = Elkan | first2 = Charles| doi = 10.1145/775047.775151 | chapter = Transforming classifier scores into accurate multiclass probability estimates | chapter-url = http://www.cs.cornell.edu/courses/cs678/2007sp/ZadroznyElkan.pdf| title = Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '02 | pages = 694–699| year = 2002 | isbn = 978-1-58113-567-1}}</ref> is generally superior to Platt's method when sufficient training data is available.
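Isotonic calibration replaces the sigmoid with an arbitrary non-decreasing map from scores to probabilities. A sketch on assumed synthetic scores (<code>CalibratedClassifierCV</code> with <code>method="isotonic"</code> is the corresponding ready-made scikit-learn option):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=200)                  # raw classifier scores
y_cal = (scores + rng.normal(size=200) > 0).astype(int)

# Learn a non-decreasing, piecewise-constant map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y_cal)
probs = iso.predict(scores)                    # calibrated P(y=1 | x)
</syntaxhighlight>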
In the [[multiclass classification|multiclass]] case, one can use a reduction to binary tasks, followed by univariate calibration with an algorithm as described above and further application of the pairwise coupling algorithm by Hastie and Tibshirani.<ref>{{Cite journal | last1 = Hastie | first1 = Trevor| last2 = Tibshirani | first2 = Robert| doi = 10.1214/aos/1028144844 | title = Classification by pairwise coupling | journal = [[The Annals of Statistics]] | volume = 26 | issue = 2 | pages = 451–471| year = 1998}}</ref>
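Pairwise coupling combines calibrated pairwise estimates <math>r_{ij} \approx P(\text{class } i \mid \text{class } i \text{ or } j)</math> into a single distribution over all <math>K</math> classes via an iterative multiplicative update. A sketch of that iteration, assuming equal sample counts for all class pairs (Hastie and Tibshirani weight each pair by its number of training examples):

<syntaxhighlight lang="python">
import numpy as np

def pairwise_coupling(r, n_iter=100, tol=1e-8):
    """Combine pairwise estimates r[i, j] ~ P(i | i or j), with
    r[j, i] == 1 - r[i, j], into one distribution over K classes."""
    K = r.shape[0]
    p = np.full(K, 1.0 / K)                        # start from uniform
    for _ in range(n_iter):
        p_prev = p.copy()
        for i in range(K):
            others = np.arange(K) != i
            mu = p[i] / (p[i] + p[others])         # model's pairwise estimates
            p[i] *= r[i, others].sum() / mu.sum()  # multiplicative update
            p /= p.sum()                           # renormalise
        if np.abs(p - p_prev).max() < tol:
            break
    return p
</syntaxhighlight>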
==Evaluating probabilistic classification==