Tree models where the target variable can take a discrete set of values are called '''[[Statistical classification|classification]] [[decision tree|trees]]'''; in these tree structures, [[leaf node|leaves]] represent class labels and branches represent [[Logical conjunction|conjunction]]s of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically [[real numbers]]) are called '''[[regression analysis|regression]] [[decision tree|trees]]'''. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.<ref name=":1">{{Cite journal |last1=Studer |first1=Matthias |last2=Ritschard |first2=Gilbert |last3=Gabadinho |first3=Alexis |last4=Müller |first4=Nicolas S. |date=2011 |title=Discrepancy Analysis of State Sequences |url=http://journals.sagepub.com/doi/10.1177/0049124111415372 |journal=Sociological Methods & Research |language=en |volume=40 |issue=3 |pages=471–510 |doi=10.1177/0049124111415372 |s2cid=13307797 |issn=0049-1241}}</ref>
Decision trees are among the most popular machine learning algorithms because of their intelligibility and simplicity: they produce models that are easy to interpret and visualize, even for users without a statistical background.<ref>{{Cite journal|last1=Wu|first1=Xindong|last2=Kumar|first2=Vipin|last3=Ross Quinlan|first3=J.|last4=Ghosh|first4=Joydeep|last5=Yang|first5=Qiang|last6=Motoda|first6=Hiroshi|last7=McLachlan|first7=Geoffrey J.|last8=Ng|first8=Angus|last9=Liu|first9=Bing|last10=Yu|first10=Philip S.|last11=Zhou|first11=Zhi-Hua|date=2008-01-01|title=Top 10 algorithms in data mining|journal=Knowledge and Information Systems|language=en|volume=14|issue=1|pages=1–37|doi=10.1007/s10115-007-0114-2|s2cid=2367747|issn=0219-3116|hdl=10983/15329|hdl-access=free}}</ref>
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and [[decision making]]. In [[data mining]], a decision tree describes data (but the resulting classification tree can be an input for decision making).
|isbn=978-9814590075
|s2cid=44697571
}}</ref> The goal is to create a model that predicts the value of a target variable based on several input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all of the input [[Feature (machine learning)|feature]]s have finite discrete domains, and there is a single target feature called the "classification". Each element of the ___domain of the classification is called a ''class''.
The dependent variable, <math>Y</math>, is the target variable that we are trying to understand, classify or generalize. The vector <math>\textbf{x}</math> is composed of the features, <math>x_1, x_2, x_3</math> etc., that are used for that task.
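To make the roles of the target <math>Y</math> and the feature vector <math>\textbf{x}</math> concrete, the following minimal sketch fits a classification tree with the scikit-learn library; the toy weather-style data, the integer encoding, and the column names are invented for this illustration rather than taken from any source discussed here.

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: each row is a feature vector x = (outlook, humidity), encoded as integers,
# and y holds the class label ("play" = 1, "don't play" = 0).
X = [
    [0, 0],  # sunny, normal humidity
    [0, 1],  # sunny, high humidity
    [1, 0],  # overcast, normal humidity
    [2, 1],  # rainy, high humidity
]
y = [1, 0, 1, 0]

# Fit a classification tree; leaves hold class labels,
# internal nodes hold tests on single features.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# Print the learned tree as nested if/else rules.
print(export_text(clf, feature_names=["outlook", "humidity"]))

# Predict the class for a new feature vector (here: overcast, high humidity).
print(clf.predict([[1, 1]]))
</syntaxhighlight>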
[[File:Cart tree kyphosis.png|thumb|
alt=Three different representations of a regression tree of kyphosis data|
An example tree which estimates the probability of kyphosis after spinal surgery, given the age of the patient and the vertebra at which surgery was started. The same tree is shown in three different ways.
]]
== Decision tree types ==
Decision trees used in [[data mining]] are of two main types:
* '''[[Classification tree]]''' analysis is used when the predicted outcome is the (discrete) class to which the data belong.
* '''Regression tree''' analysis is used when the predicted outcome can be considered a real number (e.g. the price of a house, or a patient's length of stay in a hospital).
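In software, this distinction corresponds to separate estimator types. The sketch below, using scikit-learn on made-up data, fits both kinds of tree on the same inputs so the discrete and continuous outputs can be compared; the values and column meanings are illustrative only.

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[1], [2], [3], [10], [11], [12]]                     # a single numeric feature
y_class = ["low", "low", "low", "high", "high", "high"]   # discrete class labels
y_value = [1.1, 0.9, 1.0, 9.8, 10.2, 10.0]                # continuous target

# Classification tree: each leaf predicts a class label.
clf = DecisionTreeClassifier(random_state=0).fit(X, y_class)
print(clf.predict([[2.5], [11.5]]))   # -> ['low' 'high']

# Regression tree: each leaf predicts a real number (the mean of that leaf's targets).
reg = DecisionTreeRegressor(random_state=0).fit(X, y_value)
print(reg.predict([[2.5], [11.5]]))   # -> approximately [1.0, 10.0]
</syntaxhighlight>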
Some techniques, often called ''ensemble'' methods, construct more than one decision tree:
* '''[[Gradient boosted trees|Boosted trees]]''' incrementally build an ensemble by training each new tree to emphasize the training instances previously mis-modeled; a typical example is [[AdaBoost]]. These can be used for regression-type and classification-type problems (compared in the sketch after this list).<ref>Friedman, J. H. (1999). ''[https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf Stochastic gradient boosting] {{Webarchive|url=https://web.archive.org/web/20181128041212/https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf |date=2018-11-28 }}.'' Stanford University.</ref><ref>Hastie, T., Tibshirani, R., Friedman, J. H. (2001). ''The elements of statistical learning: Data mining, inference, and prediction.'' New York: Springer Verlag.</ref>
* '''Committees of decision trees''' (also called k-DT<ref>Heath, D., Kasif, S. and Salzberg, S. (1993). ''k-DT: A multi-tree learning method.'' In ''Proceedings of the Second International Workshop on Multistrategy Learning'', pp. 138–149.</ref>), an early method that used randomized decision tree algorithms to generate multiple different trees from the training data and then combined them by majority voting.
* '''[[Bootstrap aggregating|Bootstrap aggregated]]''' (or bagged) decision trees, an early ensemble method, build multiple decision trees by repeatedly [[Bootstrapping (statistics)|resampling training data with replacement]] and having the trees vote for a consensus prediction.<ref>{{cite journal |last=Breiman |first=L. |year=1996 |title=Bagging Predictors |journal=Machine Learning |volume=24 |issue=2 |pages=123–140 |doi=10.1007/BF00058655 |doi-access=free }}</ref>
** A '''[[random forest]]''' classifier is a specific type of [[bootstrap aggregating]] in which, in addition to resampling the training data, each split considers only a random subset of the features.
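As a rough illustration of the ensemble methods listed above, the sketch below compares a single tree with bagged, random-forest, and boosted ensembles using scikit-learn; the synthetic dataset, the cross-validation setup, and all hyperparameter values are arbitrary choices for the example, not recommendations.

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,       # boosting: each new tree emphasizes previously mis-modeled instances
    BaggingClassifier,        # bootstrap aggregating of independently trained trees
    RandomForestClassifier,   # bagging plus a random subset of features at each split
)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosted trees": AdaBoostClassifier(n_estimators=100, random_state=0),
}

# Cross-validated accuracy of each model on the same data.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
</syntaxhighlight>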
* [[C4.5 algorithm|C4.5]] (successor of ID3)
* CART (Classification And Regression Tree)<ref name="bfos" />
* OC1 (Oblique classifier 1). First method that created multivariate splits at each node.<ref>{{Cite journal | doi = 10.1613/jair.63 | last1 = Murthy | first1 = S. K. | year = 1994 | title = A System for Induction of Oblique Decision Trees | journal = Journal of Artificial Intelligence Research | volume = 2 | issue = 1 | pages = 1–32 }}</ref>
* [[Chi-square automatic interaction detection]] (CHAID). Performs multi-level splits when computing classification trees.<ref>{{Cite journal | doi = 10.2307/2986296 | last1 = Kass | first1 = G. V. | year = 1980 | title = An exploratory technique for investigating large quantities of categorical data | jstor = 2986296| journal = Applied Statistics | volume = 29 | issue = 2| pages = 119–127 }}</ref><ref>{{Cite journal|last1=Biggs|first1=David|last2=De Ville|first2=Barry|last3=Suen|first3=Ed|date=1991|title=A method of choosing multiway partitions for classification and decision trees|url=https://doi.org/10.1080/02664769100000005|journal=Journal of Applied Statistics|volume=18|issue=1|pages=49–62|doi=10.1080/02664769100000005|bibcode=1991JApSt..18...49B |issn=0266-4763|url-access=subscription}}</ref><ref>Ritschard, G. (2013), "CHAID and Earlier Supervised Tree Methods", in J.J. McArdle and G. Ritschard (eds), ''Contemporary Issues in Exploratory Data Mining in the Behavioral Sciences'', Quantitative Methodology Series, New York: Routledge, pages 48–74. [https://www.researchgate.net/publication/315476407_CHAID_and_Earlier_Supervised_Tree_Methods Preprint]</ref>
* [[Multivariate adaptive regression splines|MARS]]: extends decision trees to handle numerical data better.
* Conditional Inference Trees. Statistics-based approach that uses non-parametric tests as splitting criteria, corrected for multiple testing to avoid overfitting. This approach results in unbiased predictor selection and does not require pruning.<ref name="Hothorn2006">{{Cite journal | doi = 10.1198/106186006X133933 | last1 = Hothorn | first1 = T.| last2 = Hornik | first2 = K.| last3 = Zeileis | first3 = A.| year = 2006 | title = Unbiased Recursive Partitioning: A Conditional Inference Framework | jstor = 27594202| journal = Journal of Computational and Graphical Statistics | volume = 15 | issue = 3| pages = 651–674 | citeseerx = 10.1.1.527.2935 | s2cid = 6074128 }}</ref><ref name="Strobl2009">{{Cite journal | doi = 10.1037/a0016973 | pmid = 19968396 | pmc = 2927982 | last1 = Strobl | first1 = C.| last2 = Malley | first2 = J.| last3 = Tutz | first3 = G.| year = 2009 | title = An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests | journal = Psychological Methods | volume = 14 | issue = 4| pages = 323–348 }}</ref>
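Neither CHAID nor conditional inference trees ship with scikit-learn, but the core idea of choosing splits with a statistical test can be illustrated in a few lines. The sketch below uses SciPy's chi-squared test on an invented contingency table and is only a simplified illustration of the split-selection step, not a full implementation of either algorithm.

<syntaxhighlight lang="python">
from scipy.stats import chi2_contingency

# Invented contingency table: rows are levels of a candidate categorical predictor,
# columns are the two classes of the target variable.
#             class 0, class 1
table = [
    [30, 10],   # predictor level A
    [12, 28],   # predictor level B
    [20, 20],   # predictor level C
]

# Chi-squared test of independence between the candidate predictor and the target.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# A test-based tree splits on the predictor with the smallest (multiplicity-adjusted)
# p-value and stops splitting when no test is significant, so no pruning is needed.
alpha = 0.05
if p_value < alpha:
    print("Split on this predictor")
else:
    print("Do not split; stop growing this branch")
</syntaxhighlight>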
Boosted ensembles of FDTs have recently been investigated as well, and they have shown performance comparable to that of other very efficient fuzzy classifiers.<ref name="Barsacchi2020">{{Cite journal | url=http://www.sciencedirect.com/science/article/pii/S0957417420302608 | doi=10.1016/j.eswa.2020.113436|
title=An analysis of boosted ensembles of binary fuzzy decision trees| journal=Expert Systems with Applications|
volume=154| year=2020| last1=Barsacchi| first1=M.|last2=Bechini| first2=A.|last3=Marcelloni| first3=F.| page=113436| s2cid=216369273| url-access=subscription}}</ref>
==Metrics==
===Variance reduction===
Introduced in CART,<ref name="bfos"/> variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node <math>N</math> is defined as the total reduction of the variance of the target variable <math>Y</math> due to the split at this node:

:<math>
I_V(N) = \frac{1}{|S|^2}\sum_{i\in S} \sum_{j\in S} \frac{1}{2}(y_i - y_j)^2 - \left(\frac{|S_t|^2}{|S|^2}\frac{1}{|S_t|^2}\sum_{i\in S_t} \sum_{j\in S_t} \frac{1}{2}(y_i - y_j)^2 + \frac{|S_f|^2}{|S|^2}\frac{1}{|S_f|^2}\sum_{i\in S_f} \sum_{j\in S_f} \frac{1}{2}(y_i - y_j)^2\right),
</math>

where <math>S</math>, <math>S_t</math>, and <math>S_f</math> are the set of presplit sample indices, the set of sample indices for which the split test is true, and the set of sample indices for which the split test is false, respectively. Each of the above summands is indeed a [[variance]] estimate, written in a form that avoids directly referring to the mean.
By replacing <math>(y_i - y_j)^2</math> in the formula above with the dissimilarity <math>d_{ij}</math> between two objects <math>i</math> and <math>j</math>, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.<ref name=":1" />
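To make the criterion concrete, the following sketch evaluates the formula above on a small invented sample and then repeats the computation from a precomputed dissimilarity matrix, as in the generalization just described; the data, the candidate split, and the helper-function names are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

def half_pairwise_sum(d):
    """Sum of d[i, j] / 2 over all ordered pairs (i, j)."""
    return 0.5 * np.sum(d)

def variance_reduction(y, mask):
    """Evaluate I_V(N) for splitting y into S_t (mask True) and S_f (mask False)."""
    d = (y[:, None] - y[None, :]) ** 2          # d_ij = (y_i - y_j)^2
    return dissimilarity_reduction(d, mask)

def dissimilarity_reduction(d, mask):
    """Same criterion with (y_i - y_j)^2 replaced by an arbitrary dissimilarity d[i, j]."""
    n, nt, nf = len(mask), mask.sum(), (~mask).sum()
    parent  = half_pairwise_sum(d) / n**2
    child_t = (nt**2 / n**2) * half_pairwise_sum(d[np.ix_(mask, mask)]) / nt**2
    child_f = (nf**2 / n**2) * half_pairwise_sum(d[np.ix_(~mask, ~mask)]) / nf**2
    return parent - (child_t + child_f)

# Invented continuous targets and a candidate split (first three vs. last three samples).
y = np.array([1.0, 1.2, 0.8, 9.5, 10.1, 9.9])
mask = np.array([True, True, True, False, False, False])
print(variance_reduction(y, mask))   # a large reduction: the split separates the targets well
</syntaxhighlight>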
Used by CART in 1984,<ref name="ll">{{Cite book
|last=Larose
|